CN103700118A - Moving target detection method on basis of pulse coupled neural network - Google Patents

Moving target detection method on basis of pulse coupled neural network

Info

Publication number
CN103700118A
Authority
CN
China
Prior art keywords
histogram
pixel
neural network
background model
moving target
Prior art date
Legal status
Granted
Application number
CN201310731768.1A
Other languages
Chinese (zh)
Other versions
CN103700118B (en)
Inventor
汪晋宽
才溪
韩光
Current Assignee
Northeastern University China
Original Assignee
Northeastern University China
Priority date
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201310731768.1A, granted as patent CN103700118B
Publication of CN103700118A
Application granted
Publication of CN103700118B
Legal status: Expired (fee related)
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a moving target detection method based on a pulse coupled neural network, which comprises the following steps: a, sensing a video image sequence with the pulse coupled neural network and extracting global features of the video images; b, establishing a global feature histogram for each pixel; c, for each pixel, using the global feature histograms of the first K frames to establish an initial background model for the pixel; d, for each pixel, calculating the similarity between the global feature histogram of the current frame and the corresponding histograms in the background model to detect whether the pixel belongs to a moving target; e, for each pixel, updating the background model with the global feature histogram of the current frame. The method draws on the holistic character of human visual perception and uses the pulse coupled neural network to extract global image features, which helps suppress the negative influence of dynamic background disturbance on moving target detection and thereby improves detection accuracy.

Description

Moving Target Detection Method Based on a Pulse-Coupled Neural Network

Technical Field

The invention relates to a moving target detection method based on a pulse-coupled neural network, and belongs to the technical field of video image processing.

Background

In intelligent video surveillance systems, moving target detection is the basis for various later processing stages such as target tracking, target recognition, and behavior analysis. To segment moving target regions accurately and effectively under diverse monitoring conditions, it is important to develop moving target detection methods that are robust to complex dynamic scenes (e.g., illumination changes and background disturbance).

To cope with the difficulties that complex dynamic scenes pose for moving target detection, current methods mainly describe the background model of each pixel with local neighborhood features, extracting local region features to improve robustness to illumination changes and background disturbance within that region. The limitation of this approach is that it does not exploit the global features of the image: it suppresses background disturbance effectively only when the dynamic background motion stays within the local region, and it is unsuitable for dynamic backgrounds with violent motion. ("Within the local region" here means within the area over which the local features are computed. Qualitatively, when the dynamic background moves inside that area, the computed local features tend to remain stable; when the motion exceeds that area, the local features become unstable, and the background model built from them is less robust to violently moving dynamic backgrounds.) There is therefore an urgent need for a moving target detection method that works in dynamic background environments with violent motion while improving detection accuracy and robustness.

Summary of the Invention

The purpose of the present invention is to provide a moving target detection method based on a pulse-coupled neural network that effectively solves the problems of the prior art, in particular the fact that existing methods suppress background disturbance effectively only when the dynamic background motion is confined to a local region and are therefore unsuitable for dynamic background environments with violent motion.

To solve the above technical problems, the present invention adopts the following technical solution. The moving target detection method based on a pulse-coupled neural network comprises the following steps:

a. Perceive the video image sequence with a pulse-coupled neural network and extract the global features of the video images;

b. Establish a global feature histogram for each pixel;

c. For each pixel, use the global feature histograms of the first K frames as the pixel's initial background model;

d. For each pixel, compute the similarity between the global feature histogram of the current frame and the corresponding histograms in the background model, and detect whether the pixel belongs to a moving target;

e. For each pixel, update its background model with the global feature histogram of the current frame.

Preferably, step a specifically comprises the following. Each neuron of the pulse-coupled neural network corresponds to one pixel of the video image. For the neuron at position (i, j), influenced at time n by the external stimulus S_ij and by the pulses {Y_kl} emitted at time n-1 by the other neurons in its k×l neighborhood, the feedback input F_ij, linking input L_ij, internal activity U_ij, dynamic membrane-potential threshold θ_ij, pulse output Y_ij, and the extracted global feature Q_ij of the video image are, respectively:

F_ij(n) = S_ij;

L_ij(n) = 1 if Σ_{kl} Y_kl(n-1) > 0, and 0 otherwise;

U_ij(n) = F_ij(n) · (1 + β·L_ij(n));

θ_ij(n) = e^(-α_θ) · θ_ij(n-1) + V_θ · Y_ij(n);

Y_ij(n) = 1 if U_ij(n) > θ_ij(n-1), and 0 otherwise;

Q_ij(n) = Q_ij(n-1) + (iter - n) · Y_ij(n);

where iter is the total number of perception iterations of the pulse-coupled neural network; S_ij is the gray value, in any one band of the image, of the pixel perceived by the neuron at (i, j); α_θ is the decay time constant of the dynamic threshold θ_ij; V_θ is the threshold amplification coefficient; and β is the linking strength.
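As an illustration, the six update equations of step a can be sketched in NumPy. This is a hypothetical implementation: the parameter values (iters, beta, alpha_theta, V_theta) and the 3×3 neighborhood with wrap-around borders are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def pcnn_features(S, iters=8, beta=0.2, alpha_theta=0.2, V_theta=20.0):
    """Sketch of step a for one image band: iterate the PCNN equations and
    accumulate the feature Q_ij(n) = Q_ij(n-1) + (iter - n) * Y_ij(n)."""
    S = S.astype(np.float64)                 # external stimulus S_ij
    Y = np.zeros_like(S)                     # pulse output Y_ij
    theta = np.full_like(S, V_theta)         # dynamic threshold theta_ij
    Q = np.zeros_like(S)                     # accumulated global feature Q_ij

    def neighbour_sum(A):
        # sum of the 8 neighbours (toroidal wrap at the borders, for brevity)
        return sum(np.roll(np.roll(A, di, 0), dj, 1)
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))

    for n in range(1, iters + 1):
        L = (neighbour_sum(Y) > 0).astype(np.float64)  # linking input L_ij
        F = S                                # feedback input F_ij(n) = S_ij
        U = F * (1.0 + beta * L)             # internal activity U_ij
        Y = (U > theta).astype(np.float64)   # fire against theta_ij(n-1)
        theta = np.exp(-alpha_theta) * theta + V_theta * Y
        Q += (iters - n) * Y                 # earlier firing weighs more
    return Q
```

On a constant image every neuron fires in lockstep and Q is uniform; on real images the coupling term β·L_ij lets spatially coherent regions fire together, which is what makes Q a global rather than purely local feature.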

Preferably, step b specifically comprises the following. According to the characteristics of the R, G, and B bands of the color image, the pulse-coupled neural network perceives each of the three band images separately; after iter iterations this yields the band features Q_R, Q_G, and Q_B. For each pixel, the feature histograms H_R, H_G, and H_B of the three bands are computed over its neighborhood and concatenated to form the pixel's global feature histogram H, enabling moving target detection on color images. Concatenating the three band histograms reduces the dimensionality and sparsity of the feature representation, which helps reduce the storage space of the background model.
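Step b can be sketched as follows under assumed parameters; the neighborhood radius `win` and bin count `bins` are illustrative, since the patent does not fix them:

```python
import numpy as np

def global_feature_histogram(Q_R, Q_G, Q_B, i, j, win=4, bins=8):
    """Sketch of step b for one pixel (i, j): histogram the PCNN features of
    each band over the pixel's neighbourhood and concatenate H_R|H_G|H_B."""
    qmax = max(float(Q.max()) for Q in (Q_R, Q_G, Q_B)) + 1e-9
    h, w = Q_R.shape
    i0, i1 = max(0, i - win), min(h, i + win + 1)
    j0, j1 = max(0, j - win), min(w, j + win + 1)
    parts = []
    for Q in (Q_R, Q_G, Q_B):
        hist, _ = np.histogram(Q[i0:i1, j0:j1], bins=bins, range=(0.0, qmax))
        parts.append(hist / hist.sum())      # normalise each band histogram
    return np.concatenate(parts)             # length 3*bins, not bins**3
```

Concatenation (series connection) gives a descriptor of length 3·bins rather than the bins³ of a joint three-dimensional histogram, which is the dimensionality and storage advantage the text mentions.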

Specifically, in step c the initial background model is established as follows: for each pixel, the first K frames are processed through steps a and b to obtain K global feature histograms {H_1, …, H_K}, and each global feature histogram is assigned a weight w_k.
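Step c then amounts to stacking the first K histograms and assigning weights. Equal initial weights summing to 1 are an assumption made here for illustration; the patent leaves the initial values of w_k unspecified.

```python
import numpy as np

def init_background_model(first_K_histograms):
    """Sketch of step c: the K global feature histograms of the first K
    frames form the initial per-pixel background model {H_1, ..., H_K}."""
    hists = np.stack([np.asarray(h, dtype=np.float64)
                      for h in first_K_histograms])   # shape (K, N)
    K = hists.shape[0]
    weights = np.full(K, 1.0 / K)   # assumed: equal weights w_k, summing to 1
    return hists, weights
```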

Step d of the present invention specifically comprises: for each pixel, if the similarity between the global feature histogram H_c of the current frame and any of the first B histograms of the background model that correspond to the background exceeds the threshold T_s, the pixel is background; otherwise the pixel is foreground, i.e., a moving target.
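The decision rule of step d can be sketched as follows; B and the threshold are tunable, and the values below are illustrative:

```python
import numpy as np

def is_moving_target(Hc, hists, weights, B=3, Ts=0.7):
    """Sketch of step d: the pixel is background if Hc matches (histogram
    intersection > Ts) any of the B highest-weight model histograms."""
    top = np.argsort(weights)[::-1][:B]            # the B background histograms
    sims = np.minimum(hists[top], Hc).sum(axis=1)  # Sim(Hc, H_i) per histogram
    return not np.any(sims > Ts)                   # True -> moving target
```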

Updating the background model in step e of the present invention specifically comprises:

e1. For each pixel, extract the global feature histogram H_c of the current frame to be detected using steps a and b, and compute the similarity between H_c and each of the K histograms H_i in the background model by histogram intersection:

Sim(H_c, H_i) = Σ_{n=0}^{N-1} min(H_cn, H_in);

where N is the number of histogram bins and i = 1, 2, …, K.

e2. Let the similarity threshold be T_S. If the similarities between H_c and all K histograms in {H_1, …, H_K} are below T_S, the current histogram cannot be matched to the background model, and H_c replaces the histogram with the smallest weight in the background model. If the similarities between H_c and several histograms in {H_1, …, H_K} exceed T_S, the background-model histogram H_m with the largest similarity is selected as the best match, and H_c updates H_m and its corresponding weight w_m as follows:

H_m = α_h·H_c + (1 - α_h)·H_m;

w_m = w_m + α_w·(1 - w_m);

where α_h is the histogram learning factor and α_w is the weight learning factor. The weights w_i of the other histograms in the background model are adjusted accordingly:

w_i = (1 - α_w)·w_i, for i ≠ m;

e3. Reorder the K histograms of the background model by weight, from largest to smallest, and take the first B histograms as corresponding to the background, where B < K.
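Steps e1 to e3 can be sketched together; the learning factors below are illustrative, since the patent leaves their values open:

```python
import numpy as np

def update_background_model(Hc, hists, weights, Ts=0.7,
                            alpha_h=0.05, alpha_w=0.05):
    """Sketch of steps e1-e3: match Hc against the model by histogram
    intersection, then either replace the lowest-weight histogram or blend
    Hc into the best match H_m and re-weight."""
    sims = np.minimum(hists, Hc).sum(axis=1)       # Sim(Hc, H_i), i = 1..K
    if np.all(sims < Ts):
        hists[np.argmin(weights)] = Hc             # no match: replace weakest
    else:
        m = int(np.argmax(sims))                   # best match H_m
        hists[m] = alpha_h * Hc + (1 - alpha_h) * hists[m]
        weights *= (1 - alpha_w)                   # w_i <- (1 - a_w) w_i, i != m
        weights[m] += alpha_w                      # w_m <- w_m + a_w (1 - w_m)
    order = np.argsort(weights)[::-1]              # e3: sort by weight, descending
    return hists[order], weights[order]
```

Note the combined weight update: multiplying every weight by (1 - α_w) and then adding α_w to the matched entry reproduces both formulas at once and keeps the weights summing to 1.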

In the present invention, the first K frames are used to initialize the background model, and moving target detection starts from frame K+1. Background model updating also starts from frame K+1 and is performed for every subsequent frame, each time using only the current frame.
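Putting the steps together, the per-pixel processing over a stream of frames looks like the following self-contained sketch; all thresholds and learning rates are again illustrative:

```python
import numpy as np

def process_pixel_stream(histograms, K=5, B=3, Ts=0.7,
                         alpha_h=0.05, alpha_w=0.05):
    """For one pixel: frames 1..K initialise the model; from frame K+1 on,
    each frame is classified and then used (alone) to update the model."""
    model, weights, labels = [], None, []
    for t, Hc in enumerate(histograms):
        Hc = np.asarray(Hc, dtype=np.float64)
        if t < K:                              # initialisation phase
            model.append(Hc)
            if t == K - 1:
                model = np.stack(model)
                weights = np.full(K, 1.0 / K)  # assumed equal initial weights
            continue
        sims = np.minimum(model, Hc).sum(axis=1)
        top = np.argsort(weights)[::-1][:B]
        labels.append(not np.any(sims[top] > Ts))   # True -> moving target
        if np.all(sims < Ts):                  # update with current frame only
            model[np.argmin(weights)] = Hc
        else:
            m = int(np.argmax(sims))
            model[m] = alpha_h * Hc + (1 - alpha_h) * model[m]
            weights *= (1 - alpha_w)
            weights[m] += alpha_w
    return labels
```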

Compared with the prior art, the present invention draws on the mature human visual perception system and on the holistic character of human visual perception (the human visual system usually recognizes an observed object as a whole before recognizing its parts, and this holistic recognition is not a simple sum of the recognition of the individual components; every part of the observed object stimulates the visual system, and the stimuli interact, influence, and set off one another). A pulse-coupled neural network is used to simulate the human visual nervous system: the global feature information of the image is extracted from the network's pulse response to the color image, and a background model based on these global features is then established, which improves the robustness of moving target detection.

In addition, by drawing on the holistic character of human visual perception and using the pulse-coupled neural network to extract global image features, the invention helps suppress the adverse effect of dynamic background disturbance on moving target detection, thereby improving the detection accuracy and F-measure. Comparative experiments show that, relative to moving target detection based on local features, the detection results of the present method suppress dynamic background disturbance better. Extensive statistics show that, with pre-processing and post-processing removed and under the same experimental conditions, an existing moving target detection method achieves an accuracy of 78.3% and an F-measure of 84.9%, while the present invention achieves an accuracy of 82.6% and an F-measure of 86.7%, an improvement of 4.3% in accuracy and 1.8% in F-measure. Furthermore, according to the characteristics of the R, G, and B bands of the color image, the invention perceives the three band images separately with the pulse-coupled neural network, obtaining the band features Q_R, Q_G, and Q_B after iter iterations, so that moving target detection can be performed on color images; the feature histograms of the R, G, and B bands are concatenated, which reduces the dimensionality and sparsity of the feature representation and helps reduce the storage space of the background model.

Table 1 compares the detection performance of the moving target detection method of the present invention with the classic background subtraction method based on a Gaussian mixture model (GMM), an improved Gaussian-mixture background subtraction method (IGMM), and background subtraction based on local binary pattern texture (LBP):

Table 1

Method                     Accuracy    F-measure
Method of the invention    0.8260      0.8673
GMM                        0.6578      0.7733
IGMM                       0.7827      0.8485
LBP                        0.6957      0.7823

The difficulty of the present invention lies in determining which method, and in what manner, can extract information that reflects the global features of the image.

Brief Description of the Drawings

Fig. 1 is one frame from a group of video images;

Fig. 2 shows the moving target region detected from Fig. 1 with the method of the present invention;

Fig. 3 is a flow chart of the method of the present invention.

The present invention is further described below in conjunction with the accompanying drawings and specific embodiments.

Detailed Description of the Embodiments

Embodiment 1 of the present invention: a moving target detection method based on a pulse-coupled neural network, as shown in Fig. 3, comprises the following steps:

a. Perceive the video image sequence with a pulse-coupled neural network and extract the global features of the video images, where one neuron of the network corresponds to one pixel of the video image. For the neuron at position (i, j), influenced at time n by the external stimulus S_ij and by the pulses {Y_kl} emitted at time n-1 by the other neurons in its k×l neighborhood, the feedback input F_ij, linking input L_ij, internal activity U_ij, dynamic membrane-potential threshold θ_ij, pulse output Y_ij, and the extracted global feature Q_ij of the video image are, respectively:

F_ij(n) = S_ij;

L_ij(n) = 1 if Σ_{kl} Y_kl(n-1) > 0, and 0 otherwise;

U_ij(n) = F_ij(n) · (1 + β·L_ij(n));

θ_ij(n) = e^(-α_θ) · θ_ij(n-1) + V_θ · Y_ij(n);

Y_ij(n) = 1 if U_ij(n) > θ_ij(n-1), and 0 otherwise;

Q_ij(n) = Q_ij(n-1) + (iter - n) · Y_ij(n);

where iter is the total number of perception iterations of the pulse-coupled neural network; S_ij is the gray value, in any one band of the image, of the pixel perceived by the neuron at (i, j); α_θ is the decay time constant of the dynamic threshold θ_ij; V_θ is the threshold amplification coefficient, which determines how much the threshold rises when a neuron fires and thus plays an important role in regulating the firing period, so it is usually set to a large value; and β is the linking strength, which determines the contribution of the linking input L_ij to the internal activity U_ij. When β ≠ 0 the PCNN neurons are coupled, and the firing of one neuron contributes to the other neurons within its linking range; the linking strength is usually chosen empirically and held constant;

b. According to the characteristics of the R, G, and B bands of the color image, establish the global feature histogram of each pixel: the pulse-coupled neural network perceives the three band images separately and, after iter iterations, yields the band features Q_R, Q_G, and Q_B; for each pixel, the feature histograms H_R, H_G, and H_B of the three bands are computed over its neighborhood and concatenated to form the pixel's global feature histogram H;

c. For each pixel, use the global feature histograms of the first K frames as the pixel's initial background model. The initial background model is established as follows: for each pixel, the first K frames are processed through steps a and b to obtain K global feature histograms {H_1, …, H_K}, and each global feature histogram is assigned a weight w_k;

d. For each pixel, compute the similarity between the global feature histogram of the current frame and the corresponding histograms in the background model to detect whether the pixel belongs to a moving target. Specifically, if the similarity between the global feature histogram H_c of the current frame and any of the first B background-model histograms that correspond to the background exceeds the threshold T_s, the pixel is background; otherwise the pixel is foreground, i.e., a moving target;

e. For each pixel, update its background model with the global feature histogram of the current frame. Updating the background model specifically comprises:

e1. For each pixel, extract the global feature histogram H_c of the current frame to be detected using steps a and b, and compute the similarity between H_c and each of the K histograms H_i (i = 1, 2, …, K) in the background model by histogram intersection:

Sim(H_c, H_i) = Σ_{n=0}^{N-1} min(H_cn, H_in);

where N is the number of histogram bins;

e2. Let the similarity threshold be T_S. If the similarities between H_c and all K histograms in {H_1, …, H_K} are below T_S, the current histogram cannot be matched to the background model, and H_c replaces the histogram with the smallest weight in the background model. If the similarities between H_c and several histograms in {H_1, …, H_K} exceed T_S, the background-model histogram H_m with the largest similarity is selected as the best match, and H_c updates H_m and its corresponding weight w_m as follows:

H_m = α_h·H_c + (1 - α_h)·H_m;

w_m = w_m + α_w·(1 - w_m);

where α_h is the histogram learning factor and α_w is the weight learning factor. The weights w_i of the other histograms in the background model are adjusted accordingly:

w_i = (1 - α_w)·w_i, for i ≠ m;

e3. After the background-model histograms and their weights have been updated as above, reorder the K histograms by weight from largest to smallest and take the first B histograms as corresponding to the background.

The above method realizes moving target detection on color images.

Embodiment 2: a moving target detection method based on a pulse-coupled neural network, as shown in Fig. 3, comprises the following steps:

a. Perceive the video image sequence with a pulse-coupled neural network and extract the global features of the video images, where one neuron of the network corresponds to one pixel of the video image. For the neuron at position (i, j), influenced at time n by the external stimulus S_ij and by the pulses {Y_kl} emitted at time n-1 by the other neurons in its k×l neighborhood, the feedback input F_ij, linking input L_ij, internal activity U_ij, dynamic membrane-potential threshold θ_ij, pulse output Y_ij, and the extracted global feature Q_ij of the video image are, respectively:

F_ij(n) = S_ij;

L_ij(n) = 1 if Σ_{kl} Y_kl(n-1) > 0, and 0 otherwise;

U_ij(n) = F_ij(n) · (1 + β·L_ij(n));

θ_ij(n) = e^(-α_θ) · θ_ij(n-1) + V_θ · Y_ij(n);

Y_ij(n) = 1 if U_ij(n) > θ_ij(n-1), and 0 otherwise;

Q_ij(n) = Q_ij(n-1) + (iter - n) · Y_ij(n);

where iter is the total number of perception iterations of the pulse-coupled neural network; S_ij is the gray value of the grayscale-image pixel perceived by the neuron at (i, j); α_θ is the decay time constant of the dynamic threshold θ_ij; V_θ is the threshold amplification coefficient; and β is the linking strength;

b. Establish the global feature histogram H of each pixel;

c. For each pixel, use the global feature histograms of the first K frames as the pixel's initial background model, where the initial background model is established as follows: for each pixel, the first K frames are processed through steps a and b to obtain K global feature histograms {H_1, …, H_K}, and each global feature histogram is assigned a weight w_k;

d. For each pixel, compute the similarity between the global feature histogram of the current frame (starting from frame K+1) and the corresponding histograms in the background model to detect whether the pixel is a moving target. If the similarity between the global feature histogram H_c of the current frame and any of the first B background-model histograms corresponding to the background exceeds the threshold T_s, the pixel is background; otherwise it is foreground, i.e., a moving target;

e. For each pixel, update its background model with the global feature histogram of the current frame. Updating the background model specifically comprises:

e1. For each pixel, extract the global feature histogram H_c of the current frame to be detected (starting from frame K+1) using steps a and b, and compute the similarity between H_c and each of the K histograms H_i in the background model by histogram intersection:

Sim(H_c, H_i) = Σ_{n=0}^{N-1} min(H_cn, H_in);

where N is the number of histogram bins and i = 1, 2, …, K;

e2. Let the similarity threshold be T_S. If the similarities between H_c and all K histograms in {H_1, …, H_K} are below T_S, the current histogram cannot be matched to the background model, and H_c replaces the histogram with the smallest weight in the background model. If the similarities between H_c and several histograms in {H_1, …, H_K} exceed T_S, the background-model histogram H_m with the largest similarity is selected as the best match, and H_c updates H_m and its corresponding weight w_m as follows:

H_m = α_h·H_c + (1 - α_h)·H_m;

w_m = w_m + α_w·(1 - w_m);

where α_h is the histogram learning factor and α_w is the weight learning factor. The weights w_i of the other histograms in the background model are adjusted accordingly:

w_i = (1 - α_w)·w_i, for i ≠ m;

e3. Reorder the K histograms of the background model by weight from largest to smallest and take the first B histograms as corresponding to the background, where B < K; B may be a fixed value or adapted according to the prior art.

Next, for each pixel, the similarity between the global feature histogram of frame K+2 and the corresponding histograms in the background model is computed to decide whether the pixel is a moving target; the background model is then updated with the global feature histogram of frame K+2, and so on for frame K+3 and beyond.

The above method achieves moving target detection on grey-scale images.

Experimental example: Figure 1 is one frame from a video sequence (in Figure 1, the person's coat 1 is purple, the shirt 2 green, and the hair 3 black-yellow; the branch 4 on the left appears green and the branch 5 on the right black; the wall bricks 6 of the building are khaki, with sunlight 7 falling on them through the branches; the sky 8 is blue). The surveillance background of this sequence is dynamic and changeable because it contains branches swaying in the wind, which makes moving target detection difficult.

Applying the method of the present invention to Figure 1 for moving target detection comprises the following steps:

(1-1) Extract the global features of the image with a pulse coupled neural network: a 120×160 colour video frame such as Figure 1 is perceived by a network of 120×160 pulse-coupled neurons. For the neuron at (i, j), affected at time n by the external stimulus S_ij and by the pulses {Y_kl} emitted at time n-1 by the other neurons in its k×l neighbourhood, the feedback input F_ij, linking input L_ij, internal activity U_ij, dynamic membrane-potential threshold θ_ij, pulse output Y_ij of the pulse generator, and the feature Q_ij extracted by the invention can be described as:

F_ij(n) = S_ij    (1)

L_ij(n) = 1 if Σ_kl Y_kl(n-1) > 0, and 0 otherwise    (2)

U_ij(n) = F_ij(n) · (1 + β L_ij(n))    (3)

Y_ij(n) = 1 if U_ij(n) > θ_ij(n-1), and 0 otherwise    (4)

θ_ij(n) = e^{-α_θ} θ_ij(n-1) + V_θ Y_ij(n)    (5)

Q_ij(n) = Q_ij(n-1) + (iter - n) Y_ij(n)    (6)

where iter = 16 is the total number of PCNN iterations; S_ij is set to the grey value of the pixel, in any one band, perceived by the neuron at (i, j); α_θ = 0.5 is the decay time constant of the dynamic threshold θ_ij; V_θ = 100 is the threshold amplification coefficient, which determines how much the threshold rises when a neuron fires and thus strongly regulates the firing period, so it is usually set large; β = 0.2 is the linking strength, which determines the contribution of the linking input L_ij to the internal activity U_ij. When β ≠ 0 the PCNN neurons are coupled, so the firing of one neuron contributes to the other neurons within its linking range; the linking strength is usually chosen empirically and kept constant.
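A minimal sketch of one band's PCNN iteration, equations (1)-(6), might look as follows. The 3×3 linking neighbourhood and the initial threshold (set to the maximum stimulus, so the brightest pixels fire first) are our assumptions; the patent does not fix either choice:

```python
import numpy as np

def pcnn_features(S, iters=16, beta=0.2, alpha_theta=0.5, V_theta=100.0):
    """PCNN feature map Q for one band, per equations (1)-(6).

    S: 2-D float array of pixel grey values (the stimulus; eq. (1) sets F = S).
    Assumptions: 3x3 linking neighbourhood, initial threshold = max(S).
    """
    H, W = S.shape
    Y = np.zeros((H, W))                       # pulse output Y_ij, starts silent
    theta = np.full((H, W), float(S.max()))    # dynamic threshold (initialisation assumed)
    Q = np.zeros((H, W))
    for n in range(1, iters + 1):
        F = S                                                   # eq. (1)
        # sum of the 8 neighbours' pulses from iteration n-1
        pad = np.pad(Y, 1)
        nbr = sum(pad[a:a + H, b:b + W]
                  for a in range(3) for b in range(3)) - Y
        L = (nbr > 0).astype(float)                             # eq. (2)
        U = F * (1.0 + beta * L)                                # eq. (3)
        Y = (U > theta).astype(float)                           # eq. (4), uses theta(n-1)
        theta = np.exp(-alpha_theta) * theta + V_theta * Y      # eq. (5)
        Q = Q + (iters - n) * Y                                 # eq. (6)
    return Q
```

Because Q accumulates (iter - n) at each firing, pixels that fire early and often, i.e. bright pixels and pixels coupled to firing neighbours, receive larger feature values, which is what makes Q usable as a texture-like global feature.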

(1-2) Build the global feature histogram of each pixel: the pulse coupled neural network perceives the R, G and B bands of the colour image separately, yielding after iter iterations the band features Q_R, Q_G and Q_B. For each pixel, the feature histograms H_R, H_G and H_B of the three bands are computed over its neighbourhood and concatenated to form the pixel's global feature histogram H. The global feature histograms over the statistics region serve as the background model.
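A per-pixel histogram of a feature map over a neighbourhood, with the three bands concatenated, could be computed as below. The neighbourhood radius, bin count and normalisation are illustrative choices of ours, not prescribed by the patent:

```python
import numpy as np

def global_feature_histogram(Q_R, Q_G, Q_B, i, j, radius=4, bins=16):
    """Concatenated per-pixel histogram H = [H_R | H_G | H_B].

    Q_R, Q_G, Q_B: PCNN feature maps for the three colour bands;
    (i, j): the pixel; radius, bins: assumed neighbourhood/bin settings.
    """
    H = []
    for Q in (Q_R, Q_G, Q_B):
        # clip the neighbourhood window at the image border
        patch = Q[max(0, i - radius):i + radius + 1,
                  max(0, j - radius):j + radius + 1]
        h, _ = np.histogram(patch, bins=bins, range=(0.0, float(Q.max()) + 1e-9))
        H.append(h / h.sum())  # normalize so the intersection lies in [0, 1]
    return np.concatenate(H)
```

Normalizing each band histogram before concatenation keeps the similarity in step e1 comparable across pixels near the image border, whose clipped neighbourhoods contain fewer samples.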

(1-3) Background model initialisation: for each pixel, apply steps (1-1) and (1-2) to the first K = 4 frames to extract K global feature histograms {H_1, ..., H_K}, and assign each histogram a weight w_k = 1/K.

(1-4) Moving target detection: for each pixel, if the similarity between the current frame's global feature histogram H_c and any of the first B background histograms in the background model is greater than the threshold T_S, the pixel is background; otherwise the pixel is foreground, i.e. a moving target.
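The decision rule of step (1-4) can be sketched as follows; the names are ours, and we assume the model is queried with its current (unsorted) weights:

```python
import numpy as np

def is_moving_target(h_c, model, weights, B=2, T_S=0.5):
    """Step (1-4): the pixel is background if h_c matches any of the B
    highest-weight model histograms with intersection similarity > T_S."""
    order = np.argsort(weights)[::-1][:B]   # indices of the B background histograms
    for idx in order:
        if float(np.minimum(h_c, model[idx]).sum()) > T_S:
            return False                    # matches the background
    return True                             # foreground, i.e. a moving target
```

Only the first B histograms count as background, so a histogram that recently replaced a low-weight slot (a newcomer that may itself be a moving object) does not immediately absorb foreground pixels.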

(1-5) Background model update: for each pixel, apply steps (1-1) and (1-2) to the current frame to be detected to extract its global feature histogram H_c, and use the histogram intersection to compute the similarity between H_c and the K histograms H_i (i = 1, ..., K) in the background model:

Sim(H_c, H_i) = Σ_{n=0}^{N-1} min(H_{cn}, H_{in})    (7)

where N is the number of histogram bins. Let the similarity threshold be T_S = 0.5. If the similarities between H_c and all K histograms in {H_1, ..., H_K} are below T_S, the current histogram is judged not to match the background model, and H_c replaces the lowest-weight histogram in the model. If the similarity between H_c and one or more histograms in the background model {H_1, ..., H_K} exceeds T_S, the model histogram with the highest similarity (say H_m) is selected as the best match, and H_m and its weight w_m are updated by H_c, namely:

H_m = α_h H_c + (1 - α_h) H_m    (8)

w_m = w_m + α_w (1 - w_m)    (9)

where α_h = 0.01 is the histogram learning factor and α_w = 0.01 the weight learning factor. At the same time, the weights w_i (i ≠ m) of the other model histograms are adjusted accordingly, namely:

w_i = (1 - α_w) w_i    (10)

After the model histograms and their weights have been updated, the K histograms in the background model are re-sorted by weight, and the first B = 2 histograms are taken to represent the background.

Figure 2 shows the moving target detection result obtained by the above steps without any post-processing. Compared with other existing detection methods, the detection method of the present invention yields fewer false targets and a higher detection accuracy for moving targets.

Claims (6)

1. A moving target detection method based on a pulse coupled neural network, characterised by comprising the following steps:
a. perceiving a video image sequence with a pulse coupled neural network and extracting the global features of the video images;
b. building a global feature histogram for each pixel;
c. for each pixel, using the global feature histograms of the first K frames as the initial background model of the pixel;
d. for each pixel, computing the similarity between the global feature histogram of the current frame and the corresponding histograms in the background model to detect whether the pixel is a moving target;
e. for each pixel, updating its background model with the global feature histogram of the current frame.
2. The moving target detection method based on a pulse coupled neural network according to claim 1, characterised in that step a specifically comprises: each neuron of the pulse coupled neural network corresponds to one pixel of the video image; for the neuron at (i, j), affected at time n by the external stimulus S_ij and by the pulses {Y_kl} emitted at time n-1 by the other neurons in its k×l neighbourhood, the feedback input F_ij, linking input L_ij, internal activity U_ij, dynamic membrane-potential threshold θ_ij, pulse output Y_ij of the pulse generator, and the extracted global feature Q_ij of the video image are respectively:
F_ij(n) = S_ij;
L_ij(n) = 1 if Σ_kl Y_kl(n-1) > 0, and 0 otherwise;
U_ij(n) = F_ij(n) · (1 + β L_ij(n));
θ_ij(n) = e^{-α_θ} θ_ij(n-1) + V_θ Y_ij(n);
Y_ij(n) = 1 if U_ij(n) > θ_ij(n-1), and 0 otherwise;
Q_ij(n) = Q_ij(n-1) + (iter - n) Y_ij(n);
wherein iter is the total number of iterations of the pulse coupled neural network; S_ij is the grey value of the pixel, in any one band, perceived by the neuron at (i, j); α_θ is the decay time constant of the dynamic threshold θ_ij; V_θ is the threshold amplification coefficient; β is the linking strength.
3. The moving target detection method based on a pulse coupled neural network according to claim 2, characterised in that step b comprises: according to the characteristics of the R, G and B band images of the colour image, perceiving the three bands separately with the pulse coupled neural network, obtaining after iter iterations the band features Q_R, Q_G and Q_B respectively; for each pixel, computing the feature histograms H_R, H_G and H_B of the three bands over its neighbourhood, and concatenating these histograms into the pixel's global feature histogram H.
4. The moving target detection method based on a pulse coupled neural network according to any one of claims 1 to 3, characterised in that the building of the initial background model in step c specifically comprises: for each pixel, applying steps a and b to the first K frames to extract K global feature histograms {H_1, ..., H_K}, and assigning each global feature histogram a weight w_k.
5. The moving target detection method based on a pulse coupled neural network according to claim 4, characterised in that step d specifically comprises: for each pixel, if the similarity between the global feature histogram H_c of the current frame and any of the first B histograms corresponding to the background in the background model is greater than the threshold T_S, the pixel is background; otherwise the pixel is foreground, i.e. a moving target.
6. The moving target detection method based on a pulse coupled neural network according to claim 5, characterised in that the background model update in step e specifically comprises:
e1. for each pixel, applying steps a and b to the current frame to be detected to extract its global feature histogram H_c, and using the histogram intersection to compute the similarity between H_c and the K histograms H_i in the background model:
Sim(H_c, H_i) = Σ_{n=0}^{N-1} min(H_{cn}, H_{in})
wherein N is the number of histogram bins and i = 1, 2, ..., K;
e2. letting the similarity threshold be T_S: if the similarities between H_c and all K histograms in {H_1, ..., H_K} are below T_S, the current histogram cannot be matched to the background model, and H_c replaces the lowest-weight histogram in the background model; if the similarity between H_c and one or more histograms in the background model {H_1, ..., H_K} exceeds T_S, the model histogram with the highest similarity, H_m, is selected as the best match, and H_m and its corresponding weight w_m are updated by H_c, that is:
H_m = α_h H_c + (1 - α_h) H_m;
w_m = w_m + α_w (1 - w_m);
wherein α_h is the histogram learning factor and α_w the weight learning factor; at the same time the weights w_i of the other histograms in the background model are adjusted accordingly, that is:
w_i = (1 - α_w) w_i, wherein i ≠ m;
e3. re-sorting the K histograms in the background model in descending order of weight, and selecting the first B histograms as corresponding to the background.
CN201310731768.1A 2013-12-27 2013-12-27 Based on the moving target detection method of pulse coupled neural network Expired - Fee Related CN103700118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310731768.1A CN103700118B (en) 2013-12-27 2013-12-27 Based on the moving target detection method of pulse coupled neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310731768.1A CN103700118B (en) 2013-12-27 2013-12-27 Based on the moving target detection method of pulse coupled neural network

Publications (2)

Publication Number Publication Date
CN103700118A true CN103700118A (en) 2014-04-02
CN103700118B CN103700118B (en) 2016-06-01

Family

ID=50361636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310731768.1A Expired - Fee Related CN103700118B (en) 2013-12-27 2013-12-27 Based on the moving target detection method of pulse coupled neural network

Country Status (1)

Country Link
CN (1) CN103700118B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018072066A1 (en) * 2016-10-18 2018-04-26 中国科学院深圳先进技术研究院 Pulse-based neural circuit
CN108629254A (en) * 2017-03-24 2018-10-09 杭州海康威视数字技术股份有限公司 A kind of detection method and device of moving target
US10198655B2 (en) 2017-01-24 2019-02-05 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
CN111209771A (en) * 2018-11-21 2020-05-29 晶睿通讯股份有限公司 Neural-like network identification efficiency improvement method and related identification efficiency improvement device
CN113723594A (en) * 2021-08-31 2021-11-30 绍兴市北大信息技术科创中心 Impulse neural network target identification method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DANSONG CHENG ET AL: "Multi-object Segmentation Based on Improved Pulse Coupled Neural Network", 《COMPUTER AND INFORMATION SCIENCE》 *
HAIQING WANG ET AL: "A Simplified Pulse-coupled Neural Network for Cucumber Image Segmentation", 《2010 INTERNATIONAL CONFERENCE ON COMPUTATIONAL AND INFORMATION SCIENCES》 *
MEIHONG SHI ET AL: "A Simplified pulse-coupled neural network for adaptive segmentation of fabric defects", 《MACHINE VISION AND APPLICATIONS》 *
刘映杰 等: "基于多阈值PCNN的运动目标检测算法", 《计算机应用》 *
惠飞 等: "基于脉冲耦合神经网络的目标特征抽取方法", 《吉林大学学报(信息科学版)》 *
王慧斌 等: "基于脉冲耦合神经网络融合的压缩域运动目标分割方法", 《光子学报》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018072066A1 (en) * 2016-10-18 2018-04-26 中国科学院深圳先进技术研究院 Pulse-based neural circuit
US10198655B2 (en) 2017-01-24 2019-02-05 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
US10452946B2 (en) 2017-01-24 2019-10-22 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
US11062167B2 (en) 2017-01-24 2021-07-13 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
CN108629254A (en) * 2017-03-24 2018-10-09 杭州海康威视数字技术股份有限公司 A kind of detection method and device of moving target
CN111209771A (en) * 2018-11-21 2020-05-29 晶睿通讯股份有限公司 Neural-like network identification efficiency improvement method and related identification efficiency improvement device
CN113723594A (en) * 2021-08-31 2021-11-30 绍兴市北大信息技术科创中心 Impulse neural network target identification method
CN113723594B (en) * 2021-08-31 2023-12-05 绍兴市北大信息技术科创中心 Pulse neural network target identification method

Also Published As

Publication number Publication date
CN103700118B (en) 2016-06-01

Similar Documents

Publication Publication Date Title
CN111738908B (en) Scene conversion method and system for generating countermeasure network by combining instance segmentation and circulation
Tao et al. Smoke detection based on deep convolutional neural networks
CN103700114B (en) A kind of complex background modeling method based on variable Gaussian mixture number
CN106570474B (en) A kind of micro- expression recognition method based on 3D convolutional neural networks
Zhao et al. SVM based forest fire detection using static and dynamic features
CN103700118B (en) Based on the moving target detection method of pulse coupled neural network
CN107909005A Person's gesture recognition method under monitoring scene based on deep learning
CN107273905B (en) Target active contour tracking method combined with motion information
CN110348376A A kind of pedestrian's real-time detection method neural network based
CN106803063A (en) A kind of metric learning method that pedestrian recognizes again
CN107016357A (en) A kind of video pedestrian detection method based on time-domain convolutional neural networks
CN110378288A (en) A kind of multistage spatiotemporal motion object detection method based on deep learning
CN106874894A (en) A kind of human body target detection method based on the full convolutional neural networks in region
CN102521616B (en) Pedestrian detection method on basis of sparse representation
CN107229929A (en) A kind of license plate locating method based on R CNN
CN103839065A (en) Extraction method for dynamic crowd gathering characteristics
CN111291696A (en) A Recognition Method of Handwritten Dongba Character Based on Convolutional Neural Network
CN107220611A (en) A kind of space-time feature extracting method based on deep neural network
CN104103082A (en) Image saliency detection method based on region description and priori knowledge
CN103164693B (en) A kind of monitor video pedestrian detection matching process
CN107301376B (en) A Pedestrian Detection Method Based on Deep Learning Multi-layer Stimulation
CN107729993A (en) Utilize training sample and the 3D convolutional neural networks construction methods of compromise measurement
CN104537356B (en) Pedestrian identification method and the device again that sequence carries out Gait Recognition are taken turns using Switzerland
CN113128308B (en) Pedestrian detection method, device, equipment and medium in port scene
CN103729862A (en) Self-adaptive threshold value moving object detection method based on codebook background model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160601

CF01 Termination of patent right due to non-payment of annual fee