CN103793920B - Video-based retrograde detection method and system - Google Patents


Publication number
CN103793920B
CN103793920B (application CN201210419365.9A)
Authority
CN
China
Application number
CN201210419365.9A
Other languages
Chinese (zh)
Other versions
CN103793920A (en)
Inventor
王超
刁平
刁一平
全晓臣
蔡巍伟
任烨
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Priority to CN201210419365.9A
Publication of CN103793920A
Application granted
Publication of CN103793920B


Abstract

The present invention relates to the field of image understanding in computer vision, and discloses a video-based retrograde (wrong-way motion) detection method and system. In the invention, corner points are selected in the frame-difference image, optical-flow tracking of these corners is performed on the original images, and motion modes are decomposed from the optical-flow feature vectors to judge whether retrograde motion has occurred. This improves the robustness of retrograde detection and effectively reduces the probability of missed and false alarms in complex scenes. For the motion regions in the video, different motion modes are obtained by an iterative clustering algorithm, which effectively extracts the motion modes corresponding to different moving objects in complex scenes for retrograde analysis. A pyramid-layering scheme copes with large displacements of image blocks, suiting the application scenarios of retrograde detection.

Description

Video-based retrograde detection method and system

TECHNICAL FIELD

[0001] The present invention relates to the field of image understanding in computer vision, and in particular to video-based retrograde detection techniques.

BACKGROUND

[0002] Video-based retrograde detection belongs to the detection of abnormal crowd behavior. In today's society, security has become a matter of growing public concern. Crowd behaviors, including crowd gathering, fighting, and trajectory trends, have gradually become new topics of interest to computer-vision researchers. Crowd behavior analysis offers useful guidance for public-safety work. For example, police departments can use video technology to detect crowd gatherings or fights and prevent riots from escalating; transportation departments can use statistics of crowd trajectories in an area over time to support decisions, such as adding traffic police at congested places or evacuating traffic. Retrograde detection within crowds likewise has its application scenarios: on a large scale, during the pilgrimage to Mecca in Saudi Arabia, some people not following the prescribed route has led to deadly stampedes; on a small scale, on escalators in supermarkets or subways, people moving against the flow not only show a lack of public courtesy but also create potential danger.

[0003] In the prior art, most approaches detect abnormal behavior from the trajectories of human bodies or human joints. Such tracking-based methods often perform well in simple scenes, but in complex scenes with high crowd density and many people, tracking fails, bringing with it a large number of missed and false alarms.

SUMMARY

[0004] The object of the present invention is to provide a video-based retrograde detection method and system that improve the robustness of retrograde detection and effectively reduce the probability of missed and false alarms in complex scenes.

[0005] To solve the above technical problem, an embodiment of the present invention discloses a video-based retrograde detection method, comprising the following steps:

[0006] computing the differences between corresponding pixels of the original images of adjacent video frames, and binarizing the differences to obtain a frame-difference image;

[0007] extracting motion regions from the frame-difference image;

[0008] selecting corner points in the frame-difference image;

[0009] using the selected corner points as initial tracking points of the optical flow, performing corner optical-flow tracking on the original images of the video to obtain optical-flow feature vectors;

[0010] obtaining the correspondence between motion regions and optical-flow feature vectors according to the positional relationship between the initial tracking points of the optical flow and the motion regions;

[0011] for each motion region, decomposing motion modes from the optical-flow feature vectors in that region;

[0012] judging whether retrograde motion has occurred according to the magnitude and direction of the average velocity of each motion mode.

[0013] An embodiment of the present invention also discloses a video-based retrograde detection system, comprising:

[0014] a frame-difference image acquisition unit, configured to compute the differences between corresponding pixels of the original images of adjacent video frames and binarize the differences to obtain a frame-difference image;

[0015] a motion region extraction unit, configured to extract motion regions from the frame-difference image obtained by the frame-difference image acquisition unit;

[0016] a corner selection unit, configured to select corner points in the frame-difference image obtained by the frame-difference image acquisition unit;

[0017] an optical-flow tracking unit, configured to use the corner points selected by the corner selection unit as initial tracking points of the optical flow, and perform corner optical-flow tracking on the original images of the video to obtain optical-flow feature vectors;

[0018] a motion information acquisition unit, configured to obtain the correspondence between motion regions and optical-flow feature vectors according to the positional relationship between the initial tracking points of the optical flow and the motion regions;

[0019] a motion mode decomposition unit, configured to decompose, for each motion region, motion modes from the optical-flow feature vectors in that region;

[0020] a retrograde detection unit, configured to judge whether retrograde motion has occurred according to the magnitude and direction of the average velocity of the motion modes decomposed by the motion mode decomposition unit.

[0021] Compared with the prior art, the main differences and effects of the embodiments of the present invention are as follows:

[0022] Corner points are selected in the frame-difference image, corner optical-flow tracking is performed on the original images, and motion modes are decomposed from the optical-flow feature vectors to judge whether retrograde motion has occurred. This improves the robustness of retrograde detection and effectively reduces the probability of missed and false alarms in complex scenes.

[0023] Furthermore, for the motion regions in the video, different motion modes are obtained by an iterative clustering algorithm, which effectively extracts the motion modes corresponding to different moving objects in complex scenes for retrograde analysis.

[0024] Furthermore, the random sampling of optical-flow feature vectors in RANSAC for solving the geometric transformation model is replaced by traversing all optical-flow feature vectors, which avoids random number generation.

[0025] Furthermore, the pyramid-layering scheme copes with large displacements of image blocks, suiting the application scenarios of retrograde detection.

[0026] Furthermore, the camera height should be moderate: if the camera is too low, a person occupies too large an area and false alarms occur easily; if it is too high, missed alarms occur easily. The focal length should not be too large, otherwise a person appears large in the field of view and false alarms occur easily. The camera angle should be as close to vertical as possible, i.e., the depth of field within the view should not be too large; this reduces the influence of perspective effects and makes threshold setting for the algorithm easier.

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] FIG. 1 is a schematic flowchart of a video-based retrograde detection method according to a first embodiment of the present invention;

[0028] FIG. 2 is a schematic flowchart of a video-based retrograde detection method according to a second embodiment of the present invention;

[0029] FIG. 3 is a schematic structural diagram of a video-based retrograde detection system according to a third embodiment of the present invention.

DETAILED DESCRIPTION

[0030] In the following description, many technical details are set forth so that readers may better understand the present application. However, those of ordinary skill in the art will appreciate that the technical solutions claimed in the claims of this application can be implemented even without these technical details, and with various changes and modifications based on the following embodiments.

[0031] To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

[0032] The first embodiment of the present invention relates to a video-based retrograde detection method. FIG. 1 is a schematic flowchart of this method.

[0033] Specifically, as shown in FIG. 1, the video-based retrograde detection method comprises the following steps:

[0034] In step 101, the differences between corresponding pixels of the original images of adjacent video frames are computed, and the differences are binarized to obtain a frame-difference image.

[0035] The difference between corresponding pixels of two (or more) adjacent frames is computed as shown in formula (1), where D(t) is the difference image, I(t) is the current frame, and I(t−1) is the previous frame. By applying a threshold, moving pixels and then motion regions are obtained. There are in fact many ways to obtain motion regions, including background modeling and optical-flow methods. This invention adopts the frame-difference method because the proposed method and system are applied to complex scenes, where background modeling fails; optical-flow methods, especially per-pixel optical flow, are computationally expensive and are often avoided in practical engineering. The frame-difference threshold must also be chosen appropriately: if it is too small, the result is sensitive to noise; if it is too large, the true moving regions tend to fragment. In this embodiment, based on tests on many real scenes, the default value is 7. Of course, this is only a preferred value; in other embodiments of the present invention the default may be another value.

[0036] D(t) = I(t) − I(t−1)   (1)
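As a minimal sketch, formula (1) plus binarization with the default threshold of 7 mentioned above can be written as follows (the grayscale uint8 frame format and the absolute-difference variant are assumptions for illustration):

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=7):
    """Binarize the absolute pixel difference of two grayscale frames (formula (1))."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = moving pixel, 0 = static
```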

[0037] Then, in step 102, motion regions are extracted from the frame-difference image.

[0038] Further, step 102 comprises the following sub-steps:

[0039] projecting the pixels of the frame-difference image in two directions to obtain a horizontal projection histogram and a vertical projection histogram;

[0040] applying adaptive thresholding to the horizontal and vertical projection histograms to obtain the motion regions.

[0041] Adaptive thresholding is introduced in "A New Method for Automatic Segmentation of Image Sequences" by 袁基炜 and 史忠科, Computer Engineering and Applications, No. 29, 2004, and is not elaborated here.

[0042] Specifically, the bidirectional histogram projection is computed as shown in formulas (2) and (3), where C and R are the histograms obtained by the two projections, M is the width of the image, N is its height, and P is a pixel of the frame-difference image; the frame-difference segmentation map is then obtained by adaptive thresholding. Connected-component labeling could of course also be used. It is not used in this application mainly because the frame-difference foreground tends to fragment in low-contrast scenes, so a complete region may not be obtained. Another benefit of motion region extraction is that, compared with placing initial optical-flow points over the whole image, placing them within the motion regions reduces computation and improves detection efficiency.

C(x) = Σ_{y=1}^{N} P(x, y)   (2)
R(y) = Σ_{x=1}^{M} P(x, y)   (3)
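The bidirectional projection of formulas (2) and (3) can be sketched as follows. The adaptive threshold of the cited paper is not reproduced here; the `ratio`-of-maximum rule below is an assumed stand-in, used only to show how thresholded histogram bins merge into motion intervals:

```python
import numpy as np

def projection_histograms(frame_diff):
    """Formulas (2) and (3): column sums C and row sums R of the binary frame-difference image."""
    C = frame_diff.sum(axis=0)  # projection onto the horizontal axis, length M
    R = frame_diff.sum(axis=1)  # projection onto the vertical axis, length N
    return C, R

def motion_intervals(hist, ratio=0.2):
    """Assumed stand-in for the adaptive threshold: keep bins above
    ratio * max(hist) and merge consecutive kept bins into intervals."""
    thr = ratio * hist.max() if hist.max() > 0 else 1
    keep = hist > thr
    intervals, start = [], None
    for i, k in enumerate(keep):
        if k and start is None:
            start = i
        elif not k and start is not None:
            intervals.append((start, i - 1))
            start = None
    if start is not None:
        intervals.append((start, len(keep) - 1))
    return intervals
```

A rectangular motion region is then the cross product of one horizontal interval from C and one vertical interval from R.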

[0045] Then, in step 103, corner points are selected in the frame-difference image.

[0046] Preferably, in this embodiment, Harris corners are chosen as the initial tracking points of the optical flow.

[0047] As mentioned above, the frame-difference image serves two purposes: it is used to extract the motion regions, and it provides a basis for placing the initial optical-flow tracking points. For Lucas-Kanade optical-flow tracking, feature points in texture-rich regions usually track better, and in practice Harris corners are generally chosen as the initial tracking points. According to J. Shi and C. Tomasi, "Good Features to Track", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, 1994, the selection criterion for such corners coincides exactly with the criterion for whether Lucas-Kanade optical-flow tracking is accurate. Given a point, we define the autocorrelation function E(x, y) as in formula (4), where I(x_k+Δx, y_k+Δy) is expanded with the Taylor formula as in formula (5), yielding formula (6), in which the magnitudes of the eigenvalues of the matrix A(x, y) serve both as the corner-selection criterion and as the criterion for accurate Lucas-Kanade tracking.

[0048] However, we do not take the Harris corners on the original image but on the frame-difference image. The main purpose is that, while keeping the optical-flow tracking accurate, the flow points should reflect the motion information of the objects as much as possible, which requires the flow points to cover the motion regions as uniformly as possible. Experience shows that taking corners on the frame-difference image as initial tracking points satisfies this requirement. In our application, preferably, 100 corners are extracted as initial corner optical-flow tracking points.

E(Δx, Δy) = Σ_k [I(x_k + Δx, y_k + Δy) − I(x_k, y_k)]²   (4)
I(x_k + Δx, y_k + Δy) ≈ I(x_k, y_k) + I_x(x_k, y_k)·Δx + I_y(x_k, y_k)·Δy   (5)
E(Δx, Δy) ≈ [Δx Δy] · A(x, y) · [Δx Δy]^T,  A(x, y) = Σ_k [I_x², I_xI_y; I_xI_y, I_y²]   (6)
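The corner criterion described around formulas (4)-(6) — the minimum eigenvalue of the windowed gradient matrix A(x, y) — can be sketched as follows (uniform window weights as in the text; the box filter via `np.roll` wraps at the image borders, a simplification of this sketch):

```python
import numpy as np

def min_eig_response(img, win=5):
    """Minimum eigenvalue of the structure tensor A(x, y) of formula (6),
    summed over a win x win window with uniform weights."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    r = win // 2
    Sxx = np.zeros_like(Ixx)
    Syy = np.zeros_like(Iyy)
    Sxy = np.zeros_like(Ixy)
    for dy in range(-r, r + 1):          # box-filter the tensor entries
        for dx in range(-r, r + 1):
            Sxx += np.roll(np.roll(Ixx, dy, 0), dx, 1)
            Syy += np.roll(np.roll(Iyy, dy, 0), dx, 1)
            Sxy += np.roll(np.roll(Ixy, dy, 0), dx, 1)
    # smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]] at every pixel
    tr, det = Sxx + Syy, Sxx * Syy - Sxy * Sxy
    return tr / 2 - np.sqrt(np.maximum(tr * tr / 4 - det, 0))
```

Applied to the frame-difference image, the 100 pixels with the largest response would then be kept as the initial tracking points mentioned above.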

[0054] Then, in step 104, the selected corners are used as initial tracking points of the optical flow, and corner optical-flow tracking is performed on the original images of the video to obtain the optical-flow feature vectors.

[0055] Preferably, step 104 uses the Lucas-Kanade optical-flow algorithm.

[0056] The Lucas-Kanade algorithm is a classic sparse optical-flow algorithm. Its basic idea is to find, by iteration, the position in the second frame of an image block (feature point) from the first frame under a mean-square criterion.

[0057] Besides this, the Horn-Schunck method, the Buxton-Buxton method, the Black-Jepson method, general variational methods, and so on, can also be used.

[0058] Specifically, we use Lucas-Kanade optical flow, a kind of sparse optical flow whose basic idea is to find, by iteration, the position in the second frame of an image block (feature point) from the first frame under a mean-square criterion. The pixels in the image block can be given different weights; in this application, for computational convenience, every pixel in the block has the same weight. Some improved optical-flow algorithms also take into account translation, rotation, scaling, and other changes of the image block. Since our application matches corresponding blocks across two consecutive frames, only translation of the image block needs to be considered. We also use the pyramid-layering idea: the 1/4 and 1/16 images of the original are stored, and at each iteration a feature point is first searched at the bottom layer (1/16), then at the middle layer (1/4), and finally the matching feature block is found in the original image. One benefit of this is that it can cope with large displacements of image blocks, i.e., it suits the application scenarios of retrograde detection.
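A single Gauss-Newton step of the translation-only Lucas-Kanade estimate (uniform pixel weights, as stated above) can be sketched as follows; the pyramid layering over the 1/16, 1/4, and full-resolution images and the repeated iteration are omitted for brevity:

```python
import numpy as np

def lk_translation(prev, curr, x, y, r=7):
    """One translation-only Lucas-Kanade step at pixel (x, y):
    solve A d = -b with A = sum of grad*grad^T and b = sum of grad * It
    over a (2r+1) x (2r+1) window with uniform weights."""
    Iy, Ix = np.gradient(prev.astype(float))
    win = np.s_[y - r:y + r + 1, x - r:x + r + 1]
    ix, iy = Ix[win].ravel(), Iy[win].ravel()
    it = (curr.astype(float) - prev.astype(float))[win].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, -b)  # estimated displacement (dx, dy)
```

In a full implementation this step would be iterated at each pyramid level, propagating the estimate from the coarsest layer down to the original image.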

[0059] Then, in step 105, the correspondence between motion regions and optical-flow feature vectors is obtained according to the positional relationship between the initial tracking points of the optical flow and the motion regions.

[0060] The main function of step 105 is to associate the motion regions with the optical-flow feature vectors. That is, given a motion region, we need to determine which initial optical-flow tracking points lie inside it. Specifically, all initial optical-flow points are traversed; for each initial point with coordinates (x, y), and a motion region (which is a rectangle) with corner coordinates (x_left, y_top), (x_right, y_top), (x_left, y_bottom), (x_right, y_bottom), the point-in-rectangle test of formula (7) decides whether the point lies inside the rectangle. After this step, we obtain a correspondence between motion regions and initial optical-flow points (a one-to-many relationship), and hence the motion information within each motion region. As noted above, this motion information is sampled, because the optical-flow method we use is sparse. In fact, not only retrograde detection but also violent-motion detection and the like basically require obtaining the motion-information regions first; they differ only in how the motion information is subsequently used.

[0061] x_left < x < x_right,  y_top < y < y_bottom   (7)
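The association of step 105 can be sketched as follows (the tuple layouts for flows and regions are assumptions of this sketch):

```python
def point_in_rect(x, y, rect):
    """Formula (7): strict point-in-rectangle test.
    rect = (x_left, y_top, x_right, y_bottom)."""
    x_left, y_top, x_right, y_bottom = rect
    return x_left < x < x_right and y_top < y < y_bottom

def assign_flows_to_regions(flows, regions):
    """Map each region index to the flow vectors whose *initial* point lies
    inside it (a one-to-many relationship, as in step 105).
    flows: list of ((x0, y0), (x1, y1)) initial/tracked point pairs."""
    mapping = {i: [] for i in range(len(regions))}
    for flow in flows:
        (x0, y0), _ = flow
        for i, rect in enumerate(regions):
            if point_in_rect(x0, y0, rect):
                mapping[i].append(flow)
    return mapping
```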

[0062] Then, in step 106, for each motion region, motion modes are decomposed from the optical-flow feature vectors in that region.

[0063] As described above, retrograde motion may be the behavior of the crowd as a whole or of a local part of the crowd. Motion mode decomposition is in fact a "divide and conquer" idea: first find the different moving objects in a motion region (assuming they have different motion modes); once the motion modes are extracted, analyze each mode separately. This idea of "decomposition" is applied in many different fields, including the Fourier transform, principal component analysis (PCA), and so on.

[0064] Then, in step 107, whether retrograde motion has occurred is judged according to the magnitude and direction of the average velocity of each motion mode.

[0065] Since we have obtained the magnitude and direction of the average velocity of each motion mode, and velocity is a vector, we only need to compare the velocity direction of a motion mode with the normal motion direction (an angle) set by the user. If the difference between the two angles is greater than a right angle (90 degrees), and the magnitude of the mode's velocity exceeds a certain threshold, retrograde behavior is considered to have occurred.
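The decision rule of step 107 can be sketched directly (the angle convention — degrees, measured with atan2 — is an assumption of this sketch):

```python
import math

def is_retrograde(mean_velocity, normal_angle_deg, speed_threshold):
    """Step 107: retrograde if the mode's mean velocity points more than
    90 degrees away from the user-set normal direction AND its magnitude
    exceeds the threshold. The angle difference is folded into [0, 180]."""
    vx, vy = mean_velocity
    speed = math.hypot(vx, vy)
    if speed <= speed_threshold:
        return False
    angle = math.degrees(math.atan2(vy, vx))
    diff = abs(angle - normal_angle_deg) % 360
    if diff > 180:
        diff = 360 - diff
    return diff > 90
```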

[0066] This flow then ends.

[0067] Corner points are selected in the frame-difference image, corner optical-flow tracking is performed on the original images, and motion modes are decomposed from the optical-flow feature vectors to judge whether retrograde motion has occurred. This improves the robustness of retrograde detection and effectively reduces the probability of missed and false alarms in complex scenes.

[0068] The second embodiment of the present invention relates to a video-based retrograde detection method.

[0069] The second embodiment improves on the first embodiment; the main improvements are as follows:

[0070] Step 106, i.e., the step of decomposing, for each motion region, motion modes from the optical-flow feature vectors in that region, includes the following sub-steps:

[0071] A. Computing the translation parameters of the geometric transformation model from an optical-flow feature vector in the motion region and its corresponding initial tracking point.

[0072] Preferably, in this embodiment, the geometric transformation model is an affine transformation model.

[0073] The affine transformation is a kind of geometric transformation and a parametric model. Its main characteristic is that it does not change the parallelism of straight lines in an image; that is, a parallelogram in one image remains a parallelogram in the other image.

[0074] Besides the affine transformation, a translation transformation, a linear transformation, a projective transformation, and so on, can also be used.

[0075] B. Applying the obtained geometric transformation model to the other optical-flow feature vectors and counting how many of them fit the model, where "fit" means that the point obtained by applying the geometric transformation model to an initial tracking point is close enough to the point obtained by corner optical-flow tracking.

[0076] C. Obtaining the first geometric transformation model, i.e., the model fitted by the maximum number of optical-flow feature vectors; the optical-flow feature vectors fitting this first model constitute the first motion mode.

[0077] Preferably, the geometric transformation model is solved by traversing all optical-flow feature vectors.

[0078] Here, instead of sampling optical-flow feature vectors at random to solve the geometric transformation model, all optical-flow feature vectors are traversed, which avoids random number generation.

[0079] Among the optical-flow feature vectors not belonging to the first motion mode, steps A, B, and C above are repeated to obtain a second geometric transformation model; the optical-flow feature vectors fitting this second model constitute the second motion mode.

[0080] Among the optical-flow feature vectors belonging to neither the first nor the second motion mode, steps A, B, and C above are repeated to obtain a third geometric transformation model; the optical-flow feature vectors fitting this third model constitute the third motion mode.
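Sub-steps A-C, iterated with the simplified translation model (S = 1, α = 0, so each flow vector proposes a candidate (Δx, Δy)), can be sketched as follows. The `eps` and `min_support` thresholds are assumptions not specified in the text:

```python
import math

def decompose_motion_modes(flows, eps=2.0, min_support=5, max_modes=3):
    """Exhaustive-traversal variant of RANSAC over translation models:
    step A - each flow proposes (dx, dy); step B - count flows whose tracked
    point lands within eps of the model's prediction; step C - keep the model
    with the most supporters; its inliers form a motion mode and are removed,
    and the search repeats on the remainder, up to max_modes times."""
    remaining = list(flows)  # flows: list of ((x0, y0), (x1, y1)) pairs
    modes = []
    for _ in range(max_modes):
        if len(remaining) < min_support:
            break
        best_inliers = []
        for (x0, y0), (x1, y1) in remaining:       # traverse all candidates
            dx, dy = x1 - x0, y1 - y0              # step A: translation model
            inliers = [f for f in remaining        # step B: count supporters
                       if math.hypot(f[0][0] + dx - f[1][0],
                                     f[0][1] + dy - f[1][1]) <= eps]
            if len(inliers) > len(best_inliers):   # step C: keep best model
                best_inliers = inliers
        if len(best_inliers) < min_support:
            break
        modes.append(best_inliers)
        remaining = [f for f in remaining if f not in best_inliers]
    return modes
```

The traversal is O(n²) per iteration, which is acceptable here because only on the order of 100 flow points are tracked per frame.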

[0081] FIG. 2 is a schematic flowchart of this motion mode decomposition process.

[0082] Specifically, in step 201 (i.e., step 105 of the first embodiment), the correspondence between motion regions and optical-flow feature vectors is obtained according to the positional relationship between the initial tracking points of the optical flow and the motion regions.

[0083] Then, in step 202, the iterative RANSAC algorithm is run.

[0084] Then, in step 203, motion mode 1, motion mode 2, and motion mode 3 are decomposed.

[0085] This flow then ends.

[0086] In addition, it can be understood that a motion region may contain several moving objects, so there may be multiple translational motions with different speeds and directions; the present invention finds the three most dominant motion modes. Of course, if the optical-flow feature vectors remaining at some iteration are fewer than a threshold, the next motion mode is not decomposed.

[0087] In summary, in this application, motion mode decomposition uses an iterative RANSAC method: for the motion regions in the video, different motion modes are obtained by iterative clustering, which effectively extracts the motion modes corresponding to different moving objects in complex scenes for retrograde analysis.

[0088] Besides the RANSAC method, clustering algorithms such as mean-shift and k-means can also be used in the present invention to decompose the motion modes.

[0089] The RANSAC method is elaborated below:

[0090] RANSAC is short for "Random Sample Consensus". It iteratively estimates the parameters of a mathematical model from a set of observed data that contains "outliers". It is a non-deterministic algorithm that yields a reasonable result with a certain probability; to raise this probability, the number of iterations must be increased. The algorithm was first proposed by Fischler and Bolles in 1981.

[0091] The basic assumptions of RANSAC are:

[0092] (1) The data consist of "inliers", that is, data whose distribution can be explained by some set of model parameters;

[0093] (2) "Outliers" are data that cannot be fitted by the model;

[0094] (3) All remaining data are noise.

[0095] Outliers arise from: extreme values of noise; erroneous measurement methods; and incorrect assumptions about the data.

[0096] RANSAC also makes the following assumption: given a (usually small) set of inliers, there exists a procedure that can estimate the model parameters, and the model can explain or fit those inliers.

[0097] We assume that a motion region has three motion modes, for each of which an affine transformation model can be chosen, as shown in equation (8); that is, an optical flow's initial point (xi, yi) in one frame and its tracked point (xi+1, yi+1) in the next frame (i.e., the optical flow feature vector) satisfy the correspondence of equation (8), where S is the scale factor, α is the rotation coefficient, and Δx, Δy are the translation coefficients. Each feature point vector then falls into one of three cases: it belongs to one of the three motion modes; it belongs to none of the three assumed motion modes; or it is an "outlier" produced by a failed optical flow feature point track. Since the retrograde detection application involves local image motion, a simplified translational model can be chosen, i.e., S = 1, α = 0, so that the model parameters are just Δx and Δy, as shown in equation (9).

[0098] (8): xi+1 = S(xi cos α - yi sin α) + Δx, yi+1 = S(xi sin α + yi cos α) + Δy
[0099] (9): xi+1 = xi + Δx, yi+1 = yi + Δy

[0100] The RANSAC algorithm: in our application, the number of feature points to be tracked is small (fewer than 100), and to avoid random number generation we drop RANSAC's random sampling of optical flow feature vectors for model solving and instead traverse all optical flow feature vectors. That is, for each feature vector point pair (xi, yi), (xi+1, yi+1), we obtain one (Δx, Δy) from equation (9), then traverse all the other feature point pairs and record the number of pairs that fit this model. In this way we find the model that the largest number of feature point vectors fit.
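The exhaustive search described in this paragraph can be sketched as follows (an illustrative sketch rather than the patented implementation; the closeness threshold `tol`, which the text leaves as "sufficiently close", is an assumed parameter):

```python
def best_translation(pairs, tol=2.0):
    """Exhaustive RANSAC for the translational model of equation (9):
    derive a candidate (dx, dy) from every feature point pair
    ((x0, y0), (x1, y1)) and count the pairs that fit it."""
    best_model, best_inliers = None, []
    for (x0, y0), (x1, y1) in pairs:
        dx, dy = x1 - x0, y1 - y0          # candidate model from this pair
        # a pair fits if the model maps its initial point close enough
        # to its tracked point
        inliers = [p for p in pairs
                   if abs(p[0][0] + dx - p[1][0]) <= tol
                   and abs(p[0][1] + dy - p[1][1]) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

With fewer than 100 tracked points this exhaustive pass is cheap (at most 100 x 100 comparisons), which is the rationale the paragraph gives for dropping random sampling.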

[0101] Iterative RANSAC, concretely: first run the RANSAC algorithm described above once on the optical flow feature vectors of a motion region, with equation (9) as the model, yielding two point sets: one belonging to that motion mode and one not belonging to it; the latter may of course be called the "outliers" of the first motion mode. The RANSAC algorithm is then repeated on the second point set, yielding the point set of the second motion mode and its corresponding "outliers", and likewise a third time. We thus obtain the three motion modes of the motion region and their corresponding point sets; averaging the motion vectors of each point set then gives the magnitude and direction of the average velocity of that point set, i.e., of that motion mode.
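The iterative decomposition can be sketched end-to-end as follows (illustrative only; `min_points`, `tol`, and the helper names are assumptions, and the inner search repeats the exhaustive pass described in paragraph [0100]):

```python
def decompose_modes(pairs, max_modes=3, min_points=4, tol=2.0):
    """Iteratively peel off up to three translational motion modes:
    run the exhaustive inlier search, remove the inliers, and repeat
    on the remainder. Returns (mean_velocity, inlier_pairs) per mode."""
    def best_translation(ps):
        best = ((0.0, 0.0), [])
        for (x0, y0), (x1, y1) in ps:
            dx, dy = x1 - x0, y1 - y0
            ins = [p for p in ps
                   if abs(p[0][0] + dx - p[1][0]) <= tol
                   and abs(p[0][1] + dy - p[1][1]) <= tol]
            if len(ins) > len(best[1]):
                best = ((dx, dy), ins)
        return best

    modes, rest = [], list(pairs)
    for _ in range(max_modes):
        if len(rest) < min_points:   # too few vectors left: stop decomposing
            break
        _, inliers = best_translation(rest)
        # average the motion vectors of the point set -> mean velocity
        vx = sum(x1 - x0 for (x0, _), (x1, _) in inliers) / len(inliers)
        vy = sum(y1 - y0 for (_, y0), (_, y1) in inliers) / len(inliers)
        modes.append(((vx, vy), inliers))
        rest = [p for p in rest if p not in inliers]
    return modes
```

The early-exit guard mirrors paragraph [0086]/[0102]: when the remaining optical flow feature vectors fall below the threshold, no further mode is decomposed.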

[0102] It will furthermore be understood that a motion region may contain multiple moving objects, so translational motions with several different speeds and directions may coexist; the present invention extracts the three dominant motion modes. Naturally, if the optical flow feature vectors remaining at some iteration fall below a threshold, no further motion mode is decomposed.

[0103] The method embodiments of the present invention may each be implemented in software, hardware, firmware, and the like. Regardless of whether the invention is implemented in software, hardware, or firmware, the instruction code may be stored in any type of computer-accessible memory (for example, permanent or modifiable, volatile or non-volatile, solid-state or non-solid-state, fixed or replaceable media, and so on). Likewise, the memory may be, for example, Programmable Array Logic ("PAL"), Random Access Memory ("RAM"), Programmable Read-Only Memory ("PROM"), Read-Only Memory ("ROM"), Electrically Erasable Programmable ROM ("EEPROM"), a magnetic disk, an optical disc, a Digital Versatile Disc ("DVD"), and so on.

[0104] The third embodiment of the present invention relates to a video-based retrograde detection system. FIG. 3 is a schematic structural diagram of the video-based retrograde detection system.

[0105] Specifically, as shown in FIG. 3, the video-based retrograde detection system comprises:

[0106] a frame-difference image acquisition unit, configured to compute the difference between corresponding pixels of the original images of adjacent video frames and binarize the difference, obtaining a frame-difference image;
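The operation of this unit can be sketched as follows (an illustrative sketch using NumPy; the binarization threshold of 25 is an assumed value, not one fixed by the patent):

```python
import numpy as np

def frame_difference(prev_gray, curr_gray, thresh=25):
    """Absolute per-pixel difference of two grayscale frames,
    binarized to a 0/1 motion mask (the frame-difference image)."""
    # widen to int16 so the subtraction of uint8 frames cannot wrap
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

The resulting mask is the image from which the motion region extraction unit takes its bidirectional projection histograms and in which corner points are selected.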

[0107] a motion region extraction unit, configured to extract motion regions from the frame-difference image obtained by the frame-difference image acquisition unit;

[0108] a corner point selection unit, configured to select corner points in the frame-difference image obtained by the frame-difference image acquisition unit;

[0109] an optical flow tracking unit, configured to take the corner points selected by the corner point selection unit as initial tracking points of the optical flow and perform corner point optical flow tracking in the original images of the video, obtaining optical flow feature vectors;

[0110] a motion information acquisition unit, configured to obtain the correspondence between the motion regions and the optical flow feature vectors from the positional relationship between the initial tracking points of the optical flow and the motion regions;

[0111] a motion mode decomposition unit, configured to decompose, for each motion region, the motion modes from the optical flow feature vectors in that motion region;

[0112] a retrograde detection unit, configured to judge, from the magnitude and direction of the average velocity of the motion modes decomposed by the motion mode decomposition unit, whether retrograde motion has occurred.
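The judgment this unit makes can be illustrated as follows (a hedged sketch: the permitted travel direction, the minimum speed, and the sign-of-dot-product test are assumptions made for illustration, since this paragraph does not spell out the concrete decision rule):

```python
import math

def is_retrograde(mean_velocity, allowed_direction, min_speed=1.0):
    """Flag a motion mode whose average velocity is fast enough and
    points against the permitted direction of travel."""
    vx, vy = mean_velocity
    speed = math.hypot(vx, vy)
    if speed < min_speed:            # too slow: treat as noise, not motion
        return False
    ax, ay = allowed_direction
    # negative dot product: the mode moves against the allowed direction
    return vx * ax + vy * ay < 0
```

Applying this test to each decomposed mode's average velocity reflects the use of "magnitude and direction" in the judgment.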

[0113] In addition, the system further comprises: a camera for acquiring video images. The camera should be mounted so that a person's height occupies 1/3 to 1/2 of the image height, and a camera focal length of 3 to 9 mm should be chosen.

[0114] The camera height should be moderate: if the camera is too low, a person occupies too large an area and false alarms become likely; if it is too high, missed detections become likely. The focal length must not be too long, or a person appears too large in the field of view and false alarms again become likely. The camera angle should be as close to vertical as possible, i.e., the depth of field within the view should not be too large; this reduces perspective effects and simplifies the setting of the algorithm's thresholds.

[0115] The first and second embodiments are method embodiments corresponding to the present embodiment, and the present embodiment may be implemented in cooperation with the first and second embodiments. The relevant technical details mentioned in the first and second embodiments remain valid in the present embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in the present embodiment may also be applied in the first and second embodiments.

[0116] It should be noted that the units mentioned in the system embodiments of the present invention are all logical units. Physically, a logical unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units; the physical implementation of these logical units is not itself the most important point, for the combination of the functions they implement is what is key to solving the technical problem posed by the present invention. Furthermore, to highlight the innovative part of the invention, the above system embodiments do not introduce units that are not closely related to solving that technical problem; this does not mean that no other units exist in the above apparatus embodiments.

[0117] It should be noted that, in the claims and description of this patent, relational terms such as first and second, or letters such as A, B, and C, are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "comprise" and "include", or any other variant thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "comprising a" does not exclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element.

[0118] While the invention has been shown and described with reference to certain preferred embodiments thereof, those of ordinary skill in the art will understand that various changes may be made in form and detail without departing from the spirit and scope of the invention.

Claims (10)

1. A video-based retrograde detection method, characterized by comprising the steps of: computing the difference between corresponding pixels of the original images of adjacent video frames and binarizing the difference to obtain a frame-difference image; extracting motion regions from the frame-difference image; selecting corner points in the frame-difference image; taking the selected corner points as initial tracking points of the optical flow and performing corner point optical flow tracking in the original images of the video to obtain optical flow feature vectors; obtaining the correspondence between the motion regions and the optical flow feature vectors from the positional relationship between the initial tracking points of the optical flow and the motion regions; for each motion region, decomposing the motion modes from the optical flow feature vectors in that motion region; and judging, from the magnitude and direction of the average velocity of the motion modes, whether retrograde motion has occurred; wherein the step of decomposing, for each motion region, the motion modes from the optical flow feature vectors in that motion region comprises the sub-steps of: A. computing the translation parameters of a geometric transformation model from an optical flow feature vector in the motion region and the corresponding initial tracking point of the optical flow; B. applying the obtained geometric transformation model to the other optical flow feature vectors and counting the optical flow feature vectors that fit this model, where "fit" means that the point computed for an optical flow's initial tracking point by the geometric transformation model is sufficiently close to the point obtained by corner point optical flow tracking; and C. obtaining a first geometric transformation model that the largest number of optical flow feature vectors fit, the optical flow feature vectors fitting the first geometric transformation model constituting a first motion mode.
2. The video-based retrograde detection method according to claim 1, characterized in that the step of decomposing, for each motion region, the motion modes from the optical flow feature vectors in that motion region further comprises the sub-steps of: among the optical flow feature vectors not belonging to the first motion mode, following steps A, B, and C above to obtain a second geometric transformation model, the optical flow feature vectors fitting the second geometric transformation model constituting a second motion mode; and among the optical flow feature vectors belonging to neither the first nor the second motion mode, following steps A, B, and C above to obtain a third geometric transformation model, the optical flow feature vectors fitting the third geometric transformation model constituting a third motion mode.
3. The video-based retrograde detection method according to claim 2, characterized in that the geometric transformation model is solved by traversing all the optical flow feature vectors.
4. The video-based retrograde detection method according to claim 3, characterized in that the geometric transformation model is an affine transformation model.
5. The video-based retrograde detection method according to claim 4, characterized in that the step of extracting motion regions from the frame-difference image comprises the sub-steps of: projecting the pixels of the frame-difference image in both directions to obtain a horizontal projection histogram and a vertical projection histogram; and applying adaptive thresholding to the horizontal projection histogram and the vertical projection histogram to obtain the motion regions.
6. The video-based retrograde detection method according to claim 5, characterized in that the Lucas-Kanade optical flow algorithm is used in the step of taking the selected corner points as initial tracking points of the optical flow and performing corner point optical flow tracking in the original images of the video to obtain optical flow feature vectors.
7. The video-based retrograde detection method according to claim 6, characterized in that Harris corner points are selected as the initial tracking points of the optical flow.
8. The video-based retrograde detection method according to claim 7, characterized in that the step of taking the selected corner points as initial tracking points of the optical flow and performing corner point optical flow tracking in the original images of the video to obtain optical flow feature vectors uses the idea of pyramid layering: a 1/4 image and a 1/16 image of the original image are saved, and at each iteration a corner point is searched for first in the 1/16 image, then in the 1/4 image, and finally the matching feature block is found in the original image.
9. A video-based retrograde detection system, characterized by comprising: a frame-difference image acquisition unit, configured to compute the difference between corresponding pixels of the original images of adjacent video frames and binarize the difference to obtain a frame-difference image; a motion region extraction unit, configured to extract motion regions from the frame-difference image obtained by the frame-difference image acquisition unit; a corner point selection unit, configured to select corner points in the frame-difference image obtained by the frame-difference image acquisition unit; an optical flow tracking unit, configured to take the corner points selected by the corner point selection unit as initial tracking points of the optical flow and perform corner point optical flow tracking in the original images of the video to obtain optical flow feature vectors; a motion information acquisition unit, configured to obtain the correspondence between the motion regions and the optical flow feature vectors from the positional relationship between the initial tracking points of the optical flow and the motion regions; a motion mode decomposition unit, configured to decompose, for each motion region, the motion modes from the optical flow feature vectors in that motion region; and a retrograde detection unit, configured to judge, from the magnitude and direction of the average velocity of the motion modes decomposed by the motion mode decomposition unit, whether retrograde motion has occurred; wherein the motion mode decomposition unit, in decomposing the motion modes, performs the sub-steps of: A. computing the translation parameters of a geometric transformation model from an optical flow feature vector in the motion region and the corresponding initial tracking point of the optical flow; B. applying the obtained geometric transformation model to the other optical flow feature vectors and counting the optical flow feature vectors that fit this model, where "fit" means that the point computed for an optical flow's initial tracking point by the geometric transformation model is sufficiently close to the point obtained by corner point optical flow tracking; and C. obtaining a first geometric transformation model that the largest number of optical flow feature vectors fit, the optical flow feature vectors fitting the first geometric transformation model constituting a first motion mode.
10. The video-based retrograde detection system according to claim 9, characterized by further comprising: a camera for acquiring video images, the camera being mounted so that a person's height occupies 1/3 to 1/2 of the image height, with a camera focal length of 3 to 9 mm.
CN201210419365.9A 2012-10-26 2012-10-26 Detection method and system based on retrograde video CN103793920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210419365.9A CN103793920B (en) 2012-10-26 2012-10-26 Detection method and system based on retrograde video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210419365.9A CN103793920B (en) 2012-10-26 2012-10-26 Detection method and system based on retrograde video

Publications (2)

Publication Number Publication Date
CN103793920A CN103793920A (en) 2014-05-14
CN103793920B true CN103793920B (en) 2017-10-13

Family

ID=50669543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210419365.9A CN103793920B (en) 2012-10-26 2012-10-26 Detection method and system based on retrograde video

Country Status (1)

Country Link
CN (1) CN103793920B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426811B (en) * 2015-09-28 2019-03-15 高新兴科技集团股份有限公司 A kind of crowd's abnormal behaviour and crowd density recognition methods

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101320427A (en) * 2008-07-01 2008-12-10 北京中星微电子有限公司 Video monitoring method and system with auxiliary objective monitoring function
CN102184547A (en) * 2011-03-28 2011-09-14 长安大学 Video-based vehicle reverse driving event detecting method
CN102708573A (en) * 2012-02-28 2012-10-03 西安电子科技大学 Group movement mode detection method under complex scenes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101792501B1 (en) * 2011-03-16 2017-11-21 한국전자통신연구원 Method and apparatus for feature-based stereo matching


Also Published As

Publication number Publication date
CN103793920A (en) 2014-05-14


Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
GR01 Patent grant