CN105046285B - Abnormal behavior identification method based on motion constraints - Google Patents
Abnormal behavior identification method based on motion constraints
- Publication number
- CN105046285B (application CN201510551661.8A / CN201510551661A)
- Authority
- CN
- China
- Prior art keywords
- optical flow
- optical
- svm
- kinematic constraint
- frame image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for identifying abnormal behavior based on motion constraints, belonging to the field of video surveillance. The method includes a training process comprising the following steps: 1. calibrate the first group of input video frame images using a fast calibration method; 2. perform background modeling on the calibrated frame images while extracting optical flow features from them with an optical flow method; 3. analyze and select the optical flow features in the foreground using pedestrian motion constraints; 4. learn the selected optical flow features with an SVM to obtain the SVM classifier model parameters usable for abnormal behavior identification. The method also includes a detection process, whose input is a second group of videos and which uses the SVM parameters obtained during training to test and validate the model. The invention optimizes the optical flow feature computation, eliminates noisy optical flow features, makes the optical flow features on pedestrians more prominent, and can identify abnormal behaviors such as running.
Description
Technical Field
The invention relates to a method for identifying abnormal behavior based on motion constraints.
Background Art
When the monitored scene is somewhat but not severely crowded, abnormal behavior identification methods based on support vector machines (SVM) can be improved upon prior work: for example, one robust tracking method can follow about 20 people in scenes where pedestrians do not occlude each other severely, and another prior method tracks partially occluded human images.
In particularly crowded scenes, however, once mutual occlusion between pedestrians exceeds a certain level and complete or clear pedestrian contours can no longer be obtained, the above methods have difficulty interpreting crowd behavior, and a new approach is needed. To this end, the invention introduces the concept of motion constraints and combines it with the learning of optical flow features, achieving abnormal behavior identification and monitoring in crowded scenes.
Summary of the Invention
The technical scheme by which the present invention solves the above technical problems is as follows:
A method for identifying abnormal behavior based on motion constraints includes a training process comprising the following steps:
(1) Input a first group of videos and preprocess the input video frame images, including calibration with a fast calibration method and denoising;
(2) Perform background modeling on the calibrated input video frame images, then extract optical flow features from the frame images with an optical flow method;
(3) Analyze and select the optical flow features in the foreground using pedestrian motion constraints;
(4) Learn the selected optical flow features with an SVM to obtain an SVM classifier usable for abnormal behavior identification;
(5) Record the parameters of the SVM classifier model.
On the basis of the above technical scheme, the present invention can be further improved as follows.
Further, the optical flow features are extracted from the frame images with the Lucas-Kanade algorithm.
Further, the motion constraint model is established via the Bayesian formula:
P(u,v | I(x,y,t)) = α P(u,v) ∏_i P(I(x_i, y_i, t_i) | u,v)
where P(u,v) is the prior probability of the object's two-dimensional motion velocity, α is a proportionality factor, P(u,v | I(x,y,t)) is the posterior probability of the two-dimensional motion velocity given the gray value at position (x,y) and time t, and P(I(x,y,t) | u,v) is the probability of the gray-level change while the object moves, i.e., the likelihood term.
Further, analyzing and selecting the optical flow features in the foreground using pedestrian motion constraints in step 3 specifically includes the following steps:
(1) From the foreground image obtained after background detection, select optical flow features within the pedestrian image regions;
(2) Use the motion constraint model to screen out the optical flow features that satisfy a threshold, the threshold being the one applied to the term P(I(x,y,t) | u,v).
Further, α ranges between 0 and 1.
Further, the threshold is optimized during the iterative solution of the SVM modeling process to obtain its optimal value.
Further, the invention also includes a detection process, the steps of which are:
(1) Input a second group of videos and preprocess the input video frame images, including calibration with a fast calibration method and denoising;
(2) Perform background modeling on the calibrated input video frame images, then extract optical flow features from the frame images with an optical flow method;
(3) Analyze and select the optical flow features in the foreground using pedestrian motion constraints and the threshold obtained in the training stage;
(4) Judge the selected optical flow features with the SVM classifier obtained in the training process; if inputs known in advance to contain abnormal behavior are judged correctly, the classifier is shown to be reliable.
The beneficial effects of the present invention are as follows. By combining optical flow feature selection with motion constraints, the training process yields an SVM classifier usable for abnormal behavior judgment; the detection process takes as input a second group of videos with known behavior categories and tests the trained model, verifying its correctness and reliability while also further generalizing it. The invention is the first to use pedestrian motion constraints to optimize optical flow feature selection, effectively eliminating noisy optical flow features, enhancing the flow features of interest, and enabling accurate judgment of abnormal behaviors such as running.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the method of the present invention;
Fig. 2 is a schematic diagram of the Bayesian modeling;
Fig. 3 compares the optical flow features computed with and without motion constraints.
Detailed Description of Embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples given serve only to explain the invention, not to limit its scope.
An abnormal behavior identification system based on motion constraints includes a preprocessing module, a background modeling module, an optical flow extraction module, a flow field selection module, a motion constraint module, and an SVM classifier module.
The preprocessing module preprocesses the input video frame images, including calibration and denoising;
The background modeling module performs background modeling on the preprocessed input video frame images;
The optical flow extraction module extracts optical flow features from the frame images after background modeling;
The motion constraint module applies pedestrian motion constraints and screens out the optical flow features that satisfy the threshold;
The SVM classifier module learns the selected optical flow features. In the training stage, the optimal model parameters and threshold are obtained through the iterative optimization of the SVM algorithm; in the testing stage, the trained model is tested on known samples to check its accuracy and reliability, i.e., whether it can judge pedestrian behavior (including abnormal behavior) correctly.
Embodiment 1
As shown in Fig. 1, a method for identifying abnormal behavior based on motion constraints includes a training process and a detection process.
The training process includes the following steps:
(1) Input a first group of videos and preprocess the input video frame images, including calibration with a fast calibration method and denoising;
(2) Perform background modeling on the calibrated input video frame images, then extract optical flow features from the frame images with an optical flow method;
(3) Analyze and select the optical flow features in the foreground using pedestrian motion constraints and a given threshold;
(4) Learn the selected optical flow features with an SVM to obtain an SVM classifier usable for abnormal behavior identification;
(5) Record the parameters of the SVM classifier model.
1. Definition of Optical Flow Features
Optical flow is widely used in computer vision tasks such as face recognition, gait modeling, and object tracking. The two optical flow algorithms in common use are the Horn-Schunck algorithm and the Lucas-Kanade algorithm. Horn-Schunck yields a smoother flow field, uses global information, and computes accurate temporal derivatives, but it is relatively slow and produces rough boundary contours. Lucas-Kanade makes more errors at boundaries, but it is simpler and fast to compute. Considering computation speed, the Lucas-Kanade algorithm is adopted here.
When motion constraints are not considered, obtaining optical flow features involves an optical flow computation step and an optical flow extraction step. The optical flow feature vectors obtained at this stage encode the motion information of the feature points in the frame image and contain no prior knowledge of pedestrian motion constraints.
The generation of an optical flow feature vector takes two steps:
Step 1: find the feature points of the human body image patch, for example the corner where a pedestrian's head meets the left shoulder.
Step 2: match each feature point found to the corresponding feature point in the previous frame image, for example the same pedestrian's head-and-left-shoulder corner, the displacement between the two giving the flow vector.
2. Optical Flow Feature Extraction
All optical flows in the selected foreground image can be written as P = [p(1), p(2), ..., p(i)], i = 1..N, where p(i) = [Lx, Ly, Mx, My, α]. Here N is the number of optical flows (obtained from the optical flow algorithm), Lx and Ly are the abscissa and ordinate of an optical flow position in the image patch, Mx and My are the abscissa and ordinate components of the corresponding optical flow magnitude (equivalent to a velocity), and α is the direction angle. Since the number of optical flows differs between image patches, each patch must also be normalized; the processed optical flow features can be written as Q = [p(x_1), p(x_2), ..., p(x_i)], where each p(x_i) is selected from the p(i). The input parameters are finally described by formula (2.1):
(2.1) I = [p(x_1), p(x_2), ..., p(x_i)], i = 1..N
After the optical flow features are chosen as the classifier input, the problem reduces to a two-class or multi-class classification problem. Since the SVM serves as the classifier of pedestrian motion behavior, the selected input parameters are the positions, directions, and velocities of the optical flow features in the image patch.
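The per-patch normalization described above (reducing a variable number of flows p(i) to a fixed-length Q) can be sketched as follows. The random-sampling strategy is an assumption; the patent only states that each patch is normalized so every patch yields a classifier input of the same size:

```python
import numpy as np

def normalize_patch_flows(flows, k=16, seed=0):
    """Reduce a variable number of per-patch flow vectors
    [Lx, Ly, Mx, My, alpha] to a fixed-length feature vector by
    sampling k of them (with replacement if fewer than k exist).
    The sampling choice is illustrative, not specified by the patent."""
    flows = np.asarray(flows, dtype=float)      # shape (N, 5)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(flows), size=k, replace=len(flows) < k)
    return flows[idx].ravel()                   # shape (k * 5,)
```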
3. Introduction of Motion Constraints
When motion constraints are not considered, the optical flow features of pedestrian behavior are fed directly into the SVM to train the classifier. Although optical flow features do contribute to behavior identification, the optical flow method is not very robust in complex scenes, so motion constraints need to be introduced.
Analysis of pedestrians' optical flow features in crowded conditions shows that pedestrian motion patterns follow human-specific motion rules and differ markedly from those of other moving objects such as vehicles (a vehicle moves as a rigid body, and its speed and trajectory differ from a pedestrian's), whereas pedestrian motion patterns, in particular walking speed under normal conditions, are highly regular. Bayesian reasoning is used to model the pedestrian motion constraints, as shown in Fig. 2.
The modeling process has two steps:
Step 1: ensure that optical flow features are selected only from pedestrian image regions. (In a crowded scene it is hard to decide whether a pixel lies within a person's image region; one solution is a robust and fast person detector, another is to select points from the foreground image obtained after background detection, the latter being the approach adopted here.) The proportion of the selected optical flow features among all optical flow features is P(A).
Step 2: use the motion constraint model to screen out the optical flow features that satisfy the threshold, making the optical flow computation more precise and determining whether each selected pixel satisfies the human motion constraints. The proportion of the screened features among those before screening is P(B); during training, P(A) and P(B) reach a dynamic balance.
In the second step, the method models the motion constraints with a Bayesian approach. The aim of applying Bayesian reasoning to human motion analysis is to compute a posterior probability for a given image; the posterior is obtained from the gray level I(x,y,t) at position (x,y) and time t and the object's two-dimensional motion velocity (u,v):
(3.1) P(u,v | I(x,y,t)) = α P(u,v) P(I(x,y,t) | u,v)
In formula (3.1), if the image is assumed to be observed from different positions and the observations are independent over time, then:
(3.2) P(u,v | I(x,y,t)) = α P(u,v) ∏_i P(I(x_i, y_i, t_i) | u,v)
where P(u,v) is the prior probability of the object's two-dimensional motion velocity, α is a proportionality factor, P(u,v | I(x,y,t)) is the posterior probability of the two-dimensional motion velocity given the gray value at position (x,y) and time t, which can be obtained from the optical flow computation, and P(I(x,y,t) | u,v) is the probability of the gray-level change while the object moves, i.e., the likelihood term. The optical flow features whose term P(I(x,y,t) | u,v) passes the threshold are retained; the threshold is arbitrary in the initial stage.
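The screening step built on formula (3.2) can be sketched as below. The prior and likelihood models are supplied by the caller because the patent leaves their functional form unspecified; each flow is represented here as a tuple (u, v, dI) of velocity components and observed gray-level change, which is also an assumption:

```python
import numpy as np

def screen_flows(flows, prior, likelihood, alpha=0.5, thresh=0.1):
    """Keep only the flow vectors whose likelihood term P(I|u,v)
    exceeds the threshold, per the motion-constraint screening step.
    `prior(u, v)` models P(u,v); `likelihood(dI, u, v)` models
    P(I|u,v); both are user-supplied assumptions."""
    kept = []
    for (u, v, dI) in flows:
        lik = likelihood(dI, u, v)
        post = alpha * prior(u, v) * lik   # formula (3.1), one observation
        if lik >= thresh:                  # threshold on the likelihood term
            kept.append((u, v, post))
    return kept
```

With a brightness-constancy-style likelihood, a flow whose gray-level change matches its velocity survives screening while an inconsistent one is discarded.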
After the motion constraint model has screened out the optical flow features that satisfy the threshold, the selected features are fed into the SVM algorithm and learned. During the iterative solution of the SVM algorithm, the threshold and the SVM model parameters are optimized continuously until the optimal threshold and SVM model parameters are obtained.
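The joint optimization of the threshold and the SVM parameters is described only as an iterative process; a plain grid search over candidate thresholds, retraining the classifier at each one, is one simple stand-in (the actual schedule the patent intends is unspecified):

```python
def select_threshold(candidates, train_fn, eval_fn):
    """Pick the screening threshold that maximizes validation score:
    for each candidate, screen the flows and train an SVM on the
    surviving features (train_fn), score it (eval_fn), and keep the
    best-scoring threshold together with its model."""
    best_thresh, best_score, best_model = None, -1.0, None
    for t in candidates:
        model = train_fn(t)        # train on flows screened at threshold t
        score = eval_fn(model, t)  # e.g. validation accuracy at t
        if score > best_score:
            best_thresh, best_score, best_model = t, score, model
    return best_thresh, best_model
```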
The method of the present invention also includes a detection process, which comprises the following steps:
(1) Input a second group of videos and preprocess the input video frame images, including calibration with a fast calibration method and denoising;
(2) Perform background modeling on the calibrated input video frame images, then extract optical flow features from the frame images with an optical flow method;
(3) Analyze and select the optical flow features in the foreground using pedestrian motion constraints and the threshold obtained in the training stage;
(4) Judge the selected optical flow features with the SVM classifier obtained in the training process; if inputs known in advance to contain abnormal behavior are judged correctly, the classifier is shown to be reliable.
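The detection process amounts to an accuracy check of the trained classifier on the second group's screened flow features with known labels. Assuming a linear SVM with trained parameters (w, b), a minimal sketch is:

```python
import numpy as np

def detect(features, labels, w, b):
    """Run the trained linear classifier on screened flow-feature
    vectors from the second video group and report the fraction of
    correct judgments against the known labels (in {-1, +1})."""
    preds = np.where(features @ w + b >= 0, 1, -1)
    return float(np.mean(preds == labels))
```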
4. Experimental Results
This experiment demonstrates the use of motion constraints for optical flow feature computation and for monitoring one class of abnormal behavior in crowded scenes. The surveillance videos have a resolution of 320×240 and an average length of 30 minutes. For each 30-minute sequence, the first 10 minutes are used for model training and the last 20 minutes for model testing.
Optical flow feature computation results under motion constraints:
Fig. 3 shows an optical flow feature computation that uses motion constraints; the left image is the result without motion constraints, the right image the result with them. As Fig. 3 shows, the motion constraints optimize the optical flow computation and eliminate noisy optical flow features, so the optical flow features on pedestrians become more prominent. The figure also shows a clear difference between the motion constraints of vehicles and those of pedestrians.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510551661.8A CN105046285B (en) | 2015-08-31 | 2015-08-31 | A kind of abnormal behaviour discrimination method based on kinematic constraint |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510551661.8A CN105046285B (en) | 2015-08-31 | 2015-08-31 | A kind of abnormal behaviour discrimination method based on kinematic constraint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105046285A CN105046285A (en) | 2015-11-11 |
CN105046285B true CN105046285B (en) | 2018-08-17 |
Family
ID=54452814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510551661.8A Expired - Fee Related CN105046285B (en) | 2015-08-31 | 2015-08-31 | A kind of abnormal behaviour discrimination method based on kinematic constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105046285B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105469054B (en) * | 2015-11-25 | 2019-05-07 | 天津光电高斯通信工程技术股份有限公司 | The model building method of normal behaviour and the detection method of abnormal behaviour |
CN110222616B (en) * | 2019-05-28 | 2021-08-31 | 浙江大华技术股份有限公司 | Pedestrian abnormal behavior detection method, image processing device and storage device |
CN111046797A (en) * | 2019-12-12 | 2020-04-21 | 天地伟业技术有限公司 | Oil pipeline warning method based on personnel and vehicle behavior analysis |
CN113762027B (en) * | 2021-03-15 | 2023-09-08 | 北京京东振世信息技术有限公司 | Abnormal behavior identification method, device, equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101271527A (en) * | 2008-02-25 | 2008-09-24 | 北京理工大学 | An Abnormal Behavior Detection Method Based on Local Statistical Feature Analysis of Sports Field |
CN102663429A (en) * | 2012-04-11 | 2012-09-12 | 上海交通大学 | Method for motion pattern classification and action recognition of moving target |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101733131B1 (en) * | 2010-12-14 | 2017-05-10 | 한국전자통신연구원 | 3D motion recognition method and apparatus |
- 2015-08-31: application CN201510551661.8A filed in China; granted as patent CN105046285B (status: Expired - Fee Related)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101271527A (en) * | 2008-02-25 | 2008-09-24 | 北京理工大学 | An Abnormal Behavior Detection Method Based on Local Statistical Feature Analysis of Sports Field |
CN102663429A (en) * | 2012-04-11 | 2012-09-12 | 上海交通大学 | Method for motion pattern classification and action recognition of moving target |
Also Published As
Publication number | Publication date |
---|---|
CN105046285A (en) | 2015-11-11 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 / PB01 | Publication | |
| | C10 / SE01 | Entry into substantive examination | |
| | GR01 | Patent grant | |
| 2020-07-09 | TR01 | Transfer of patent right | Patentee after: YANGO University, 350015 No. 99 Denglong Road, Fuzhou Economic and Technological Development Zone, Fujian Province. Patentee before: WUHAN YINGSHI INTELLIGENT SCIENCE & TECHNOLOGY Co.,Ltd., 430000 No. 31, Wuhuan Avenue, Dongxihu District, Wuhan, Hubei. |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180817 |