CN103324913B - A pedestrian event detection method based on shape features and trajectory analysis - Google Patents

A pedestrian event detection method based on shape features and trajectory analysis

Info

Publication number
CN103324913B
CN103324913B (application CN201310208226.6A)
Authority
CN
China
Prior art keywords
point
target
threshold value
pedestrian
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310208226.6A
Other languages
Chinese (zh)
Other versions
CN103324913A (en)
Inventor
宋焕生
崔华
付洋
张骁
王国锋
李东方
李建成
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHINA HIGHWAY ENGINEERING CONSULTING GROUP Co Ltd
Changan University
Original Assignee
CHINA HIGHWAY ENGINEERING CONSULTING GROUP Co Ltd
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHINA HIGHWAY ENGINEERING CONSULTING GROUP Co Ltd and Changan University
Priority to CN201310208226.6A
Publication of CN103324913A
Application granted
Publication of CN103324913B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a pedestrian event detection method based on shape features and trajectory analysis. Foreground targets are obtained by background-subtraction segmentation; the connected region of each target is labeled with a block-based method while its bounding rectangle is recorded and geometric shape features are extracted to complete target recognition. Once a pedestrian-like target is recognized, a corner point of the target is extracted and tracked by matching on its position information. Repeating this process yields the target's motion trajectory; segmental inflection points are found on the trajectory, and a linear analysis within each segment between inflection points yields the target speed. On this basis the pedestrian event state is analyzed and a traffic safety warning is issued. The detection method is suited to complex and changeable traffic scenes, can accurately recognize, track, and warn of pedestrians appearing within the surveillance video, has high practical value, and has broad application prospects.

Description

A Pedestrian Event Detection Method Based on Shape Features and Trajectory Analysis

Technical Field

The invention belongs to the field of video-based detection, and in particular relates to a pedestrian event detection method based on shape features and trajectory analysis.

Background Art

With the development of road traffic construction, the conflict between pedestrians and vehicles has become increasingly prominent, leading to frequent traffic accidents. Pedestrian violations, such as running red lights, jaywalking, and entering expressways, are an important cause of traffic accidents, so monitoring pedestrian violations has become an important part of traffic surveillance. Current traffic monitoring relies mainly on manual review of surveillance video and road patrols; this approach is inefficient, cannot provide real-time monitoring, and wastes considerable resources. In the field of intelligent transportation, traditional pedestrian detection methods include temperature detection, induction-loop detection, and acoustic detection. Temperature detection is easily disturbed by the many heat sources in a traffic scene, causing false detections. Induction loops have low sensitivity, are inconvenient to install, and are hard to maintain. Acoustic detection suffers from low accuracy because traffic scenes are noisy.

In recent years, video-based detection technology has been widely adopted thanks to its large detection range, real-time capability, and the rich auxiliary information it provides. However, traffic scenes are complex, and the background and moving targets change easily with lighting, weather, and other factors. Although many pedestrian detection methods exist, such as methods based on human-body parametric models or on local body features, and these can trigger pedestrian event alarms, they cannot meet the requirements of adapting well to environmental factors while delivering real-time, accurate traffic detection information.

Summary of the Invention

In view of the deficiencies and defects of the prior art, the object of the present invention is to provide a pedestrian event detection method based on shape features and trajectory analysis. The method can cope with the complex and changeable conditions of traffic road scenes, detect pedestrian events in the monitored area in real time and accurately, and issue warnings graded by danger level.

To accomplish the above task, the present invention adopts the following technical scheme:

A pedestrian event detection method based on shape features and trajectory analysis, carried out according to the following steps:

Step 1: establish the mapping from image pixels to actual road-surface distances, i.e. the mapping table, and divide the road image into two parts: the roadway and the shoulder;

Step 2: divide the first frame and the background image into block regions under the same block coordinate system. The background has size W*H, each block has size w*h, and the number of block regions is T = (W/w)*(H/h). Subtract the background image from the current (first) frame pixel by pixel to obtain a frame-difference image of size W*H, and divide it into T blocks of size w*h. Let N_j be the number of pixels in the j-th block that exceed the gray-level threshold A; if N_j is greater than threshold B, set every pixel in that block to 255, otherwise set them to 0, where:

W is the number of pixels in the horizontal direction of the image;

H is the number of pixels in the vertical direction of the image;

w is the pixel width of a block;

h is the pixel height of a block;

j = 1, 2, 3, ..., T;

the threshold A takes the value 30;

the threshold B ranges from 0.5 to 0.75 times the total number of pixels in a block;
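Step 2's block-wise binarization can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation; the choice of B as 0.6 times the pixels per block is an assumption within the stated 0.5 to 0.75 range:

```python
import numpy as np

def block_binarize(frame, background, w=8, h=6, A=30, B_ratio=0.6):
    """Block-based binarization of the frame-difference image (Step 2)."""
    H_img, W_img = frame.shape
    # frame-difference image: absolute pixel-wise difference from the background
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    out = np.zeros_like(frame, dtype=np.uint8)
    B = B_ratio * w * h  # threshold B: a fraction of the pixels per block
    for y in range(0, H_img, h):
        for x in range(0, W_img, w):
            block = diff[y:y + h, x:x + w]
            Nj = np.count_nonzero(block > A)  # pixels exceeding gray threshold A
            out[y:y + h, x:x + w] = 255 if Nj > B else 0
    return out
```

With the embodiment's 720×288 frames and 8×6 blocks this yields the 90×48 block grid described later in the text.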

Step 3: scan the binarized image block by block, from left to right and top to bottom, assigning the same label to the connected region of each target, and obtain the minimum bounding rectangle of that region. Compute the rectangle's height R_h, width R_w, aspect ratio R_a, and rectangularity R_j. If R_a lies within the threshold range C and R_j lies within the threshold range D, keep the target; if R_a or R_j falls outside ranges C and D, remove the target, where:

the threshold range C is 1.5 to 8;

the threshold range D is 0.5 to 1;
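The shape test of Step 3 reduces to two ratio checks on each labeled region's bounding rectangle. A sketch; the function signature (region pixel area plus rectangle width and height) is an illustrative assumption:

```python
def keep_target(region_area, rect_w, rect_h, C=(1.5, 8.0), D=(0.5, 1.0)):
    """Step 3 shape filter on a connected region's bounding rectangle."""
    Ra = rect_h / rect_w                  # aspect ratio R_a (height over width)
    Rj = region_area / (rect_w * rect_h)  # rectangularity R_j: area / rect area
    return C[0] <= Ra <= C[1] and D[0] <= Rj <= D[1]
```

The aspect-ratio range C captures the tall, narrow silhouette of a standing pedestrian, which is why wide targets such as vehicles are rejected at this stage.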

Step 4: find the best corner point for the j-th foreground target marked in the first frame. Take a pixel P_i(m,n) of the target as the center and build a window of size a*a. Compute the sums of squared gray-level differences of adjacent pixels along the horizontal, vertical, and two diagonal directions through the center pixel P_i(m,n), and take the minimum g_min of these results. If g_min is greater than threshold E, the point is a corner; if g_min is less than or equal to E, the point is not a corner and is discarded, where:

a is the pixel width of the window side;

the threshold E ranges from 180 to 220;
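Step 4 is essentially a Moravec-style corner response: the minimum over four directional sums of squared gray differences. A sketch; the window size a = 5 is an assumption, since the patent does not fix a:

```python
import numpy as np

def corner_response(img, m, n, a=5, E=200):
    """Step 4: min over four directional SSDs in an a*a window around (m, n)."""
    half = a // 2
    win = img[m - half:m + half + 1, n - half:n + half + 1].astype(np.int32)
    # sums of squared gray differences of adjacent pixels along four directions
    g = [
        np.sum((win[:, 1:] - win[:, :-1]) ** 2),     # horizontal
        np.sum((win[1:, :] - win[:-1, :]) ** 2),     # vertical
        np.sum((win[1:, 1:] - win[:-1, :-1]) ** 2),  # main diagonal
        np.sum((win[1:, :-1] - win[:-1, 1:]) ** 2),  # anti-diagonal
    ]
    g_min = min(g)
    return bool(g_min > E)  # True: (m, n) is accepted as a corner
```

Taking the minimum over all four directions rejects edge pixels, where the variation is large in some directions but near zero along the edge itself.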

Step 5: record the corner's position and its match-tracking count in a newly created empty structure; the target's match-tracking count is initialized to zero;

Step 6: for the second frame, the third frame, ..., the i-th frame, repeat Steps 2, 3, and 4 to obtain the corner position of the target in the current frame, then compare it against the corner position recorded for the previous frame:

if the absolute difference of the two positions is greater than threshold F, the target to which the corner belongs is judged to be a new target in the current frame and is handled as in Step 5;

if the absolute difference of the two positions is less than or equal to threshold F, the previous frame's corner position replaces the recorded position as the new comparison reference, and the match-tracking count is incremented by 1, where:

i is a positive integer;

the threshold F ranges from 1 to 5;
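The matching rule of Steps 5 and 6 can be sketched as an update over a small track record. The `Track` class and the per-axis position comparison are illustrative assumptions; the patent only specifies "absolute position difference" against threshold F:

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float
    y: float
    matches: int = 0  # match-tracking count, initialized to zero (Step 5)

def update_tracks(tracks, corners, F=3):
    """Step 6: match current-frame corners against previous-frame records."""
    new_tracks = []
    for (cx, cy) in corners:
        matched = False
        for t in tracks:
            # position difference at most F: same target, update the record
            if abs(cx - t.x) <= F and abs(cy - t.y) <= F:
                t.x, t.y = cx, cy
                t.matches += 1
                matched = True
                break
        if not matched:
            new_tracks.append(Track(cx, cy))  # new target, handled as in Step 5
    return tracks + new_tracks
```

Calling this once per frame accumulates `matches` until it reaches threshold G, at which point Step 7 declares the trajectory complete.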

Step 7: when the match count is greater than or equal to threshold G, tracking is complete and the tracked trajectory is Track = {P_i, P_{i+1}, ..., P_{i+n}}; go to Step 8, where:

the threshold G ranges from 50 to 70;

Step 8: look up the mapping table to obtain the actual distance corresponding to each corner in Track = {P_i, P_{i+1}, ..., P_{i+n}}, i.e. the actual motion trajectory Track' = {(s_i, 0), (s_{i+1}, 1), ..., (s_{i+n}, n)}, where:

s_{i+n} is the actual distance corresponding to pixel P_{i+n}, and n is the point index;

Step 9: from the first point P_i and last point P_{i+n} of the actual trajectory curve, form the equation of the straight line through these two points, y = kx + b. The distance from any point on the trajectory to this line is:

d_r = |k*s_{i+r} - r + b| / sqrt(k^2 + 1)

where k is the slope of the line, b is its intercept, (x, y) is any point on the line, (s_{i+r}, r) is any point on the trajectory, and d_r is the distance from the point (s_{i+r}, r) to the line;

sort all the d_r to find the maximum d_max; if it exceeds threshold H, that point is an inflection point of the target's motion trajectory. Save the point (s_{i+r}, r) and go to Step 10, where:

the threshold H takes the value 70 cm;

Step 10: the inflection point (s_{i+r}, r) splits the trajectory curve into the segment with endpoints (s_i, 0) and (s_{i+r}, r) and the segment with endpoints (s_{i+r}, r) and (s_{i+n}, n). Apply Step 9 within each of these two segments and continue finding inflection points of the sub-segments until every point on the trajectory satisfies d_r <= H, yielding a set of inflection points {(s_{i+r0}, r_0), (s_{i+r1}, r_1), ..., (s_{i+rm}, r_m)};
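Steps 9 and 10 together amount to a Douglas-Peucker-style split recursion: find the point farthest from the chord, split there if the distance exceeds H, and recurse on both halves. A sketch over (s, t) trajectory points, assuming the chord is never vertical (first and last s values differ):

```python
import math

def inflection_points(points, H=70.0):
    """Steps 9-10: recursively find trajectory inflection points (cm units)."""
    if len(points) < 3:
        return []
    (s1, t1), (s2, t2) = points[0], points[-1]
    k = (t2 - t1) / (s2 - s1)  # slope of the chord y = kx + b
    b = t1 - k * s1
    # distance d_r from each interior point (s, t) to the chord
    d = [abs(k * s - t + b) / math.sqrt(k * k + 1) for s, t in points[1:-1]]
    d_max = max(d)
    if d_max <= H:
        return []                # every point within H: no inflection here
    idx = d.index(d_max) + 1     # farthest point becomes an inflection point
    return (inflection_points(points[:idx + 1], H)
            + [points[idx]]
            + inflection_points(points[idx:], H))
```

The recursion terminates exactly when all remaining points satisfy d_r <= H, matching the stopping condition of Step 10.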

Step 11: the inflection points divide the motion trajectory into curve segments. Fit each segment with a straight line by least squares and compute the correlation coefficient r; then:

when r >= 0.5, keep the segment;

when r < 0.5, discard the segment;

finally a set of trajectory curve segments is obtained;

Step 12: from the actual distances and the time difference between the first and last points of each trajectory segment retained after Step 11, compute the target speed v within the segment:

v = |s_f - s_s| / (N*Δt)

where:

N is the number of intervals between trajectory points in the segment;

s_f is the actual distance of the segment's last point;

s_s is the actual distance of the segment's first point;

Δt is the time interval between two adjacent trajectory points;

when the speed in every segment satisfies 0.3 m/s < v < 2 m/s, the target is determined to be a pedestrian;
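Steps 11 and 12 can be sketched together: fit each segment by least squares, keep it if the correlation is strong enough, then compute the segment speed from its endpoints. Units follow the text (distances in cm, Δt = 0.04 s at 25 fps); using |r| rather than r is an assumption, since a pedestrian approaching the camera gives a strongly negative correlation, and the sketch assumes non-degenerate segments (distances and times not all equal):

```python
import math

def segment_speed(seg, dt=0.04):
    """Steps 11-12: correlation filter, then endpoint speed in m/s."""
    s = [p[0] for p in seg]  # actual distances (cm)
    t = [p[1] for p in seg]  # frame indices
    n = len(seg)
    ms, mt = sum(s) / n, sum(t) / n
    cov = sum((a - ms) * (b - mt) for a, b in zip(s, t))
    var_s = sum((a - ms) ** 2 for a in s)
    var_t = sum((b - mt) ** 2 for b in t)
    r = cov / math.sqrt(var_s * var_t)  # Pearson correlation coefficient
    if abs(r) < 0.5:
        return None                     # Step 11: discard nonlinear segment
    N = n - 1                           # number of point intervals
    v_cm_s = abs(s[-1] - s[0]) / (N * dt)  # Step 12: v = |s_f - s_s| / (N*Δt)
    return v_cm_s / 100.0               # convert cm/s to m/s

def is_pedestrian(speeds):
    """A target is a pedestrian if every segment speed is in (0.3, 2) m/s."""
    return all(v is not None and 0.3 < v < 2.0 for v in speeds)
```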

Step 13: judge the danger level of the pedestrian event from the position of the target's coordinate point P_{i+n} in the current frame:

(1) when the point P_{i+n} is inside the roadway, the danger level of the pedestrian event is high;

(2) when the point P_{i+n} is on the shoulder and the angle between the pedestrian's trajectory vector and the road's correct direction of travel is greater than 30 degrees, the danger level is medium;

(3) when the point P_{i+n} is on the shoulder and that angle is less than or equal to 30 degrees, the danger level is low.
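Step 13's decision logic is a three-way classification. The boolean region flag and the vector-angle computation below are illustrative; in the method itself the roadway/shoulder test comes from the road division of Step 1:

```python
import math

def danger_level(in_roadway, traj_vec, road_dir):
    """Step 13: grade the pedestrian event from position and motion direction."""
    if in_roadway:
        return "high"  # (1) the point lies inside the roadway
    # angle between the trajectory vector and the road's direction of travel
    dot = traj_vec[0] * road_dir[0] + traj_vec[1] * road_dir[1]
    na, nb = math.hypot(*traj_vec), math.hypot(*road_dir)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return "medium" if angle > 30 else "low"  # (2)/(3) on the shoulder
```

A pedestrian walking along the shoulder parallel to traffic thus rates "low", while one turning toward the roadway rates "medium" before actually entering it, which is what enables the early warning.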

Compared with the prior art, the pedestrian event detection method of the present invention addresses the difficulty of target segmentation caused by the long viewing distance of traffic surveillance video and the variability of pedestrian posture: combined with background subtraction, it uses the pedestrian's geometric shape features to complete target segmentation and preliminary recognition, extracts a stable, single corner point per target, and obtains the tracking trajectory with a matching algorithm. Because pedestrians move arbitrarily, the resulting trajectory is nonlinear, and a direct linear fit of the whole trajectory curve would introduce large errors. The invention therefore uses a segmented approach: inflection points are found on the target trajectory curve, dividing it into several approximately linear segments, and the segment between each pair of adjacent inflection points is fitted by least squares. This processing yields more accurate target speed information and improves the accuracy of pedestrian event detection. In addition, by computing and analyzing the pedestrian's position and direction of motion, the invention can judge the danger level of the pedestrian event and provide early warning.

Brief Description of the Drawings

Figure 1 is the first frame image.

Figure 2 shows the division of the road into regions; white indicates the roadway.

Figure 3 shows pedestrian-like targets recognized from shape features; the white boxes are the targets' bounding rectangles.

Figure 4 is a schematic of target feature points; the white dots are the target's characteristic corner points.

Figure 5 illustrates the template-matching search method. The black dot in the image containing the corner is the characteristic corner, the small square around it is the template, and the shaded part of the image to be searched is the search area. The template is slid across the search area, and the matching block with the minimum MAD value becomes the new template, with its center as the new corner.

Figure 6 is a schematic of the target tracking trajectory in the video; the white line is the motion trajectory of the pedestrian's characteristic corner in the image.

Figure 7 shows the target's actual motion trajectory, with the position at each moment drawn as a white dot; the abscissa is time in units of 0.04 s and the ordinate is actual distance in cm.

Figure 8 shows the segmental inflection points found on the actual trajectory curve, marked with gray crosses.

Figure 9 shows the pedestrian event danger-level warning, displaying the pedestrian's position and direction of motion and the judged danger level.

The content of the present invention is described in further detail below in conjunction with the drawings and an embodiment.

Detailed Description

This embodiment provides a pedestrian event detection method based on shape features and trajectory analysis: target segmentation by background subtraction, block-based connected-region labeling, target recognition from geometric shape features, corner extraction, and target trajectory tracking, followed by finding the trajectory's segmental inflection points and linear analysis to obtain the target speed. On this basis the pedestrian event state is analyzed and a traffic safety warning is issued.

It should be noted that the images processed by the method of the present invention are the first, second, third, ..., i-th (i a positive integer) frame images of the video in forward time order.

It should be noted that the mapping table in this embodiment is obtained with the camera geometric calibration method described in the invention patent "A camera geometric calibration method under a linear model" (publication No. CN102222332A).

Let the size of each video frame be W*H and the size of each block be w*h, where W is the number of pixels in the horizontal direction of a frame, H is the number of pixels in the vertical direction, w is the width of each block region, and h is the height of each block region.

The method of this embodiment is implemented by the following steps:

Step 1: establish the mapping from image pixels to actual road-surface distances, i.e. the mapping table, and divide the road image into two parts: the roadway and the shoulder;

Step 2: divide the first frame and the background image into block regions under the same block coordinate system. The background has size W*H, each block has size w*h, and the number of block regions is T = (W/w)*(H/h). Subtract the background image from the current (first) frame pixel by pixel to obtain a frame-difference image of size W*H, and divide it into T blocks of size w*h. Let N_j be the number of pixels in the j-th block that exceed the gray-level threshold A; if N_j is greater than threshold B, set every pixel in that block to 255, otherwise set them to 0, where:

W is the number of pixels in the horizontal direction of the image;

H is the number of pixels in the vertical direction of the image;

w is the pixel width of a block;

h is the pixel height of a block;

j = 1, 2, 3, ..., T;

the threshold A takes the value 30;

the threshold B ranges from 0.5 to 0.75 times the total number of pixels in a block;

Step 3: scan the binarized image block by block, from left to right and top to bottom, assigning the same label to the connected region of each target, and obtain the minimum bounding rectangle of that region. Compute the rectangle's height R_h, width R_w, aspect ratio R_a, and rectangularity R_j. If R_a lies within the threshold range C and R_j lies within the threshold range D, keep the target; if R_a or R_j falls outside ranges C and D, remove the target, where:

the threshold range C is 1.5 to 8;

the threshold range D is 0.5 to 1;

Step 4: find the best corner point for the j-th foreground target marked in the first frame. Take a pixel P_i(m,n) of the target as the center and build a window of size a*a. Compute the sums of squared gray-level differences of adjacent pixels along the horizontal, vertical, and two diagonal directions through the center pixel P_i(m,n), and take the minimum g_min of these results. If g_min is greater than threshold E, the point is a corner; if g_min is less than or equal to E, the point is not a corner and is discarded, where:

a is the pixel width of the window side;

the threshold E ranges from 180 to 220;

Step 5: record the corner's position and its match-tracking count in a newly created empty structure; the target's match-tracking count is initialized to zero;

Step 6: for the second frame, the third frame, ..., the i-th frame, repeat Steps 2, 3, and 4 to obtain the corner position of the target in the current frame, then compare it against the corner position recorded for the previous frame:

if the absolute difference of the two positions is greater than threshold F, the target to which the corner belongs is judged to be a new target in the current frame and is handled as in Step 5;

if the absolute difference of the two positions is less than or equal to threshold F, the current frame's corner position replaces the recorded position as the new comparison reference, and the match-tracking count is incremented by 1, where:

i is a positive integer;

the threshold F ranges from 1 to 5;

Step 7: when the match count is greater than or equal to threshold G, tracking is complete and the tracked trajectory is Track = {P_i, P_{i+1}, ..., P_{i+n}}; go to Step 8, where:

the threshold G ranges from 50 to 70;

Step 8: look up the mapping table to obtain the actual distance corresponding to each corner in Track = {P_i, P_{i+1}, ..., P_{i+n}}, i.e. the actual motion trajectory Track' = {(s_i, 0), (s_{i+1}, 1), ..., (s_{i+n}, n)}, where:

s_{i+n} is the actual distance corresponding to pixel P_{i+n}, and n is the point index;

Step 9: from the first point P_i and last point P_{i+n} of the actual trajectory curve, form the equation of the straight line through these two points, y = kx + b. The distance from any point on the trajectory to this line is:

d_r = |k*s_{i+r} - r + b| / sqrt(k^2 + 1)

where k is the slope of the line, b is its intercept, (x, y) is any point on the line, (s_{i+r}, r) is any point on the trajectory, and d_r is the distance from the point (s_{i+r}, r) to the line;

sort all the d_r to find the maximum d_max; if it exceeds threshold H, that point is an inflection point of the target's motion trajectory. Save the point (s_{i+r}, r) and go to Step 10, where:

the threshold H takes the value 70 cm;

Step 10: the inflection point (s_{i+r}, r) splits the trajectory curve into the segment with endpoints (s_i, 0) and (s_{i+r}, r) and the segment with endpoints (s_{i+r}, r) and (s_{i+n}, n). Apply Step 9 within each of these two segments and continue finding inflection points of the sub-segments until every point on the trajectory satisfies d_r <= H, yielding a set of inflection points {(s_{i+r0}, r_0), (s_{i+r1}, r_1), ..., (s_{i+rm}, r_m)};

步骤十一,拐点将运动轨迹分割成运动轨迹曲线段,对每一段运动轨迹曲线段使用最小二乘法进行线性拟合得到相关系数r,则有:Step eleven, the inflection point divides the motion trajectory into motion trajectory curve segments, and uses the least square method to perform linear fitting on each motion trajectory curve segment to obtain the correlation coefficient r, then:

当r≥0.5时,保留该段运动轨迹曲线段;When r ≥ 0.5, keep the segment of the trajectory curve segment;

当r<0.5时,去除该段运动轨迹曲线段;When r<0.5, remove the segment of the motion trajectory curve segment;

最终得到一组运动轨迹曲线段;Finally, a set of motion trajectory curve segments is obtained;

步骤十二,利用经过步骤十一筛选后得到的每个运动轨迹曲线段的首点和尾点的实际距离和时间差求取分段内目标速度v,表达式为:Step 12, using the actual distance and time difference between the first point and the end point of each motion trajectory curve segment obtained after step 11 screening to obtain the target velocity v in the segment, the expression is:

vv == || sthe s ff -- sthe s sthe s || N&Delta;tN&Delta;t

式中:In the formula:

N表示一段运动轨迹曲线段中轨迹点的间隔段数;N represents the interval number of track points in a section of motion track curve segment;

sf表示一段运动轨迹曲线段的尾点实际距离;s f represents the actual distance of the end point of a section of motion trajectory curve segment;

ss表示一段运动轨迹曲线段的首点实际距离;s s represents the actual distance of the first point of a section of motion trajectory curve segment;

Δt表示一段运动轨迹曲线段中相邻两个轨迹点的时间间隔;Δt represents the time interval between two adjacent trajectory points in a segment of motion trajectory;

When the speed in every segment satisfies 0.3 m/s < v < 2 m/s, the target is determined to be a pedestrian;
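A minimal sketch of the step-12 speed check (function names are illustrative; dt is the inter-frame interval, 1/25 s at the 25 fps sampling rate of the embodiment):

```python
def segment_speed(seg, dt):
    """Speed within one trajectory segment (step 12):
    v = |s_f - s_s| / (N * dt), where N is the number of
    point-to-point intervals in the segment."""
    s_s, _ = seg[0]          # first point's actual distance
    s_f, _ = seg[-1]         # last point's actual distance
    N = len(seg) - 1         # interval count between trajectory points
    return abs(s_f - s_s) / (N * dt)

def is_pedestrian(segments, dt, v_min=0.3, v_max=2.0):
    """A target is judged a pedestrian when every segment speed lies
    strictly inside (0.3 m/s, 2 m/s)."""
    return all(v_min < segment_speed(seg, dt) < v_max for seg in segments)
```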

Step 13: judge the danger level of the pedestrian event from the position of the target's coordinate point P_{i+n} in the current frame:

(1) when P_{i+n} is inside the roadway, the danger level of the pedestrian event is high;

(2) when P_{i+n} is on the road shoulder and the angle between the pedestrian's trajectory vector and the correct driving direction of the road is greater than 30 degrees, the danger level of the pedestrian event is medium;

(3) when P_{i+n} is on the road shoulder and the angle between the pedestrian's trajectory vector and the correct driving direction of the road is less than or equal to 30 degrees, the danger level of the pedestrian event is low.
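Step 13 could be sketched as below. The encoding of position and direction (a boolean for roadway vs. shoulder, 2-D vectors for the trajectory and driving directions) is an assumption made for illustration; the patent does not specify a representation:

```python
import math

def danger_level(inside_roadway, traj_vec, road_dir):
    """Danger level of a pedestrian event (step 13).

    inside_roadway: True if the last point P_{i+n} lies inside the
                    roadway, False if it lies on the shoulder.
    traj_vec, road_dir: 2-D (dx, dy) vectors for the pedestrian's
                    motion and the correct driving direction
                    (illustrative encoding, not from the patent).
    """
    if inside_roadway:
        return "high"
    # Angle between the trajectory vector and the driving direction
    dot = traj_vec[0] * road_dir[0] + traj_vec[1] * road_dir[1]
    norm = math.hypot(*traj_vec) * math.hypot(*road_dir)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return "medium" if angle > 30 else "low"
```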

Specific embodiments of the invention are given below. It should be noted that the invention is not limited to these embodiments; all equivalent transformations made on the basis of the technical solution of this application fall within the scope of protection of the invention.

Embodiment:

In this embodiment the video is sampled at 25 frames per second and each frame is 720×288 pixels. The frame-difference image is processed in blocks of 8×6 pixels, dividing the image into 90×48 block regions. For background subtraction the grayscale threshold A is 30 and the threshold B is 36; the aspect-ratio threshold C for pedestrian candidates ranges from 1.5 to 8; the rectangularity threshold D ranges from 0.5 to 1; the corner-selection threshold E is 180 to 220; the corner-matching distance threshold F is 3; the corner-matching count threshold G is 50; and the distance threshold H for finding segmentation inflection points on the actual motion trajectory is 70 cm. As shown in Figures 1 to 9, the video images are processed frame by frame from the first frame according to the method above.

As can be seen in Figure 6, the white line is the pedestrian's motion trajectory. When the video reaches frame 51, the corner-matching count reaches 50, so the trajectory line covers frames 1 to 51. The lower end of the trajectory is the feature-point position found when the pedestrian first enters the scene; the uppermost point is the feature point found at frame 50.

Figure 7 shows the actual-distance curve corresponding to the tracked trajectory. Applying the methods of steps 9 and 10 to this curve yields the inflection points shown in Figure 8, where each inflection point is marked with a cross. A least-squares fit of the trajectory within each segment then gives the pedestrian's actual speed, 0.71 m/s, so the target is judged to be a pedestrian. Based on the pedestrian's position and direction, the danger level the event poses to traffic safety is then judged, realizing traffic-safety early warning.

Claims (1)

1. A pedestrian event detection method based on shape features and trajectory analysis, characterized in that the method is carried out according to the following steps:
Step 1: establish the mapping relation from image pixels to actual road-surface distance, i.e. a mapping table, and at the same time divide the road image into two parts, roadway and road shoulder;
Step 2: divide the 1st frame image and the background image into multiple block regions under the same block coordinate system, where the size of the background is W*H, the size of each divided block is w*h, and the number of divided blocks is T = (W/w)*(H/h); subtract the background image from the current 1st frame image pixel by pixel to obtain a frame-difference image, also of size W*H, and divide the frame-difference image into T blocks of size w*h; let N_j be the number of pixels in the j-th block whose gray value is greater than threshold A; if N_j is greater than threshold B, all pixel values in this block are set to 255, otherwise to 0, wherein:
W is the number of pixels in the horizontal direction of the image;
H is the number of pixels in the vertical direction of the image;
w is the pixel width of a block;
h is the pixel height of a block;
j = 1, 2, 3, ..., T;
The value of threshold A is 30;
Threshold B ranges from 0.5 to 0.75 times the total number of pixels in a block;
Step 3: scan the binary image block by block, from left to right and top to bottom; mark the connected domain of the same target with the same label and obtain the minimum enclosing rectangle of this connected domain; compute the height R_h, width R_w, aspect ratio R_a and rectangularity R_j of this enclosing rectangle; when the value of R_a is within the range of threshold C and the value of R_j is within the range of threshold D, this target is retained; when R_a or R_j is not within the range of threshold C or D, this target is removed, wherein:
The range of threshold C is 1.5 ~ 8;
The range of threshold D is 0.5 ~ 1;
Step 4: find the best corner point for the j-th foreground target marked in the 1st frame image; taking the target pixel P_i(m, n) as the center, set up a window of size a*a; compute the sum of squares of gray differences of adjacent pixels along the horizontal, vertical and the two diagonal directions through the central pixel P_i(m, n), and take the minimum value g_min of the results; if the g_min of a point in the window is greater than threshold E, that point is a corner point; if g_min is less than or equal to threshold E, the point is not a corner point and is discarded, wherein:
a is the pixel width of the window side;
Threshold E ranges from 180 to 220;
Step 5: record the position information and the match-tracking count of the corner point in a newly created empty structure; the target match-tracking count is initialized to zero;
Step 6: for the 2nd frame, the 3rd frame, ..., the i-th frame image, repeat the methods of steps 2, 3 and 4 to obtain the corner position of the target in the current frame; then, taking the corner position recorded in the previous frame as the reference, compare it with the corner position of the target recorded in the current frame, then:
When the absolute difference of the two positions is greater than threshold F, the target at this corner in the current frame is determined to be a new target, and is processed according to the method of step 5;
When the absolute difference of the two positions is less than or equal to threshold F, the corner position information of the current frame replaces that of the previous frame as the new reference, and the match-tracking count is increased by 1, wherein:
i is a positive integer;
Threshold F ranges from 1 to 5;
Step 7: when the match count is greater than or equal to threshold G, tracking is complete, and the tracked trajectory is Track = {P_i, P_{i+1}, ..., P_{i+n}}; perform step 8, wherein:
Threshold G ranges from 50 to 70;
Step 8: look up the mapping table to obtain the actual distance corresponding to each corner point in the trajectory Track = {P_i, P_{i+1}, ..., P_{i+n}}, i.e. the actual motion trajectory Track' = {(s_i, 0), (s_{i+1}, 1), ..., (s_{i+n}, n)}, wherein:
s_{i+n} is the actual distance corresponding to pixel P_{i+n}, and n is the subscript of a point;
Step 9: from the first point P_i and the last point P_{i+n} of the actual motion trajectory curve, obtain the equation of the straight line through these two points, y = kx + b; the distance from any point on the trajectory to this line is:
d_r = |k·s_{i+r} - r + b| / √(k² + 1)
where k is the slope of the line, b is its intercept, (x, y) is any point on the line, (s_{i+r}, r) is any point on the trajectory, and d_r is the distance from (s_{i+r}, r) to the line;
Sort all d_r to find the maximum d_max; if it is greater than threshold H, the corresponding point on the actual motion trajectory curve is an inflection point of the target trajectory; save this point (s_{i+r}, r) and perform step 10, wherein:
The value of threshold H is 70 cm;
Step 10: the inflection point (s_{i+r}, r) splits the trajectory curve into two segments: one with (s_0, 0) and (s_{i+r}, r) as its first and last points, and one with (s_{i+r}, r) and (s_{i+n}, n) as its first and last points; execute step 9 on each of these two segments and continue finding the inflection points of each sub-segment, until every point on the trajectory satisfies d_r ≤ H, where H is the threshold with value 70 cm; this yields a set of inflection points {(s_{i+r0}, r_0), (s_{i+r1}, r_1), ..., (s_{i+rm}, r_m)};
Step 11: the inflection points divide the motion trajectory into curve segments; fit each segment linearly by the least-squares method to obtain the correlation coefficient r, then:
When r ≥ 0.5, the segment is kept;
When r < 0.5, the segment is discarded;
A set of trajectory curve segments is finally obtained;
Step 12: compute the target speed v within each segment from the actual distance between the first and last points of each trajectory segment retained in step 11 and the corresponding time difference:
v = |s_f - s_s| / (N·Δt)
where:
N is the number of intervals between trajectory points in the segment;
s_f is the actual distance of the last point of the segment;
s_s is the actual distance of the first point of the segment;
Δt is the time interval between two adjacent trajectory points in the segment;
When the speed in every segment satisfies 0.3 m/s < v < 2 m/s, the target is determined to be a pedestrian;
Step 13: judge the danger level of the pedestrian event from the position of the target's coordinate point P_{i+n} in the current frame:
(1) when P_{i+n} is inside the roadway, the danger level of the pedestrian event is high;
(2) when P_{i+n} is on the road shoulder and the angle between the pedestrian's trajectory vector and the correct driving direction of the road is greater than 30 degrees, the danger level of the pedestrian event is medium;
(3) when P_{i+n} is on the road shoulder and the angle between the pedestrian's trajectory vector and the correct driving direction of the road is less than or equal to 30 degrees, the danger level of the pedestrian event is low.
CN201310208226.6A 2013-05-29 2013-05-29 A pedestrian event detection method based on shape features and trajectory analysis Active CN103324913B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310208226.6A CN103324913B (en) A pedestrian event detection method based on shape features and trajectory analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310208226.6A CN103324913B (en) A pedestrian event detection method based on shape features and trajectory analysis

Publications (2)

Publication Number Publication Date
CN103324913A CN103324913A (en) 2013-09-25
CN103324913B true CN103324913B (en) 2016-03-30

Family ID: 49193644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310208226.6A Active CN103324913B (en) A pedestrian event detection method based on shape features and trajectory analysis

Country Status (1)

Country Link
CN (1) CN103324913B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469084A (en) * 2015-11-20 2016-04-06 中国科学院苏州生物医学工程技术研究所 Rapid extraction method and system for target central point
CN105741321B (en) * 2016-01-31 2018-12-11 华南理工大学 Video object movement tendency analysis method based on trace point distribution
CN105959639B (en) * 2016-06-06 2019-06-14 南京工程学院 Pedestrian monitoring method in urban street area based on ground calibration
CN106127826B (en) * 2016-06-27 2019-01-22 安徽慧视金瞳科技有限公司 It is a kind of for projecting the connected component labeling method of interactive system
CN106341263B (en) * 2016-09-05 2019-06-14 南通大学 Personnel status information detection method based on time accumulation model
CN107330919B (en) * 2017-06-27 2020-07-10 中国科学院成都生物研究所 The method of obtaining the movement track of the stamen
CN109445587A (en) * 2018-10-22 2019-03-08 北京顺源开华科技有限公司 Kinematic parameter determines method and device
CN109670419B (en) * 2018-12-04 2023-05-23 天津津航技术物理研究所 Pedestrian detection method based on perimeter security video monitoring system
CN111447562B (en) * 2020-03-02 2021-12-24 北京梧桐车联科技有限责任公司 Vehicle travel track analysis method and device and computer storage medium
CN111914699B (en) * 2020-07-20 2023-08-08 同济大学 Pedestrian positioning and track acquisition method based on video stream of camera
CN111811567B (en) * 2020-07-21 2022-03-01 北京中科五极数据科技有限公司 Equipment detection method based on curve inflection point comparison and related device
CN112016409B (en) * 2020-08-11 2024-08-02 艾普工华科技(武汉)有限公司 Deep learning-based process specification visual identification judging method and system
CN112288975A (en) * 2020-11-13 2021-01-29 珠海大横琴科技发展有限公司 Event early warning method and device
CN112613365B (en) * 2020-12-11 2024-09-17 北京影谱科技股份有限公司 Pedestrian detection and behavior analysis method and device and computing equipment
CN114758147B (en) * 2020-12-29 2024-11-22 南宁富联富桂精密工业有限公司 Human body abnormal posture recognition method, device and computer readable storage medium
CN113392723A (en) * 2021-05-25 2021-09-14 珠海市亿点科技有限公司 Unmanned aerial vehicle forced landing area screening method, device and equipment based on artificial intelligence
CN113221926B (en) * 2021-06-23 2022-08-02 华南师范大学 Line segment extraction method based on angular point optimization
CN113537035A (en) * 2021-07-12 2021-10-22 宁波溪棠信息科技有限公司 Human body target detection method, device, electronic device and storage medium
CN113705355A (en) * 2021-07-30 2021-11-26 汕头大学 Real-time detection method for abnormal behaviors
CN113869166A (en) * 2021-09-18 2021-12-31 沈阳帝信人工智能产业研究院有限公司 Substation outdoor operation monitoring method and device
CN115049654B (en) * 2022-08-15 2022-12-06 成都唐源电气股份有限公司 Method for extracting reflective light bar of steel rail
CN116958189B (en) * 2023-09-20 2023-12-12 中国科学院国家空间科学中心 Moving point target time-space domain track tracking method based on line segment correlation
CN118364418B (en) * 2024-06-20 2024-10-29 无锡中基电机制造有限公司 Intelligent corrosion resistance detection method and system for bearing pedestal

Citations (1)

Publication number Priority date Publication date Assignee Title
CN102509306A (en) * 2011-10-08 2012-06-20 西安理工大学 Specific target tracking method based on video

Non-Patent Citations (3)

Title
Fu Yang et al., "A video-based road pedestrian detection method", Video Engineering: Video Application and Engineering, 2012, vol. 36, no. 13, pp. 140-144. *
Cui Hua, "An improved scheme based on the wavelet threshold denoising method", Measurement and Control Technology, 2005, pp. 8-10. *
Guo Yongtao et al., "Background extraction algorithm in video traffic surveillance systems", Video Technology Application and Engineering, 2006, no. 5, pp. 91-93. *

Also Published As

Publication number Publication date
CN103324913A (en) 2013-09-25

Similar Documents

Publication Publication Date Title
CN103324913B (en) A pedestrian event detection method based on shape features and trajectory analysis
CN108596129B (en) Vehicle line-crossing detection method based on intelligent video analysis technology
CN102096821B (en) Number plate identification method under strong interference environment on basis of complex network theory
CN101957920B (en) License plate search method based on digital video
CN103268489B (en) Automotive number plate recognition methods based on sliding window search
CN100545867C (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN105551264A (en) Speed detection method based on license plate characteristic matching
CN103150549B (en) A kind of road tunnel fire detection method based on the early stage motion feature of smog
CN102324183B (en) Method for detecting and shooting vehicle based on composite virtual coil
CN102880863B (en) Method for positioning license number and face of driver on basis of deformable part model
CN107491753A (en) A kind of parking offense detection method based on background modeling
CN104063882B (en) Vehicle video speed measuring method based on binocular camera
CN111860509B (en) A two-stage method for accurate extraction of unconstrained license plate regions from coarse to fine
CN104658011A (en) Intelligent transportation moving object detection tracking method
CN104134079A (en) Vehicle license plate recognition method based on extremal regions and extreme learning machine
CN101470807A (en) Accurate detection method for highroad lane marker line
CN103902981A (en) Method and system for identifying license plate characters based on character fusion features
CN108388871B (en) Vehicle detection method based on vehicle body regression
CN103150550B (en) A road pedestrian event detection method based on trajectory analysis
CN110321855A (en) A kind of greasy weather detection prior-warning device
CN104143077B (en) Pedestrian target search method and system based on image
CN107862341A (en) A kind of vehicle checking method
CN103413439A (en) Method for sorting passenger vehicles and goods vehicles based on videos
CN107578048A (en) A vehicle detection method in far-sighted scenes based on rough classification of vehicle types
CN105243354A (en) Vehicle detection method based on target feature points

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant