CN118155293B - A method and system for identifying dangerous pedestrian crossing behaviors based on small target tracking - Google Patents
A method and system for identifying dangerous pedestrian crossing behaviors based on small target tracking
- Publication number
- CN118155293B (application CN202410567563.2A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- pedestrians
- traffic participants
- crossing
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000006399 behavior Effects 0.000 title claims abstract description 61
- 238000000034 method Methods 0.000 title claims abstract description 57
- 238000001514 detection method Methods 0.000 claims abstract description 25
- 230000005021 gait Effects 0.000 claims abstract description 24
- 230000008859 change Effects 0.000 claims abstract description 14
- 230000001133 acceleration Effects 0.000 claims abstract description 9
- 238000001228 spectrum Methods 0.000 claims abstract description 8
- 238000011156 evaluation Methods 0.000 claims description 22
- 238000004590 computer program Methods 0.000 claims description 18
- 238000004422 calculation algorithm Methods 0.000 claims description 15
- 239000011159 matrix material Substances 0.000 claims description 15
- 230000008569 process Effects 0.000 claims description 12
- 238000004364 calculation method Methods 0.000 claims description 9
- 238000012545 processing Methods 0.000 claims description 8
- 230000009466 transformation Effects 0.000 claims description 7
- 238000013507 mapping Methods 0.000 claims description 5
- 238000013528 artificial neural network Methods 0.000 claims description 4
- 238000001914 filtration Methods 0.000 claims description 4
- 230000002123 temporal effect Effects 0.000 claims description 4
- 230000015572 biosynthetic process Effects 0.000 claims description 3
- 238000007620 mathematical function Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 3
- 230000003595 spectral effect Effects 0.000 claims description 3
- 238000003786 synthesis reaction Methods 0.000 claims description 3
- 238000013519 translation Methods 0.000 claims description 3
- 230000003321 amplification Effects 0.000 claims description 2
- 238000007619 statistical method Methods 0.000 claims description 2
- 238000010586 diagram Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 4
- 230000003993 interaction Effects 0.000 description 2
- 230000002452 interceptive effect Effects 0.000 description 2
- 230000000737 periodic effect Effects 0.000 description 2
- 238000006467 substitution reaction Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 230000014759 maintenance of location Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
本发明公开了一种基于小目标追踪的行人过街危险行为判别方法及系统,方法包括:步骤1、在指定过街区域利用摄像机采集俯视视角下的视频流;步骤2、逐帧读取采集得到的视频图像,对图像进行逐帧目标检测,提取行人目标与其他交通参与者目标轨迹;步骤3、利用步骤2中得到的行人轨迹,对时间一阶差分得到行人瞬时速度,对瞬时速度曲线进行时间一阶差分得到加速度变化曲线,利用功率谱密度计算行人行走步频;步骤4、计算行人与其他交通参与者冲突指标;步骤5、通过步骤3计算得出的行人过街步态参数和步骤4计算得出的行人与其他交通参与者的冲突指标,判别行人行为是否为危险过街行为。本发明的方法能够提升行人危险过街行为的判别精度。
The present invention discloses a method and system for distinguishing dangerous behaviors of pedestrians crossing the street based on small target tracking, and the method comprises: step 1, using a camera to collect a video stream under a bird's-eye view in a designated crossing area; step 2, reading the collected video images frame by frame, performing frame-by-frame target detection on the images, and extracting the target trajectories of pedestrians and other traffic participants; step 3, using the pedestrian trajectory obtained in step 2, performing first-order time difference to obtain the instantaneous speed of the pedestrian, performing first-order time difference on the instantaneous speed curve to obtain the acceleration change curve, and using the power spectrum density to calculate the walking frequency of the pedestrian; step 4, calculating the conflict index between the pedestrian and other traffic participants; step 5, judging whether the pedestrian behavior is dangerous crossing behavior by using the pedestrian crossing gait parameters calculated in step 3 and the conflict index between the pedestrian and other traffic participants calculated in step 4. The method of the present invention can improve the discrimination accuracy of dangerous pedestrian crossing behavior.
Description
技术领域Technical Field
本发明涉及一种行人过街危险行为判别方法,属于图像识别与交通安全管理技术领域。The invention relates to a method for distinguishing dangerous behaviors of pedestrians crossing the street, and belongs to the technical field of image recognition and traffic safety management.
背景技术 Background Art
步行是居民短距离出行的重要方式,在城市交通中占据重要地位。然而近年来行人安全事故造成的人身安全与财产损失愈发严重。行人的移动具有较强的随机性与明显地周期性,行人步频、步幅、步速等步行特征参数对于分析行人行为具有重要作用,交通领域对行人安全方面的研究愈发重视,然而目前的行人危险行为判别方法准确率和可靠性较低,现有的方法包括基于行人所处位置和标定危险区域重合程度识别、基于车辆行驶图像的识别以及基于单张图像画面行人危险特征的识别。Walking is an important way for residents to travel short distances and occupies an important position in urban transportation. However, in recent years, the personal safety and property losses caused by pedestrian safety accidents have become increasingly serious. Pedestrian movement is highly random and obviously periodic. Pedestrian walking characteristic parameters such as cadence, stride length, and speed play an important role in analyzing pedestrian behavior. The transportation field pays more and more attention to the research on pedestrian safety. However, the current pedestrian dangerous behavior identification methods have low accuracy and reliability. Existing methods include recognition based on the degree of overlap between the pedestrian's position and the calibrated dangerous area, recognition based on vehicle driving images, and recognition of pedestrian dangerous features based on a single image.
基于所处位置和标定危险区域的判定虽然在一定程度上能够反应行人危险程度,但是判别过于宽泛,对实际没有危险的行人容易产生误判,而对非危险区域内遭遇危险的行人也无法准确识别。基于车辆行驶图像的识别方法适用于智慧驾驶领域,对区域交通行为分析和重点区域监管无法起到有效作用。基于单张图像画面行人危险特征识别忽略了危险行为的时空关系以及与其他道路参与者间的交互作用,准确度和精确性不高。Although the judgment based on the location and the marked danger zone can reflect the degree of danger of pedestrians to a certain extent, the judgment is too broad and it is easy to misjudge pedestrians who are not actually dangerous. It is also impossible to accurately identify pedestrians who are in danger in non-dangerous areas. The recognition method based on vehicle driving images is suitable for the field of intelligent driving, but it cannot play an effective role in regional traffic behavior analysis and key area supervision. The recognition of pedestrian danger features based on a single image ignores the spatiotemporal relationship of dangerous behaviors and the interaction with other road participants, and its accuracy and precision are not high.
发明内容Summary of the invention
本发明所要解决的技术问题是克服现有技术的不足,提供一种行人过街危险行为判别方法,能够准确地评估是否发生了行人过街危险行为。The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and provide a method for distinguishing dangerous pedestrian crossing behaviors, which can accurately assess whether dangerous pedestrian crossing behaviors have occurred.
为解决上述技术问题,本发明采用以下技术方案:In order to solve the above technical problems, the present invention adopts the following technical solutions:
一种基于小目标追踪的行人过街危险行为判别方法,包括以下步骤:A method for identifying dangerous pedestrian crossing behaviors based on small target tracking includes the following steps:
步骤1、在指定过街区域利用摄像机采集俯视视角下的指定过街区域视频流;Step 1: Using a camera to collect a video stream of the designated crossing area from a bird's-eye view in the designated crossing area;
步骤2、逐帧读取步骤1中采集得到的视频图像,对图像进行逐帧目标检测,提取行人目标与其他交通参与者目标轨迹,包括以下步骤:Step 2: Read the video images acquired in step 1 frame by frame, perform frame-by-frame target detection on the images, and extract the target trajectories of pedestrians and other traffic participants, including the following steps:
步骤21)求取目标真实框面积与小目标面积参数的比值,通过所述比值计算目标锚框匹配交并比IoU阈值以及针对小目标物体的损失扩张系数;Step 21) Calculate the ratio of the target real frame area to the small target area parameter, and calculate the target anchor frame matching intersection over union ratio (IoU) threshold and the loss expansion coefficient for the small target object through the ratio;
步骤22)逐帧获取视频图像中的行人和其他各类交通参与者位置后,利用DeepSORT目标关联跟踪算法对前后帧的目标进行关联,利用EfficientDet卷积目标检测网络提取视频中行人与其他各类交通参与者的运动轨迹;所述其他各类交通参与者包括汽车、卡车、自行车、摩托车;Step 22) After obtaining the positions of pedestrians and other types of traffic participants in the video image frame by frame, the DeepSORT target association tracking algorithm is used to associate the targets of the previous and next frames, and the EfficientDet convolutional target detection network is used to extract the motion trajectories of pedestrians and other types of traffic participants in the video; the other types of traffic participants include cars, trucks, bicycles, and motorcycles;
步骤23)选用一个已知大小和形状的棋盘格标定板,固定摄像机,拍摄若干张包含标定板的图像,使用OpenCV图像处理算法提取出每张图像标定板上的角点坐标,将角点坐标与真实世界坐标对应,建立像素坐标与真实坐标之间的映射关系;使用张正友相机标定算法计算相机的内外参数,得到像素坐标系与真实坐标系间的转换矩阵;Step 23) Select a checkerboard calibration plate of known size and shape, fix the camera, take several images containing the calibration plate, use OpenCV image processing algorithm to extract the coordinates of the corner points on the calibration plate in each image, correspond the corner point coordinates to the real world coordinates, and establish a mapping relationship between pixel coordinates and real coordinates; use Zhang Zhengyou camera calibration algorithm to calculate the internal and external parameters of the camera, and obtain the transformation matrix between the pixel coordinate system and the real coordinate system;
利用坐标系间的对应关系,将行人和其他各类交通参与者的轨迹点坐标转换为世界坐标系下的轨迹点坐标,按每秒n个轨迹点对车辆行驶轨迹进行均匀采样,并对每辆车的轨迹位置序列坐标进行卡尔曼滤波去噪平滑,得到真实世界下的行人和其他各类交通参与者平滑轨迹;By using the correspondence between coordinate systems, the trajectory coordinates of pedestrians and other types of traffic participants are converted into trajectory coordinates in the world coordinate system. The vehicle trajectories are uniformly sampled at n trajectory points per second, and the trajectory position sequence coordinates of each vehicle are denoised and smoothed by Kalman filtering to obtain the smooth trajectories of pedestrians and other types of traffic participants in the real world.
步骤3、利用步骤2中得到的行人轨迹,对时间一阶差分得到行人瞬时速度,对瞬时速度曲线进行时间一阶差分得到加速度变化曲线,利用功率谱密度计算行人行走步频;Step 3: Using the pedestrian trajectory obtained in step 2, the instantaneous speed of the pedestrian is obtained by first-order time difference, the acceleration change curve is obtained by first-order time difference of the instantaneous speed curve, and the walking frequency of the pedestrian is calculated by using the power spectrum density;
步骤4、计算行人与其他交通参与者冲突指标TTC;Step 4: Calculate the conflict index TTC between pedestrians and other traffic participants;
步骤5、通过步骤3计算得出的行人过街步态参数和步骤4计算得出的行人与其他交通参与者的冲突指标TTC,判别行人行为是否为危险过街行为。Step 5: Determine whether the pedestrian's behavior is dangerous crossing behavior by using the pedestrian crossing gait parameters calculated in step 3 and the conflict index TTC between pedestrians and other traffic participants calculated in step 4.
前述的基于小目标追踪的行人过街危险行为判别方法,在步骤2中,使用预训练后的带有特征金字塔的EfficientDet卷积目标检测网络对图像进行逐帧目标检测,提取行人目标与其他交通参与者目标轨迹。In the aforementioned pedestrian crossing dangerous behavior identification method based on small target tracking, in step 2, the pre-trained EfficientDet convolutional target detection network with feature pyramid is used to perform frame-by-frame target detection on the image to extract the target trajectories of pedestrians and other traffic participants.
前述的基于小目标追踪的行人过街危险行为判别方法,在步骤21)中,具体方法为:In the aforementioned pedestrian crossing dangerous behavior identification method based on small target tracking, in step 21), the specific method is:
将目标真实框面积与小目标面积参数的比值投影至给定的基础锚框阈值与小目标锚框阈值构成的区间内,计算目标锚框匹配交并比IoU阈值,将目标真实框面积与小目标面积参数的比值倒数投影至0-1区间后乘以一个系数后作为对小目标物体损失扩张系数:Project the ratio of the target real box area to the small target area parameter into the interval formed by the given basic anchor box threshold and the small target anchor box threshold, calculate the target anchor box matching intersection over union ratio IoU threshold, project the inverse of the ratio of the target real box area to the small target area parameter into the interval of 0-1 and multiply it by a coefficient as the loss expansion coefficient for the small target object:
(The two formulas, one for the matching IoU threshold and one for the loss expansion coefficient, appear as images in the original publication and are not reproduced here.)

The quantities they involve are: the computed anchor-matching IoU threshold; the IoU threshold used for matching anchors of normal-sized objects; the preset lower bound of the IoU threshold for small-target anchors; the ground-truth box area; the preset small-target box area parameter; the magnification parameter a; the loss expansion coefficient; the expansion parameter b; the amplification parameter c; and sigmoid, a mathematical function that maps any real number into the interval (0, 1).
前述的基于小目标追踪的行人过街危险行为判别方法,在步骤23)中,像素坐标系与真实坐标系间具有如下关系:In the aforementioned pedestrian crossing dangerous behavior identification method based on small target tracking, in step 23), the relationship between the pixel coordinate system and the real coordinate system is as follows:
$$
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & \gamma & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & T \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$

where s is the scale factor; u and v are the horizontal and vertical coordinates in the pixel coordinate system; X_w, Y_w and Z_w are the horizontal abscissa, horizontal ordinate and vertical coordinate in the world coordinate system; f_x and f_y are the focal lengths of the camera in the x and y directions; γ is the non-orthogonality (skew) factor between pixel axes; (u_0, v_0) are the coordinates of the centre of the camera's sensor plane in the pixel coordinate system; R is the rotation matrix and T is the translation vector; 0^T is a 1×3 all-zero row, with the superscript T denoting transpose.
前述的基于小目标追踪的行人过街危险行为判别方法,在步骤3中,计算行人行走步频的公式如下:In the aforementioned pedestrian crossing dangerous behavior identification method based on small target tracking, in step 3, the formula for calculating the pedestrian's walking frequency is as follows:
$$
\hat{P}(f)=\frac{1}{f_s N}\left|\sum_{n=0}^{N-1}\lVert v_n\rVert\, e^{-\mathrm{j}\,2\pi f n/f_s}\right|^{2}
$$

where \hat{P}(f) is the estimated power spectral density at frequency f; f_s is the sampling frequency; n indexes the discrete time series of the pedestrian's walking process; v_n is the pedestrian's instantaneous velocity, with the double vertical bars denoting the magnitude of the vector, i.e. the instantaneous speed; and N is the length of the signal, i.e. the length of the pedestrian walking time series.
The frequency with the maximum power density is taken to represent the cadence, and the pedestrian's stride length is obtained by dividing the walking speed by the cadence; every t1 seconds, the pedestrian gait parameters within t2 seconds, together with the corresponding pedestrian trajectory and traffic-participant trajectories, are extracted for the calculation in step 4.
In the aforementioned method for identifying dangerous pedestrian crossing behaviors based on small target tracking, in step 4, within the extracted t2-second trajectory segment the conflict indicator TTC is calculated between the pedestrian and the N nearest other traffic participants, and the smallest of these values is taken as the conflict indicator TTC used in step 5; the conflict indicator TTC is calculated as follows:
$$
\mathrm{TTC}_{ij}=\frac{D_{ij}}{\Delta v_{ij}}
$$

where TTC_ij is the time to collision between pedestrian i and traffic participant j; D_ij is the relative distance between the pedestrian and the other traffic participant, obtained from the real-world coordinates of the trajectory points; and Δv_ij is the projection of their relative velocity onto the line connecting their positions, obtained by taking the first-order time difference of the two trajectories to get the velocities of the pedestrian and the other traffic participant, composing them vectorially, and projecting the result onto the line connecting the pedestrian and the other traffic participant.
In the aforementioned method for identifying dangerous pedestrian crossing behaviors based on small target tracking, in step 5, the pedestrian crossing gait parameters within t2 seconds are input into a pre-trained Transformer temporal attention neural network classifier to determine whether the change pattern of the gait parameters conforms to the characteristics of dangerous behavior, forming evaluation indicator one, R1; in addition, a statistical analysis is performed on the gait parameters to determine whether the cadence, stride length or walking speed exceeds a threshold, or changes by more than a threshold within a set time, forming evaluation indicator two, R2; evaluation indicator one R1, evaluation indicator two R2 and the conflict indicator TTC are used to construct a judgment table, through which dangerous pedestrian crossing behavior is identified.
前述的基于小目标追踪的行人过街危险行为判别方法,在步骤5中,当判定危险行为发生时,截取行为发生前后设定时间段的视频留作判据,并输出对应的指标,包括评估指标一R1、评估指标二R2和冲突指标TTC。In the aforementioned method for identifying dangerous pedestrian crossing behaviors based on small target tracking, in step 5, when it is determined that a dangerous behavior occurs, the video of a set time period before and after the behavior occurs is captured as a criterion, and corresponding indicators are output, including evaluation indicator 1 R1, evaluation indicator 2 R2 and conflict indicator TTC.
一种计算机系统,包括存储器、处理器及存储在存储器上的计算机程序,其特征在于,所述处理器执行所述计算机程序以实现上述方法的步骤。A computer system comprises a memory, a processor and a computer program stored in the memory, wherein the processor executes the computer program to implement the steps of the above method.
本发明达到的有益效果:本发明的基于小目标追踪的行人过街危险行为判别方法,通过卷积目标检测网络进行目标检测,利用多尺度学习方法将FPN较浅层的特征赋予更高的权重以增强对小目标特征的保留,并辅以更小的锚框和更高的小目标样本损失权重以提升对行人小目标的检测,能够有效识别画面中的行人,通过跟踪行人与其他交通参与者提取轨迹。结合行人目标的加速度变化规律提取行人移动的步态特征,包括了步频步幅和步速,同时利用行人和其他交通参与者之间的轨迹交互关系计算交通冲突指标。结合行人本身行为以及与交通参与者的交互行为判别行为的危险性,能够提升行人危险过街行为的判别精度,在交通安全管理控制领域具有实际工程价值。The beneficial effects achieved by the present invention are as follows: The method for distinguishing dangerous pedestrian crossing behaviors based on small target tracking of the present invention performs target detection through a convolutional target detection network, and uses a multi-scale learning method to assign higher weights to the shallower features of FPN to enhance the retention of small target features, and is supplemented by a smaller anchor frame and a higher small target sample loss weight to improve the detection of small pedestrian targets. It can effectively identify pedestrians in the picture, and extract trajectories by tracking pedestrians and other traffic participants. The gait characteristics of pedestrian movement are extracted in combination with the acceleration change law of pedestrian targets, including the step frequency, stride length and speed, and the traffic conflict index is calculated using the trajectory interaction relationship between pedestrians and other traffic participants. The dangerousness of the behavior can be judged by combining the pedestrian's own behavior and the interactive behavior with traffic participants, which can improve the accuracy of distinguishing dangerous pedestrian crossing behaviors and has practical engineering value in the field of traffic safety management and control.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1是本发明实施例1中基于小目标追踪的行人过街危险行为判别方法流程图。FIG1 is a flow chart of a method for identifying dangerous pedestrian crossing behaviors based on small target tracking in Embodiment 1 of the present invention.
具体实施方式 Detailed Description of the Embodiments
下面结合附图和实施例对本发明的技术方案作进一步的说明。The technical solution of the present invention is further described below in conjunction with the accompanying drawings and embodiments.
实施例1Example 1
如图1所示,本实施例提供一种基于小目标追踪的行人过街危险行为判别方法,包括以下步骤:As shown in FIG1 , this embodiment provides a method for identifying dangerous pedestrian crossing behaviors based on small target tracking, comprising the following steps:
步骤1、在指定过街区域利用摄像机采集俯视视角下的指定过街区域视频流,将采集得到的视频流传入以下步骤执行操作;Step 1: Use a camera to collect a video stream of the designated crossing area from a bird's-eye view in the designated crossing area, and input the collected video stream into the following steps to perform operations;
摄像机采集得到的视频格式与帧数如下表1所示;The video format and frame number captured by the camera are shown in Table 1 below;
表1 摄像机视频流采集表Table 1 Camera video stream collection table
利用路侧摄像头可以持续获取视频,且高度适中,可以稳定拍摄行人,易于在现有的网络权重上进行预训练。Roadside cameras can be used to continuously acquire video at a moderate height, so pedestrians can be captured stably, making it easy to pre-train on existing network weights.
步骤2、逐帧读取步骤1中采集得到的视频图像,使用预训练后的带有特征金字塔(FPN)的EfficientDet卷积目标检测网络对图像进行逐帧目标检测,提取行人目标与其他交通参与者目标轨迹。Step 2: Read the video images collected in step 1 frame by frame, use the pre-trained EfficientDet convolutional target detection network with feature pyramid (FPN) to perform frame-by-frame target detection on the image, and extract the trajectories of pedestrians and other traffic participants.
在预训练的数据集中,对俯视视角的行人标注大量锚框样本,同时为了保证对于行人小目标的识别率,在EfficientDet卷积目标检测网络设计中对FPN浅层特征的融合权重进行限制,使小目标语义特征不会过度丢失,同时对网络检测锚框进行特别设计,用更小的锚框配合针对小目标对象更低的锚框匹配交并比(IoU)阈值,并对小目标设置更高的损失,进一步保证对小目标的检测准确度,包括以下步骤:In the pre-trained dataset, a large number of anchor box samples are annotated for pedestrians in a bird's-eye view. At the same time, in order to ensure the recognition rate of small pedestrian targets, the fusion weight of FPN shallow features is restricted in the design of the EfficientDet convolutional target detection network, so that the semantic features of small targets will not be lost excessively. At the same time, the network detection anchor box is specially designed, using a smaller anchor box with a lower anchor box matching intersection over union (IoU) threshold for small target objects, and setting a higher loss for small targets to further ensure the detection accuracy of small targets. The following steps are included:
步骤21)求取目标真实框面积与小目标面积参数的比值,通过所述比值计算目标锚框匹配交并比IoU阈值以及针对小目标物体的损失扩张系数,具体方法为:Step 21) Calculate the ratio of the target real frame area to the small target area parameter, and calculate the target anchor frame matching intersection over union ratio (IoU) threshold and the loss expansion coefficient for the small target object by the ratio. The specific method is as follows:
将目标真实框面积与小目标面积参数的比值投影至给定的基础锚框阈值与小目标锚框阈值构成的区间内,计算目标锚框匹配交并比IoU阈值,将目标真实框面积与小目标面积参数的比值倒数投影至0-1区间后乘以一个系数后作为对小目标物体损失扩张系数:Project the ratio of the target real box area to the small target area parameter into the interval formed by the given basic anchor box threshold and the small target anchor box threshold, calculate the target anchor box matching intersection over union ratio IoU threshold, project the inverse of the ratio of the target real box area to the small target area parameter into the interval of 0-1 and multiply it by a coefficient as the loss expansion coefficient for the small target object:
(The two formulas, one for the matching IoU threshold and one for the loss expansion coefficient, appear as images in the original publication and are not reproduced here.) The quantities they involve are: the computed anchor-matching IoU threshold; the IoU threshold used for matching anchors of normal-sized objects; the preset lower bound of the IoU threshold for small-target anchors; the ground-truth box area; the preset small-target box area parameter; the magnification parameter a; the loss expansion coefficient; the expansion parameter b; the amplification parameter c; and sigmoid, a mathematical function that maps any real number into (0, 1), whose graph is an S-shaped curve;
EfficientDet卷积目标检测网络的训练流程中有两个关键步骤,分别是检测目标匹配和网络权重更新。There are two key steps in the training process of the EfficientDet convolutional target detection network, namely detection target matching and network weight updating.
其中检测目标匹配过程中有一个关键参数即为IoU阈值(交并比阈值),只有当检测框和目标框的交并比大于阈值时才认为匹配成功,针对小目标使用相对正常目标更小的交并比阈值,使网络能够更好的匹配到小目标。A key parameter in the target matching process is the IoU threshold (intersection over union threshold). The match is considered successful only when the IoU of the detection box and the target box is greater than the threshold. For small targets, a smaller IoU threshold is used than for normal targets, so that the network can better match small targets.
其中网络权重更新的依据为损失函数,对小目标赋予更高的损失,使网络在检测小目标失败后得到更多的惩罚,尽可能的检测出小目标,以对网络参数进行优化。The basis for updating the network weights is the loss function, which assigns higher losses to small targets so that the network receives more penalties after failing to detect small targets. Small targets are detected as much as possible to optimize the network parameters.
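As an illustration of how such an area-dependent matching threshold and loss weight could be realized, the following sketch interpolates the IoU threshold between the small-target lower bound and the normal-object threshold according to the box-area ratio, and scales the loss with a sigmoid of the inverse area ratio. The exact functional forms, the clamping, and the parameter values a, b and c are assumptions made only for illustration, since the patent's own formulas are published as images and are not reproduced in this text.

```python
import numpy as np

def small_target_iou_threshold(gt_area, small_area, iou_norm=0.5, iou_min=0.3, a=1.0):
    """Interpolate the anchor-matching IoU threshold between the small-target
    lower bound and the normal-object threshold according to the area ratio."""
    ratio = np.clip(a * gt_area / small_area, 0.0, 1.0)   # assumed clamping to [0, 1]
    return iou_min + (iou_norm - iou_min) * ratio

def small_target_loss_weight(gt_area, small_area, b=1.0, c=2.0):
    """Map the inverse area ratio into (0, 1) with a sigmoid and scale it,
    so that smaller ground-truth boxes receive a larger loss expansion coefficient."""
    inv_ratio = small_area / gt_area
    return c / (1.0 + np.exp(-b * inv_ratio))             # c * sigmoid(b * inv_ratio)

# Example: a 24x24-pixel pedestrian box against a 32x32 small-target reference area
thr = small_target_iou_threshold(gt_area=24 * 24, small_area=32 * 32)
w = small_target_loss_weight(gt_area=24 * 24, small_area=32 * 32)
print(f"matching IoU threshold: {thr:.3f}, loss weight: {w:.3f}")
```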
步骤22)逐帧获取视频图像中的行人和其他各类交通参与者位置后,利用DeepSORT目标关联跟踪算法对前后帧的目标进行关联,利用EfficientDet卷积目标检测网络提取视频中行人与其他各类交通参与者的运动轨迹;所述其他各类交通参与者包括汽车、卡车、自行车、摩托车等;Step 22) After obtaining the positions of pedestrians and other types of traffic participants in the video image frame by frame, the DeepSORT target association tracking algorithm is used to associate the targets of the previous and next frames, and the EfficientDet convolutional target detection network is used to extract the motion trajectories of pedestrians and other types of traffic participants in the video; the other types of traffic participants include cars, trucks, bicycles, motorcycles, etc.;
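A minimal sketch of the per-frame detection and association loop is given below. The `detector` and `tracker` objects are placeholder interfaces standing in for an EfficientDet model and a DeepSORT tracker; their call signatures are assumptions rather than the API of any particular library.

```python
from collections import defaultdict
import cv2  # OpenCV, used here only to read the video stream

def extract_trajectories(video_path, detector, tracker):
    """Run per-frame detection and frame-to-frame association, accumulating
    one pixel-coordinate trajectory per tracked ID.

    `detector(frame)` is assumed to return (boxes, classes, scores), and
    `tracker.update(boxes, classes, scores, frame)` to return a list of
    (track_id, class_name, cx, cy) tuples.
    """
    trajectories = defaultdict(list)  # track_id -> [(frame_idx, class, cx, cy), ...]
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, classes, scores = detector(frame)
        for track_id, cls, cx, cy in tracker.update(boxes, classes, scores, frame):
            trajectories[track_id].append((frame_idx, cls, cx, cy))
        frame_idx += 1
    cap.release()
    return trajectories
```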
步骤23)选用一个已知大小和形状的棋盘格标定板,固定摄像机,拍摄若干张包含标定板的图像,使用OpenCV图像处理算法提取出每张图像标定板上的角点坐标,将角点坐标与真实世界坐标对应,建立像素坐标与真实坐标之间的映射关系;使用张正友相机标定算法计算相机的内外参数,得到像素坐标系与真实坐标系间的转换矩阵,像素坐标系与真实坐标系间具有如下关系:Step 23) Select a checkerboard calibration plate of known size and shape, fix the camera, take several images containing the calibration plate, use OpenCV image processing algorithm to extract the coordinates of the corner points on the calibration plate of each image, correspond the corner point coordinates to the real world coordinates, and establish the mapping relationship between pixel coordinates and real coordinates; use Zhang Zhengyou camera calibration algorithm to calculate the internal and external parameters of the camera, and obtain the transformation matrix between the pixel coordinate system and the real coordinate system. The pixel coordinate system and the real coordinate system have the following relationship:
$$
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & \gamma & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & T \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$

where s is the scale factor; u and v are the horizontal and vertical coordinates in the pixel coordinate system; X_w, Y_w and Z_w are the horizontal abscissa, horizontal ordinate and vertical coordinate in the world coordinate system; f_x and f_y are the focal lengths of the camera in the x and y directions; γ is the non-orthogonality (skew) factor between pixel axes; (u_0, v_0) are the coordinates of the centre of the camera's sensor plane in the pixel coordinate system; R is the rotation matrix and T is the translation vector; 0^T is a 1×3 all-zero row, with the superscript T denoting transpose.
For a fixed camera, the calibration yields the transformation matrices, namely the camera's intrinsic parameter matrix M1 and extrinsic parameter matrix M2:

$$
M_1=\begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix},\qquad
M_2=\begin{bmatrix} R & T \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix}
$$
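A sketch of obtaining these matrices with OpenCV's implementation of Zhang's calibration method is shown below. The board layout, square size and image folder are assumptions; `cv2.Rodrigues` is used only to expand the first view's rotation vector into the rotation matrix R.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)     # inner-corner grid of the chessboard (assumed layout)
square = 0.05        # size of one board square in metres (assumed)

# Template of the board corners in the board's own plane (Z = 0)
board = np.zeros((pattern[0] * pattern[1], 3), np.float32)
board[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calibration/*.jpg"):          # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(board)
        img_pts.append(corners)

# K corresponds to the intrinsic matrix M1; each rvec/tvec pair gives one extrinsic pose
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
R, _ = cv2.Rodrigues(rvecs[0])                       # rotation matrix of the first view
T = tvecs[0]                                         # translation vector of the first view
```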
利用坐标系间的对应关系,将行人和其他各类交通参与者的轨迹点坐标转换为世界坐标系下的轨迹点坐标,按每秒n个轨迹点对车辆行驶轨迹进行均匀采样,并对每辆车的轨迹位置序列坐标进行卡尔曼滤波去噪平滑,得到真实世界下的行人和其他各类交通参与者平滑轨迹。By utilizing the correspondence between coordinate systems, the trajectory point coordinates of pedestrians and other types of traffic participants are converted into trajectory point coordinates in the world coordinate system. The vehicle trajectories are uniformly sampled at n trajectory points per second, and the trajectory position sequence coordinates of each vehicle are denoised and smoothed by Kalman filtering to obtain the smooth trajectories of pedestrians and other types of traffic participants in the real world.
如果不使用转换矩阵建立起像素坐标系与真实坐标系的映射关系,运行跟踪算法后得到的坐标将是像素坐标,与其现实意义存在差异,对于后续运算精度存在一定影响,因此在提取出交通参与者运动轨迹后通过转换矩阵将其轨迹投影到真实世界坐标系中,保证后续运算精度。If the transformation matrix is not used to establish the mapping relationship between the pixel coordinate system and the real coordinate system, the coordinates obtained after running the tracking algorithm will be pixel coordinates, which are different from their actual meanings and have a certain impact on the accuracy of subsequent calculations. Therefore, after extracting the motion trajectory of the traffic participant, its trajectory is projected into the real-world coordinate system through the transformation matrix to ensure the accuracy of subsequent calculations.
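One convenient way to realize the pixel-to-world mapping for trajectory points on the (assumed planar) crossing surface is a homography fitted to a few surveyed reference points. This is a simplification of the full calibration relation above; the reference coordinates, the resampling rate and the omission of the Kalman smoothing step are all assumptions of this sketch.

```python
import cv2
import numpy as np

# Pixel positions of four ground reference points and their surveyed
# world coordinates in metres (all values are illustrative assumptions).
px = np.float32([[312, 840], [1604, 828], [1570, 210], [348, 220]])
world = np.float32([[0.0, 0.0], [12.0, 0.0], [12.0, 18.0], [0.0, 18.0]])
H, _ = cv2.findHomography(px, world)

def pixel_to_world(u, v):
    """Map a pixel coordinate onto the ground plane of the world frame."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def resample(track, fps, n=10):
    """Uniformly resample a per-frame track to n trajectory points per second.

    track: list of (frame_idx, u, v) tuples in pixel coordinates.
    """
    t = np.array([f / fps for f, _, _ in track])
    xy = np.array([pixel_to_world(u, v) for _, u, v in track])
    t_new = np.arange(t[0], t[-1], 1.0 / n)
    x = np.interp(t_new, t, xy[:, 0])
    y = np.interp(t_new, t, xy[:, 1])
    return np.stack([x, y], axis=1)      # world-coordinate points, ready for Kalman smoothing
```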
提取得到行人与其他交通参与者轨迹如表2所示:The extracted trajectories of pedestrians and other traffic participants are shown in Table 2:
表2 行人与其他交通参与者轨迹表Table 2 Trajectory table of pedestrians and other traffic participants
得到的轨迹表如表3所示:The obtained trajectory table is shown in Table 3:
表3轨迹表Table 3 Trajectory table
步骤3、计算行人步态参数。行人运动具有周期性,运动加速度根据行走脚步呈周期性变化,利用步骤2中得到的行人轨迹,对时间一阶差分得到行人瞬时速度,对瞬时速度曲线进行时间一阶差分可以得到加速度变化曲线,利用功率谱密度计算行人行走步频,公式如下:Step 3: Calculate the pedestrian gait parameters. Pedestrian movement is periodic, and the movement acceleration changes periodically according to the walking steps. Using the pedestrian trajectory obtained in step 2, the instantaneous speed of the pedestrian is obtained by the first-order time difference. The acceleration change curve can be obtained by taking the first-order time difference of the instantaneous speed curve. The pedestrian walking frequency is calculated using the power spectrum density. The formula is as follows:
$$
\hat{P}(f)=\frac{1}{f_s N}\left|\sum_{n=0}^{N-1}\lVert v_n\rVert\, e^{-\mathrm{j}\,2\pi f n/f_s}\right|^{2}
$$

where \hat{P}(f) is the estimated power spectral density at frequency f; f_s is the sampling frequency; n indexes the discrete time series of the pedestrian's walking process; v_n is the pedestrian's instantaneous velocity, with the double vertical bars denoting the magnitude of the vector, i.e. the instantaneous speed; and N is the length of the signal, i.e. the length of the pedestrian walking time series.
选择具有最大功率密度的频率代表步频,利用行人步速除以步频得到行人行走步幅,每t1秒取出t2秒内的行人步态参数以及对应行人轨迹和交通参与者轨迹用于步骤4的计算,本实施例中,t1为2秒,t2为4秒。The frequency with the maximum power density is selected to represent the cadence, and the pedestrian's walking stride is obtained by dividing the pedestrian's walking speed by the cadence. The pedestrian's gait parameters within t2 seconds and the corresponding pedestrian trajectory and traffic participant trajectory are taken out every t1 seconds for the calculation of step 4. In this embodiment, t1 is 2 seconds and t2 is 4 seconds.
本实施例中使用了功率谱密度方法计算步频,再利用步频步幅步速关系计算步幅,所述功率谱密度方法利用了行人加速度变化的波性质,通过功率波来获取步频较为准确,同时可以过滤一些噪声的影响。In this embodiment, the power spectrum density method is used to calculate the cadence, and then the stride is calculated using the relationship between the cadence, stride length and speed. The power spectrum density method utilizes the wave nature of the pedestrian's acceleration change. It is more accurate to obtain the cadence through the power wave, and at the same time, it can filter out the influence of some noise.
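The cadence estimate can be reproduced with a standard periodogram, for example via SciPy. In this sketch the spectrum is taken over the mean-removed instantaneous speed magnitude, following the formula above; the mean removal and the handling of the zero-frequency bin are implementation choices, not requirements of the method.

```python
import numpy as np
from scipy.signal import periodogram

def gait_parameters(xy, fs):
    """Estimate mean walking speed, cadence and stride length from a smoothed
    world-coordinate trajectory sampled at fs points per second.

    xy : (N, 2) array of positions in metres.
    """
    v = np.diff(xy, axis=0) * fs           # first-order time difference -> velocity (m/s)
    speed = np.linalg.norm(v, axis=1)      # instantaneous speed magnitude
    f, pxx = periodogram(speed - speed.mean(), fs=fs)
    cadence = f[np.argmax(pxx[1:]) + 1]    # dominant frequency, skipping the DC bin (steps/s)
    stride = speed.mean() / cadence if cadence > 0 else np.nan
    return speed.mean(), cadence, stride
```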
得到的行人步态参数表如表4所示。The obtained pedestrian gait parameter table is shown in Table 4.
表4行人步态参数表Table 4 Pedestrian gait parameters
In the table, the columns give the pedestrian's cadence (step frequency), stride length and walking speed, respectively.
步骤4、计算行人与其他交通参与者冲突指标TTC;Step 4: Calculate the conflict index TTC between pedestrians and other traffic participants;
在取出的t2秒轨迹片段内,计算行人与N个距离最近的其他交通参与者之间的冲突指标TTC,将其中最小的冲突指标TTC作为步骤5使用的冲突指标TTC,冲突指标TTC的计算方法如下所示:In the extracted t2-second trajectory segment, the conflict index TTC between the pedestrian and the N closest other traffic participants is calculated, and the smallest conflict index TTC is used as the conflict index TTC used in step 5. The calculation method of the conflict index TTC is as follows:
$$
\mathrm{TTC}_{ij}=\frac{D_{ij}}{\Delta v_{ij}}
$$

where TTC_ij is the TTC (Time To Collision) between pedestrian i and traffic participant j; D_ij is the relative distance between the pedestrian and the other traffic participant, which can be obtained from the real-world coordinates of the trajectory points; and Δv_ij is the projection of their relative velocity onto the line connecting their positions, obtained by taking the first-order time difference of the two trajectories to get the velocities of the pedestrian and the other traffic participant, composing them vectorially, and projecting the result onto the line connecting the pedestrian and the other traffic participant.
所述N≤3,数量越大对算力要求越高,因为画面中行人可能和很多交通参与者都能构成交互关系,通过距离筛选出距离较近的单位进行计算,可以提高计算效率。The N≤3, the larger the number, the higher the computing power requirement, because pedestrians in the picture may have interactive relationships with many traffic participants. By filtering out units with closer distances for calculation, the computing efficiency can be improved.
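A sketch of the TTC computation over a trajectory segment follows. Restricting attention to the N nearest participants and treating TTC as defined only while the two units are approaching each other reflect the screening described above; the assumption of time-aligned trajectories of equal length and the small numerical guards are choices of this sketch.

```python
import numpy as np

def min_ttc(ped_xy, others_xy, fs, n_nearest=3):
    """Minimum time-to-collision between one pedestrian and the n nearest
    traffic participants over a trajectory segment.

    ped_xy    : (T, 2) pedestrian positions in metres.
    others_xy : list of (T, 2) arrays, one per traffic participant,
                assumed time-aligned with ped_xy.
    """
    ped_v = np.diff(ped_xy, axis=0) * fs
    best = np.inf
    # keep only the n participants that come closest to the pedestrian
    gaps = [np.min(np.linalg.norm(o - ped_xy, axis=1)) for o in others_xy]
    nearest = [others_xy[i] for i in np.argsort(gaps)[:n_nearest]]
    for o in nearest:
        o_v = np.diff(o, axis=0) * fs
        rel_p = o[1:] - ped_xy[1:]                        # relative position
        dist = np.linalg.norm(rel_p, axis=1)
        unit = rel_p / np.maximum(dist[:, None], 1e-9)    # unit vector pedestrian -> participant
        closing = -np.sum((o_v - ped_v) * unit, axis=1)   # closing speed along the connecting line
        valid = closing > 1e-6                            # TTC is defined only while approaching
        if np.any(valid):
            best = min(best, np.min(dist[valid] / closing[valid]))
    return best
```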
步骤5、通过步骤3计算得出的行人过街步态参数和步骤4计算得出的行人与其他交通参与者的冲突指标TTC,判别行人行为是否为危险过街行为,包括以下步骤:Step 5: using the pedestrian crossing gait parameters calculated in step 3 and the conflict index TTC between pedestrians and other traffic participants calculated in step 4, determining whether the pedestrian's behavior is a dangerous crossing behavior, including the following steps:
将t2秒内的行人过街步态参数输入预训练好的Transformer时序注意力神经网络分类器,判别行人过街步态参数变化规律是否符合危险行为特征,形成评估指标一R1,另外对行人过街步态参数进行统计分析,判断是否存在步频、步幅或步速大于阈值或设定时间内(如1s)内变化幅度大于阈值的情况,形成评估指标二R2;利用评估指标一R1、评估指标二R2和冲突指标TTC构建判定表,通过判定表判别出是否发生行人过街危险行为,所述判定表如表5所:The pedestrian crossing gait parameters within t2 seconds are input into the pre-trained Transformer temporal attention neural network classifier to determine whether the change pattern of the pedestrian crossing gait parameters conforms to the characteristics of dangerous behavior, forming an evaluation index R1. In addition, the pedestrian crossing gait parameters are statistically analyzed to determine whether there is a situation where the step frequency, stride or speed is greater than the threshold or the change amplitude within a set time (such as 1s) is greater than the threshold, forming an evaluation index R2; the evaluation index R1, the evaluation index R2 and the conflict index TTC are used to construct a judgment table, and the judgment table is used to determine whether a pedestrian crossing dangerous behavior occurs. The judgment table is shown in Table 5:
表5危险行为判别表Table 5 Dangerous behavior identification table
In the table, Thred_R_1 and Thred_R_2 denote the threshold for evaluation indicator one and the threshold for evaluation indicator two, respectively, and the conflict indicator TTC is compared against two thresholds, TTC threshold one and TTC threshold two.
当判定危险行为发生时,截取行为发生前后设定时间段(如10s)的视频留作判据,并输出对应的指标,包括评估指标一R1、评估指标二R2和冲突指标TTC。When a dangerous behavior is determined to have occurred, the video of a set time period (such as 10 seconds) before and after the behavior is captured as a criterion, and the corresponding indicators are output, including evaluation indicator 1 R1, evaluation indicator 2 R2 and conflict indicator TTC.
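Because Table 5 itself is not reproduced in this text, the following sketch only illustrates the kind of rule-based combination such a judgment table encodes; all threshold values and the specific rule structure are assumptions made for illustration.

```python
def is_dangerous(r1, r2, ttc,
                 thred_r1=0.5, thred_r2=0.5, ttc_1=1.5, ttc_2=3.0):
    """Combine the three indicators in the spirit of the judgment table.

    Here the behaviour is flagged when the TTC alone is below the stricter
    threshold, or when a gait-based indicator fires while the TTC is still
    below the looser threshold; the patent's calibrated table may differ.
    """
    if ttc < ttc_1:
        return True                              # imminent conflict
    if ttc < ttc_2 and (r1 > thred_r1 or r2 > thred_r2):
        return True                              # abnormal gait during a potential conflict
    return False
```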
本实施例中,利用Transformer时序注意力神经网络分类器判别行人过街步态参数是否符合危险行为特征,Transformer网络能够自适应从数据中学习规律,鲁棒性和精确度高,同时结合了冲突替代指标和步态客观规律来进行判定,综合性强,鲁棒性高,受到环境影响小。In this embodiment, the Transformer temporal attention neural network classifier is used to determine whether the gait parameters of pedestrians crossing the street meet the characteristics of dangerous behavior. The Transformer network can adaptively learn rules from data, with high robustness and accuracy. At the same time, it combines conflict substitution indicators and objective gait rules to make judgments, with strong comprehensiveness, high robustness, and little impact from the environment.
评估指标一R1、评估指标二R2和冲突指标TTC这三个指标可以根据工程实际进行拆解判定,三个指标都可以用来单独判断是否危险,三者结合判断精确度进一步提高,也可以用一些其他网络来判定,如利用LSTM来代替Transformer,指标上可以利用PET代替TTC。Evaluation index 1 R1, evaluation index 2 R2 and conflict index TTC can be disassembled and judged according to the actual project. All three indicators can be used to judge whether it is dangerous separately. The judgment accuracy can be further improved by combining the three. Some other networks can also be used for judgment, such as using LSTM to replace Transformer, and PET can be used instead of TTC in terms of indicators.
实施例2Example 2
本实施例提供一种基于小目标追踪的行人过街危险行为判别方法,包括以下步骤:This embodiment provides a method for identifying dangerous pedestrian crossing behaviors based on small target tracking, including the following steps:
步骤1、在指定过街区域利用摄像机采集俯视视角下的指定过街区域视频流;Step 1: Using a camera to collect a video stream of the designated crossing area from a bird's-eye view in the designated crossing area;
步骤2、逐帧读取步骤1中采集得到的视频图像,使用预训练后的带有特征金字塔(FPN)的EfficientDet卷积目标检测网络对图像进行逐帧目标检测,提取行人目标与其他交通参与者目标轨迹;Step 2: Read the video images collected in step 1 frame by frame, use the pre-trained EfficientDet convolutional target detection network with feature pyramid (FPN) to perform frame-by-frame target detection on the images, and extract the trajectories of pedestrian targets and other traffic participants;
步骤21)求取目标真实框面积与小目标面积参数的比值,通过所述比值计算目标锚框匹配交并比IoU阈值以及针对小目标物体的损失扩张系数;Step 21) Calculate the ratio of the target real frame area to the small target area parameter, and calculate the target anchor frame matching intersection over union ratio (IoU) threshold and the loss expansion coefficient for the small target object through the ratio;
步骤22)逐帧获取视频图像中的行人和其他各类交通参与者位置后,利用最近邻算法、概率数据关联算法或SORT算法对前后帧的目标进行关联,利用EfficientDet卷积目标检测网络提取视频中行人与其他各类交通参与者的运动轨迹;Step 22) After obtaining the positions of pedestrians and other types of traffic participants in the video image frame by frame, the nearest neighbor algorithm, probabilistic data association algorithm or SORT algorithm are used to associate the targets of the previous and next frames, and the motion trajectories of pedestrians and other types of traffic participants in the video are extracted using the EfficientDet convolutional target detection network;
步骤23)选用一个已知大小和形状的棋盘格标定板,固定摄像机,拍摄若干张包含标定板的图像,使用OpenCV图像处理算法提取出每张图像标定板上的角点坐标,将角点坐标与真实世界坐标对应,建立像素坐标与真实坐标之间的映射关系;使用张正友相机标定算法计算相机的内外参数,得到像素坐标系与真实坐标系间的转换矩阵,Step 23) Select a checkerboard calibration plate of known size and shape, fix the camera, take several images containing the calibration plate, use OpenCV image processing algorithm to extract the coordinates of the corner points on the calibration plate in each image, correspond the corner point coordinates to the real world coordinates, and establish the mapping relationship between pixel coordinates and real coordinates; use Zhang Zhengyou camera calibration algorithm to calculate the internal and external parameters of the camera, and obtain the transformation matrix between the pixel coordinate system and the real coordinate system.
步骤3、利用步骤2中得到的行人轨迹,对时间一阶差分得到行人瞬时速度,对瞬时速度曲线进行时间一阶差分可以得到加速度变化曲线,利用功率谱密度计算行人行走步频;Step 3: Using the pedestrian trajectory obtained in step 2, the instantaneous speed of the pedestrian is obtained by first-order time difference. The acceleration change curve can be obtained by first-order time difference of the instantaneous speed curve, and the walking frequency of the pedestrian is calculated using the power spectrum density;
步骤4、计算行人与其他交通参与者冲突指标TTC;Step 4: Calculate the conflict index TTC between pedestrians and other traffic participants;
步骤5、通过步骤3计算得出的行人过街步态参数和步骤4计算得出的行人与其他交通参与者的冲突指标TTC,判别行人行为是否为危险过街行为。Step 5: Determine whether the pedestrian's behavior is dangerous crossing behavior by using the pedestrian crossing gait parameters calculated in step 3 and the conflict index TTC between pedestrians and other traffic participants calculated in step 4.
实施例3Example 3
本实施例提供一种基于小目标追踪的行人过街危险行为判别方法,包括以下步骤:This embodiment provides a method for identifying dangerous pedestrian crossing behaviors based on small target tracking, including the following steps:
步骤1、在指定过街区域利用摄像机采集俯视视角下的指定过街区域视频流;Step 1: Using a camera to collect a video stream of the designated crossing area from a bird's-eye view in the designated crossing area;
步骤2、逐帧读取步骤1中采集得到的视频图像，利用yolov1、yolov2或fastrcnn方法对图像进行逐帧目标检测，提取行人目标与其他交通参与者目标轨迹；Step 2: Read the video images acquired in step 1 frame by frame, perform frame-by-frame target detection on the images using the YOLOv1, YOLOv2 or Fast R-CNN method, and extract the target trajectories of pedestrians and other traffic participants;
步骤3、利用步骤2中得到的行人轨迹,对时间一阶差分得到行人瞬时速度,对瞬时速度曲线进行时间一阶差分可以得到加速度变化曲线,利用功率谱密度计算行人行走步频;Step 3: Using the pedestrian trajectory obtained in step 2, the instantaneous speed of the pedestrian is obtained by first-order time difference. The acceleration change curve can be obtained by first-order time difference of the instantaneous speed curve, and the walking frequency of the pedestrian is calculated using the power spectrum density;
步骤4、计算行人与其他交通参与者冲突指标TTC;Step 4: Calculate the conflict index TTC between pedestrians and other traffic participants;
步骤5、通过步骤3计算得出的行人过街步态参数和步骤4计算得出的行人与其他交通参与者的冲突指标TTC,判别行人行为是否为危险过街行为。Step 5: Determine whether the pedestrian's behavior is dangerous crossing behavior by using the pedestrian crossing gait parameters calculated in step 3 and the conflict index TTC between pedestrians and other traffic participants calculated in step 4.
实施例4Example 4
一种计算机系统,包括存储器、处理器及存储在存储器上的计算机程序,其特征在于,所述处理器执行所述计算机程序以实现上述方法的步骤。A computer system comprises a memory, a processor and a computer program stored in the memory, wherein the processor executes the computer program to implement the steps of the above method.
实施例5Example 5
一种计算机可读存储介质,其上存储有计算机程序/指令,其特征在于,该计算机程序/指令被处理器执行时实现上述方法的步骤。A computer-readable storage medium having a computer program/instruction stored thereon, characterized in that the computer program/instruction implements the steps of the above method when executed by a processor.
实施例6Example 6
一种计算机程序产品,包括计算机程序/指令,其特征在于,该计算机程序/指令被处理器执行时实现上述方法的步骤。A computer program product comprises a computer program/instruction, wherein the computer program/instruction implements the steps of the above method when executed by a processor.
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art will appreciate that the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The present application is described with reference to the flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each process and/or box in the flowchart and/or block diagram, as well as the combination of the processes and/or boxes in the flowchart and/or block diagram, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to generate a machine, so that the instructions executed by the processor of the computer or other programmable data processing device generate a device for implementing the functions specified in one process or multiple processes in the flowchart and/or one box or multiple boxes in the block diagram.
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
尽管已经示出和描述了本发明的实施例,对于本领域的普通技术人员而言,可以理解在不脱离本发明的原理和精神的情况下可以对这些实施例进行多种变化、修改、替换和变型,本发明的范围由所附权利要求及其等同物限定。Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions and variations may be made to the embodiments without departing from the principles and spirit of the present invention, and that the scope of the present invention is defined by the appended claims and their equivalents.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410567563.2A CN118155293B (en) | 2024-05-09 | 2024-05-09 | A method and system for identifying dangerous pedestrian crossing behaviors based on small target tracking |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410567563.2A CN118155293B (en) | 2024-05-09 | 2024-05-09 | A method and system for identifying dangerous pedestrian crossing behaviors based on small target tracking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118155293A CN118155293A (en) | 2024-06-07 |
CN118155293B true CN118155293B (en) | 2024-07-05 |
Family
ID=91295141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410567563.2A Active CN118155293B (en) | 2024-05-09 | 2024-05-09 | A method and system for identifying dangerous pedestrian crossing behaviors based on small target tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118155293B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118864542A (en) * | 2024-09-26 | 2024-10-29 | 天津工业大学 | 3D multi-target tracking method based on improved Kalman filter based on the number of scene targets |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109334563A (en) * | 2018-08-31 | 2019-02-15 | 江苏大学 | An anti-collision warning method based on pedestrians and cyclists in front of the road |
CN115760921A (en) * | 2022-11-28 | 2023-03-07 | 昆明理工大学 | Pedestrian trajectory prediction method and system based on multi-target tracking |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10522040B2 (en) * | 2017-03-03 | 2019-12-31 | Kennesaw State University Research And Service Foundation, Inc. | Real-time video analytics for traffic conflict detection and quantification |
CN109101859A (en) * | 2017-06-21 | 2018-12-28 | 北京大学深圳研究生院 | A Method for Detecting Pedestrians in Images Using Gaussian Penalty |
CN117994987B (en) * | 2024-04-07 | 2024-06-11 | 东南大学 | Traffic parameter extraction method and related device based on target detection technology |
Also Published As
Publication number | Publication date |
---|---|
CN118155293A (en) | 2024-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110244322B (en) | Multi-source sensor-based environmental perception system and method for pavement construction robot | |
Zhang et al. | A traffic surveillance system for obtaining comprehensive information of the passing vehicles based on instance segmentation | |
CN108053427A (en) | A kind of modified multi-object tracking method, system and device based on KCF and Kalman | |
CN108009473A (en) | Based on goal behavior attribute video structural processing method, system and storage device | |
CN111814621A (en) | A multi-scale vehicle pedestrian detection method and device based on attention mechanism | |
CN107563372A (en) | A kind of license plate locating method based on deep learning SSD frameworks | |
CN108052859A (en) | A kind of anomaly detection method, system and device based on cluster Optical-flow Feature | |
CN112232240A (en) | Road sprinkled object detection and identification method based on optimized intersection-to-parallel ratio function | |
Chao et al. | Multi-lane detection based on deep convolutional neural network | |
CN111767847A (en) | A pedestrian multi-target tracking method integrating target detection and association | |
CN118155293B (en) | A method and system for identifying dangerous pedestrian crossing behaviors based on small target tracking | |
CN111178178B (en) | Multi-scale pedestrian re-identification method, system, medium and terminal combined with region distribution | |
CN112836657A (en) | Pedestrian detection method and system based on lightweight YOLOv3 | |
CN113505638B (en) | Method and device for monitoring traffic flow and computer readable storage medium | |
CN114267082A (en) | Recognition method of bridge side fall behavior based on deep understanding | |
Wu et al. | Vehicle Classification and Counting System Using YOLO Object Detection Technology. | |
CN104318760B (en) | A method and system for intelligent detection of intersection violations based on object-likeness model | |
CN116311166A (en) | Traffic obstacle recognition method and device and electronic equipment | |
Zhang et al. | An efficient deep neural network with color-weighted loss for fire detection | |
Arthi et al. | Object detection of autonomous vehicles under adverse weather conditions | |
CN116612450A (en) | Point cloud scene-oriented differential knowledge distillation 3D target detection method | |
Lashkov et al. | Edge-computing-empowered vehicle tracking and speed estimation against strong image vibrations using surveillance monocular camera | |
CN105118073A (en) | Human body head target identification method based on Xtion camera | |
CN114913442A (en) | A kind of abnormal behavior detection method, device and computer storage medium | |
CN114913470A (en) | Event detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |