WO2016107561A1 - Traffic Incident Detection Method and System (交通事件检测方法以及系统) - Google Patents


Info

Publication number
WO2016107561A1
WO2016107561A1 (PCT/CN2015/099526)
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
velocity
traffic
motion state
state
Prior art date
Application number
PCT/CN2015/099526
Other languages
English (en)
French (fr)
Inventor
苏国锋
赵英
袁宏永
陈涛
黄全义
孙占辉
陈建国
钟少波
Original Assignee
清华大学 (Tsinghua University)
北京辰安科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University) and 北京辰安科技股份有限公司
Priority to SG11201705390RA
Publication of WO2016107561A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled

Definitions

  • the invention relates to a traffic incident detecting method and system.
  • Considering the spatial, directional, and category characteristics of trajectories, a multi-level learning method for motion patterns is proposed.
  • From this method, anomalous-behavior detection methods based on Bayes spatial pattern matching and on origin-destination direction pattern matching are constructed.
  • The state attributes of the moving target are combined with the context-related information in the traffic scene to define concrete concepts such as simple events and complex events, providing a general form of expression for event recognition; on this basis, a basic event recognition method based on a Bayes classifier combined with logical constraints and a complex event recognition method based on a hidden Markov model are constructed.
  • A semantics-based video event detection and analysis method uses a general semantic form of expression to reasonably describe and express traffic events, and then uses pattern recognition to achieve effective automatic recognition of the events.
  • Around four key aspects (feature description and classification of multiple moving objects in video, semantics-based detection and analysis of complex video events, mining of semantic event correlations, and event-level high-level semantic description and understanding), the method proposes a multi-moving-object feature description and classification method with adaptive combinations of invariant moment values, a trajectory multi-label hypergraph model for detecting and analyzing complex events, a temporal association-rule mining algorithm for event semantics, and a case-grammar frame-network structure description for understanding multi-threaded video events.
  • In view of this, the present invention proposes a traffic incident detection technique that can quickly determine a traffic incident and that is simple and easy to use.
  • To this end, the present invention provides a traffic event detection method, including: a motion state quantity acquisition step of acquiring, at each moment, a motion state quantity reflecting the motion change of each feature point, the feature points representing vehicles; an entropy value calculation step of calculating the entropy value of the traffic flow at each moment based on the acquired values of the motion state quantities; and a traffic event detection step of fitting, every predetermined time, the calculated entropy values of the traffic flow, and determining from the fitting result whether a traffic incident has occurred.
  • In the entropy value calculation step, a plurality of state intervals are divided based on the values of the motion state quantity, the ratio of the number of feature points in each state interval to the number of all feature points is taken as the probability of that state interval, and the entropy value is calculated based on these probabilities.
  • The feature point may be the vehicle itself, in which case the motion state quantity is acquired from a vehicle detector in the motion state quantity acquisition step.
  • The feature point may also be a corner point of the vehicle in the video image, in which case the motion state quantity acquisition step acquires, based on the coordinates of the corner points in the video image at different times, the motion state quantity reflecting the motion change of each corner point at each moment.
  • The motion state quantity is any one of the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point, or a combination of several of these.
  • When the motion state quantity is a combination of the magnitude and the direction of the velocity of the feature point, the state intervals are divided in the entropy value calculation step based on the magnitude and the direction of the velocity, and the ratio of the number of feature points in each such state interval to the number of all feature points is taken as the probability of that state interval.
  • The state intervals may be divided equally.
  • Alternatively, the state intervals may be divided unequally.
  • The method may further include: a corner detection step of detecting corner points from the pixel points in the video image; and a corner tracking step of tracking the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicle.
  • The alarm level is calculated from the slope of the straight line linearly fitted to the entropy values, and the start and end times of the linearly fitted sliding window are used as the start and end times of the alarm.
  • Further, the present invention relates to a traffic event detection system, comprising: a motion state quantity acquisition unit that acquires, at each moment, a motion state quantity reflecting the motion change of each feature point, the feature points representing vehicles; an entropy value calculation unit that calculates the entropy value of the traffic flow at each moment based on the acquired values of the motion state quantities; and a traffic event detection unit that fits, every predetermined time, the calculated entropy values of the traffic flow and judges from the fitting result whether or not a traffic event has occurred.
  • The entropy value calculation unit divides a plurality of state intervals based on the values of the motion state quantity, calculates the ratio of the number of feature points in each state interval to the number of all feature points as the probability of that state interval, and calculates the entropy value based on these probabilities.
  • The feature point may be the vehicle itself, in which case the motion state quantity acquisition unit acquires the motion state quantity from a vehicle detector.
  • The feature point may also be a corner point of the vehicle in the video image, in which case the motion state quantity acquisition unit acquires, based on the coordinates of the corner points in the video image at different times, the motion state quantity reflecting the motion change of each corner point at each moment.
  • The motion state quantity may be any one of the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point, or a combination of several of these.
  • When the motion state quantity is a combination of the magnitude and the direction of the velocity of the feature point, the entropy value calculation unit divides the state intervals based on the magnitude and the direction of the velocity, and calculates the ratio of the number of feature points in each such state interval to the number of all feature points as the probability of that state interval.
  • The state intervals may be divided equally.
  • Alternatively, the state intervals may be divided unequally.
  • The system may further include: a corner detection unit that detects corner points from the pixel points in the video image; and a corner tracking unit that tracks the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicle.
  • The traffic event detection unit calculates the alarm level from the slope of the straight line linearly fitted to the entropy values, and uses the start and end times of the linearly fitted sliding window as the start and end times of the alarm.
  • Further, the present invention relates to another traffic event detection system, comprising: a processor; and a memory for storing instructions executable by the processor, the processor being configured to: acquire, at each moment, motion state quantities reflecting the motion changes of the respective feature points, the feature points representing vehicles; calculate the entropy value of the traffic flow at each moment based on the acquired values of the motion state quantities; and fit, every predetermined time, the calculated entropy values of the traffic flow, and judge from the fitting result whether or not a traffic event has occurred.
  • The processor may divide a plurality of state intervals based on the values of the motion state quantity, calculate the ratio of the number of feature points in each state interval to the number of all feature points as the probability of that state interval, and calculate the entropy value based on these probabilities.
  • The feature point may be a corner point of the vehicle in the video image, and the processor may acquire, based on the coordinates of the corner points in the video image at different times, the motion state quantity reflecting the motion change of each corner point at each moment.
  • The motion state quantity may be any one of the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point, or a combination of several of these.
  • The motion state quantity may also be a combination of the magnitude and the direction of the velocity of the feature point, in which case the processor may divide the state intervals based on the magnitude and the direction of the velocity, and calculate the ratio of the number of feature points in each such state interval to the number of all feature points as the probability of that state interval.
  • each state interval can be equally divided.
  • each state interval may not be equally divided.
  • The processor may further detect corner points from the pixel points in the video image, and track the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicle.
  • The processor may calculate the alarm level from the slope of the straight line linearly fitted to the entropy values, and use the start and end times of the linearly fitted sliding window as the start and end times of the alarm.
  • Further, the present invention relates to a traffic event detection program that causes a computer to perform operations of: acquiring, at each moment, a motion state quantity reflecting the motion change of each feature point, the feature points representing vehicles; calculating the entropy value of the traffic flow at each moment based on the acquired values of the motion state quantities; and fitting, every predetermined time, the calculated entropy values of the traffic flow, and determining from the fitting result whether a traffic incident has occurred.
  • An embodiment of the present invention also provides a storage medium storing a traffic event detection program, the program causing a computer to perform operations of: acquiring, at each moment, a motion state quantity reflecting the motion change of each feature point, the feature points representing vehicles; calculating the entropy value of the traffic flow at each moment based on the acquired values of the motion state quantities; and fitting, every predetermined time, the calculated entropy values of the traffic flow, and determining from the fitting result whether a traffic incident has occurred.
  • Using entropy values to detect traffic events reduces the complexity of the system and, in turn, reduces application costs. Additionally, in accordance with the present invention, macroscopic events (i.e., traffic events) can be detected from the microscopic states of the system, summarized by the entropy.
  • When corner points are adopted, since corner points are hardly affected by illumination conditions, traffic events can be detected well in nighttime scenes, in rain and snow, and the like.
  • Fig. 1 is a flow chart showing a traffic event detection method;
  • Fig. 2 is a diagram showing the division of state intervals based on the magnitude and direction of the velocity;
  • Fig. 3 is a graph showing an example of entropy as a function of time;
  • Fig. 4 is a diagram showing an example of fitting a curve to the entropy;
  • Fig. 5 is a graph showing an example of an alarm level and an alarm time;
  • Fig. 6 is a flow chart showing a specific mode of a traffic event detection method;
  • Fig. 7 is a block diagram showing a traffic event detection system;
  • Fig. 8 is a structural diagram showing a specific structure of a traffic incident detection system;
  • Figs. 9A to 9C are diagrams showing the detection of a vehicle congestion situation: Fig. 9A shows detected corner points, Fig. 9B shows the change in entropy value, and Fig. 9C is a graph showing the change of the alarm level with time;
  • Fig. 10 is a block diagram of a traffic incident detecting system.
  • Figure 1 shows a flow chart of a traffic incident detection method. First, the motion state quantity reflecting the motion change of each feature point at each moment is acquired, wherein the feature points represent vehicles. Then, the entropy value of the traffic flow at each moment is calculated based on the acquired values of the motion state quantities. Finally, the calculated entropy values of the traffic flow are fitted every predetermined time, and whether a traffic incident has occurred is determined according to the fitting result.
  • The traffic incident mentioned here refers to an abnormal event that causes a large change in the traffic flow in a short period of time. Traffic incidents have characteristics that differ significantly from normal traffic flow, such as traffic accidents and sudden changes in traffic.
  • The traffic events are, for example, traffic congestion, a vehicle collision, an abnormal vehicle traveling speed, vehicle merging, a vehicle colliding with a railing, a vehicle colliding with a roadbed, and the like.
  • the sudden change in entropy means that the traffic state has changed suddenly, which means that a new traffic incident has occurred.
  • In step S1 of FIG. 1, the motion state quantity of the vehicle is acquired; the motion state quantity may be the magnitude of the velocity, the direction of the velocity, the acceleration, the position coordinates, or a combination of these velocity-related quantities.
  • The motion state quantity of the vehicle can be obtained by an IoT terminal device such as a vehicle detector, or by other means. The manner of acquisition is not particularly limited, as long as the motion state quantity can be obtained.
  • In step S2 of FIG. 1, after the motion state quantities are acquired, the entropy value is calculated based on them.
  • FIG. 2 shows a diagram of dividing a state interval based on the magnitude and direction of the velocity.
  • Let N be the total number of tracked feature points (particles).
  • The velocity direction of the particles, ranging over 0-360 degrees, is divided into M equal intervals.
  • The velocity magnitude of the particles is likewise divided into a number of intervals.
  • The probability P_ij of each state interval is then calculated as the fraction of the N particles whose velocity falls into that interval.
  • The entropy value is the negative of the sum, over all state intervals, of the probability multiplied by the logarithm of the probability, i.e., H = -Σ_ij P_ij log P_ij.
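As a minimal illustrative sketch (not the patent's reference implementation), the state-interval division and entropy computation described above can be written as follows; the interval counts M and K, the speed range, and the sample velocities are assumptions:

```python
import math

def traffic_entropy(speeds, directions, max_speed, M=8, K=8):
    """Entropy of a set of particle velocities over a 2-D state grid.

    directions are in degrees [0, 360); speeds lie in [0, max_speed].
    The direction axis is split into M equal intervals and the speed
    axis into K equal intervals; P_ij is the fraction of particles
    falling into cell (i, j), and H = -sum_ij P_ij * log(P_ij).
    """
    n = len(speeds)
    counts = {}
    for s, d in zip(speeds, directions):
        i = min(int(d % 360.0 / (360.0 / M)), M - 1)  # direction interval
        j = min(int(s / (max_speed / K)), K - 1)      # speed interval
        counts[(i, j)] = counts.get((i, j), 0) + 1
    # Empty cells contribute nothing to the sum.
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# An ordered flow (all particles alike) has zero entropy;
# a disordered flow (two distinct velocity groups) has higher entropy.
ordered = traffic_entropy([10.0] * 50, [90.0] * 50, max_speed=30.0)
disordered = traffic_entropy([5.0, 25.0] * 25, [90.0, 270.0] * 25, max_speed=30.0)
```

This mirrors the text's observation that low entropy corresponds to an ordered traffic state and a rise in entropy to a disordered one.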
  • the amount of motion state is not limited to the speed of the vehicle, as long as it is a state quantity that reflects the motion of the vehicle.
  • The motion state quantity may be only the magnitude of the velocity of the vehicle, only the direction of the velocity of the vehicle, or a combination of the two.
  • the amount of motion state may also be the acceleration of the vehicle, and may also be the position coordinates of the vehicle.
  • When the state intervals are divided based on the motion state quantity, they may be divided equally or unequally.
  • FIG. 3 shows an example of a graph of entropy as a function of time.
  • the abscissa of Fig. 3 is time and the ordinate is entropy.
  • In this example, the entropy value is low from time 1 to time 10, and the system is in an ordered state; from time 11 to 20, the entropy value increases and the system is in a disordered state.
  • In step S3 of Fig. 1, traffic event detection is performed using the entropy values calculated in the above steps.
  • Sudden changes in the traffic flow, i.e., traffic events, are detected.
  • the sudden change in entropy means that the traffic state has changed suddenly, which means that a new traffic incident has occurred.
  • Fig. 4 shows an example of linear fitting of the entropy value.
  • a linear fit is taken as an example for explanation.
  • For the entropy values calculated at all moments, the entropy is linearly fitted over a sliding window at regular intervals (say, every 2 seconds).
  • In the example, a total of 9 linear fits were performed, shown as the line segments numbered 1-9.
  • The alarm level is obtained from the slope of the fitted line, namely the inclination angle of the line divided by 90 degrees.
  • the start and end time of the alarm is the time at which the sliding window starts and ends.
  • the abscissa is the alarm time and the ordinate is the alarm level.
  • the alarm level at each moment is calculated from the slope of the linear fit of the entropy.
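A minimal sketch of the sliding-window linear fit and slope-based alarm level described above; the window length, the step, the sample rate, and the clamping of falling entropy to level zero are assumptions, not the patent's prescribed parameters:

```python
import math

def linear_fit_slope(ts, ys):
    """Least-squares slope of ys as a function of ts."""
    n = len(ts)
    tm = sum(ts) / n
    ym = sum(ys) / n
    num = sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
    den = sum((t - tm) ** 2 for t in ts)
    return num / den

def alarm_levels(times, entropies, window=2.0, step=2.0):
    """Fit the entropy curve over successive windows and convert each
    fitted slope to an alarm level in [0, 1]: the line's inclination
    angle divided by 90 degrees. The start and end of each fitted
    window serve as the start and end time of the alarm."""
    alarms = []
    start = times[0]
    while start + window <= times[-1]:
        idx = [k for k, t in enumerate(times) if start <= t <= start + window]
        if len(idx) >= 2:
            slope = linear_fit_slope([times[k] for k in idx],
                                     [entropies[k] for k in idx])
            # Falling entropy is treated as "no alarm" here (an assumption).
            level = max(0.0, math.degrees(math.atan(slope)) / 90.0)
            alarms.append((start, start + window, level))
        start += step
    return alarms

# An entropy surge after t = 4 s produces a high alarm level in that window.
times = [0.5 * k for k in range(17)]                   # 0..8 s at 2 Hz
ents = [0.2] * 8 + [0.2 + 0.6 * k for k in range(9)]   # surge after t = 4 s
result = alarm_levels(times, ents)
```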
  • A corner point is a point where the brightness of a two-dimensional image changes sharply, or a point of maximum curvature on an edge curve of the image. It is an important local feature that determines the shape of key areas in the image and embodies important image feature information, which is decisive for grasping the contour features of a target. Once the contour features of a target are found, its shape is roughly grasped, so corner points are of great significance in target recognition, image matching, and image reconstruction.
  • Corner points have rotation invariance and are almost unaffected by illumination conditions, which plays an important role in computer vision tasks such as 3D scene reconstruction, motion estimation, target tracking, target recognition, and image registration and matching.
  • the vehicle can be represented by its own plurality of corner points.
  • the corner point of the vehicle is, for example, the point at which the two contour lines intersect.
  • Detection and tracking of the vehicle can thus be simplified to detection and tracking of multiple corner points of the vehicle itself. Based on the characteristics of corner points described above, important local features in the video image can be grasped through the detected corner points in a traffic event, thereby avoiding direct identification of vehicles and persons.
  • corner point detection is performed in step S11, thereby acquiring a corner point indicating the vehicle.
  • There are various methods for corner detection, which can be roughly divided into four categories: corner detection based on edge features, corner detection based on gray-scale images, corner detection based on binary images, and corner detection based on mathematical morphology. The most intuitive interpretation of the principle of corner detection is that the image shows large changes in any two mutually perpendicular directions.
  • The corner points can be detected by the Harris corner detection technique; Harris corners have rotation invariance.
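As an illustrative sketch only (the patent does not prescribe an implementation), the Harris response can be computed from image gradients; the 3x3 window and the constant k = 0.04 are conventional assumptions:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the 3x3-window sum of gradient products at each pixel. Corners give
    large positive R; edges give negative R; flat regions give ~0."""
    gy, gx = np.gradient(img.astype(float))

    def box3(a):  # 3x3 box filter via shifted sums (zero padding)
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# A bright square on a dark background: the response peaks at its corners.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
```

In practice a library routine (e.g., an off-the-shelf Harris or Shi-Tomasi detector) with non-maximum suppression would be used instead of this bare response map.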
  • optical flow tracking is performed for the detected corner points.
  • Optical flow methods are divided into sparse optical flow and dense optical flow.
  • Dense optical flow associates every pixel with a velocity or a displacement.
  • Methods for tracking motion with dense optical flow include the Horn-Schunck method and the block matching method.
  • Here, sparse optical flow is taken as an example. In the calculation, the set of points to be tracked (the corner points) must be specified in advance, and the pyramidal LK (Lucas-Kanade) optical flow algorithm is then used to track their motion.
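The pyramidal LK tracker is normally used through a library; as a hedged sketch of the underlying single-level Lucas-Kanade step only (window size and the synthetic test pattern are assumptions, and a real tracker adds image pyramids and iterative refinement):

```python
import numpy as np

def lk_step(I0, I1, y, x, half=5):
    """One Lucas-Kanade step: least-squares displacement (dy, dx) of the
    pattern in a (2*half+1)^2 window around (y, x) between frames I0, I1.

    Brightness constancy linearized: grad(I0) . d = -(I1 - I0)."""
    gy, gx = np.gradient(I0)
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.stack([gy[sl].ravel(), gx[sl].ravel()], axis=1)
    b = -(I1[sl] - I0[sl]).ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # [dy, dx]

# Synthetic check: a Gaussian blob shifted by 0.5 px in x between frames.
yy, xx = np.mgrid[0:21, 0:21]
I0 = np.exp(-((xx - 10.0) ** 2 + (yy - 10.0) ** 2) / 18.0)
I1 = np.exp(-((xx - 10.5) ** 2 + (yy - 10.0) ** 2) / 18.0)
dy, dx = lk_step(I0, I1, 10, 10)
```

The recovered displacement should be close to (0, 0.5); the single-level step is only accurate for small motions, which is exactly why the pyramidal variant named above is used on real video.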
  • the amount of motion state of each corner point is acquired.
  • Let the times be t and t+1, and let the coordinates of corner point i at those times be p_it(x, y) and p_i(t+1)(x, y), respectively.
  • The motion state quantity of corner point i is calculated from the times and the corner-point coordinates.
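For instance (the frame interval and the coordinates below are made-up values), the speed magnitude and direction of a corner point follow directly from its coordinates at t and t+1:

```python
import math

def corner_velocity(p_t, p_t1, dt):
    """Speed magnitude and direction (degrees in [0, 360)) of a corner
    point from its positions p_t and p_t1, taken dt seconds apart."""
    vx = (p_t1[0] - p_t[0]) / dt
    vy = (p_t1[1] - p_t[1]) / dt
    speed = math.hypot(vx, vy)
    direction = math.degrees(math.atan2(vy, vx)) % 360.0
    return speed, direction

# A 3-4-5 pixel displacement over one frame at 25 fps (dt = 0.04 s).
speed, direction = corner_velocity((100.0, 50.0), (103.0, 54.0), dt=0.04)
```

These per-corner speeds and directions are exactly the inputs to the state-interval division and entropy calculation described earlier.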
  • In step S14 of FIG. 6, after the motion state quantities are acquired, the entropy value is calculated based on them. Taking the velocity field as the motion state quantity as an example, the entropy value of the velocity field formed by the people and vehicles in the video is calculated. Steps S14 and S15 of FIG. 6 are the same as steps S2 and S3 of FIG. 1, except that corner points are used instead of the vehicles of Embodiment 1, and therefore their description is not repeated.
  • Fig. 7 shows a structural diagram of a traffic event detecting system.
  • the traffic event detecting system 1 shown in FIG. 7 includes a motion state amount acquiring section 11, an entropy value calculating section 12, and a traffic event detecting section 13.
  • The motion state quantity acquisition section 11 acquires, at each moment, the motion state quantity reflecting the motion change of each feature point, the feature points representing vehicles; the entropy value calculation section 12 calculates the entropy value of the traffic flow at each moment based on the acquired values of the motion state quantities; and the traffic event detection unit 13 fits, every predetermined time, the calculated entropy values of the traffic flow and determines whether or not a traffic event has occurred based on the fitting result.
  • The entropy value calculation unit 12 divides a plurality of state intervals based on the values of the motion state quantity, calculates the ratio of the number of feature points in each state interval to the number of all feature points as the probability of that state interval, and calculates the entropy value based on these probabilities.
  • The feature point may be the vehicle itself, in which case the motion state quantity acquisition unit 11 acquires the motion state quantity from a vehicle detector.
  • The feature point may also be a corner point of the vehicle in the video image, in which case the motion state quantity acquisition unit 11 acquires, based on the coordinates of the corner points in the video image at different times, the motion state quantity reflecting the motion change of each corner point at each moment.
  • The motion state quantity may be any one of the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point, or a combination of several of these.
  • When the motion state quantity is a combination of the magnitude and the direction of the velocity of the feature point, the entropy value calculation section 12 divides the state intervals based on the magnitude and the direction of the velocity, and calculates, as the probability of each state interval, the two-dimensional frequency, i.e., the ratio of the number of feature points in that state interval to the number of all feature points.
  • each state section may be equally divided, or each state section may not be equally divided.
  • the traffic event detecting unit 25 calculates an alarm level based on the slope of a straight line linearly fitted by the entropy value, and sets the start and end time of the linearly fitted sliding window as the start and end time of the alarm.
  • As shown in FIG. 8, the traffic event detection system of the present invention can calculate the motion state quantity of the vehicle from the corner points of the vehicle in the video image.
  • The traffic event detection system shown in FIG. 8 further includes: a corner detection unit 21 that detects corner points from the pixel points in the video image; and a corner tracking unit 22 that tracks the motion of the corner points using optical flow tracking, wherein the feature points are the corner points of the vehicle.
  • The traffic event detection system 20 of FIG. 8 uses corner detection and tracking to acquire the motion state quantities of the corner points. For example, the velocity of a corner point can be calculated from the coordinates of the corner point and the corresponding times. After the motion state quantities of the corner points are acquired, the same processing as in FIG. 7 is performed; the entropy value calculation unit 24 and the traffic event detection unit 25 have the same functions as the corresponding units of the traffic event detection system 10, so the description is not repeated here.
  • FIG. 9A is a diagram showing the detected corner points, in which a circle in FIG. 9A represents a corner point
  • FIG. 9B is a diagram showing a change in entropy value
  • FIG. 9C is a diagram showing an alarm level change with time.
  • The traffic incident detection method and system can be used in the following applications.
  • Real-time detection of traffic incidents; early warning of traffic incidents, in which future traffic trends are pre-judged from the trend graph of the entropy calculated in real time and upcoming traffic incidents are warned of in advance; and real-time alarms for traffic incidents.
  • When a traffic event occurs, an alarm is raised, and the alarm level is proportional to the sudden change in the entropy value: the larger the change, the higher the alarm level.
  • the present invention provides a traffic event detecting method and system capable of quickly determining a traffic incident and being simple and easy to use.
  • FIG. 10 is a structural diagram of a traffic incident detecting system, which may be a workstation.
  • The traffic incident detection system includes a bus 409, to which the following components are connected: a processor 405, a very-large-scale integrated circuit that is the computing core and control core of the computer, whose function is mainly to interpret computer instructions and to process data in computer software.
  • The processor 405 primarily includes an arithmetic unit, a cache 406, and a bus that carries the data, control, and status signals connecting them.
  • the processor 405 is configured to: acquire a motion state quantity reflecting a motion change of each feature point at each time, the feature point represents a vehicle; calculate a traffic flow at each moment based on the acquired value of the motion state quantity at each moment Entropy value; and fitting the calculated entropy value of the vehicle flow at each moment every predetermined time, and determining whether a traffic event occurs according to the fitting result.
  • the processing unit 405 may divide a plurality of state sections based on the value of the motion state quantity, and calculate a ratio of the number of feature points in each state section to the number of all feature points as the probability of each of the state sections, based on the Probability calculates the entropy value.
  • the feature point is a corner point of the vehicle in the video image, and the processing unit 405 can acquire the motion state quantity reflecting the motion change of each of the corner points at each time based on the coordinates of the corner point in the video image at different times. .
  • The motion state quantity may be any one of the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point, or a combination of several of these.
  • The motion state quantity may also be a combination of the magnitude and the direction of the velocity of the feature point, in which case the processor 405 may divide the state intervals based on the magnitude and the direction of the velocity, and calculate the ratio of the number of feature points in each such state interval to the number of all feature points as the probability of each state interval.
  • each state interval can be equally divided.
  • each state interval may not be equally divided.
  • The processor 405 may further detect corner points from the pixel points in the video image, and track the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicle.
  • the processing unit may calculate an alarm level based on a slope of a straight line linearly fitted by the entropy value, and use a time at which the linearly fitted sliding window starts and ends as the start and end time of the alarm.
  • The traffic event detection system further includes a memory. The memory in a computer can be divided, according to use, into main memory, for example a ROM (Read-Only Memory) 403 and a RAM (Random Access Memory) 404, and auxiliary storage (external storage) 402.
  • the memory has a memory space for program code for performing any of the method steps described above.
  • the storage space for the program code may include various program codes for implementing the various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as a hard disk, a compact disk (CD), a memory card, or a floppy disk. Such computer program products are typically portable or fixed storage units.
  • the storage unit may have a storage section, a storage space, and the like arranged similarly to the memory in the terminal described above.
  • the program code can be compressed, for example, in an appropriate form.
  • The storage unit includes computer readable code, i.e., code that can be read by a processor, which, when run on a server, causes the server to perform the steps of the methods described above.
  • the traffic event detection system includes at least one input device 401 for interaction between the user and the traffic event detection system; the input device 401 may be a keyboard, a mouse, an image capture element, a gravity sensor, a sound receiving element, a touch screen, etc.
  • the traffic event detection system further includes at least one output device 408, which may be a speaker, a buzzer, a flash, an image projection unit, a vibration output element, a screen or a touch screen, etc.; the traffic event detection system may further include a communication interface 407 that performs data communication in a wired or wireless manner.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A traffic event detection method and system. The traffic event detection method includes: a motion state quantity acquisition step (S1) of acquiring, for each moment, motion state quantities reflecting the motion changes of feature points, the feature points representing vehicles; an entropy calculation step (S2) of calculating the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and a traffic event detection step (S3) of fitting, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, and judging from the fitting result whether a traffic event has occurred.

Description

Traffic event detection method and system
Technical Field
The present invention relates to a traffic event detection method and system.
Background Art
Conventionally, traffic event detection has relied on complex processing to recognize traffic events. Taking video-based traffic event recognition as an example, the process combines theories and methods from image processing, pattern recognition and machine learning. For instance, one line of work first studies background initialization, background representation and background updating in depth, achieving adaptive detection of foreground motion regions in complex scenes; it designs a classification algorithm based on multi-class support vector machines that combines the shape and motion characteristics of moving targets, realizing class judgment of mixed traffic moving targets; it proposes a multi-feature matching tracking algorithm based on Kalman filtering and an occlusion handling method based on compensation with historical motion information, guaranteeing accurate estimation of motion state under complex occlusion; considering the spatial, directional and class characteristics of trajectories, it proposes a multi-level learning method for motion patterns, and from this constructs anomalous behavior detection methods based on Bayesian spatial pattern matching and on origin-destination direction pattern matching; it combines the state attributes of moving targets with context-related information in the traffic scene to define concrete concepts such as simple events and complex events, providing a general form of expression for event recognition; and on this basis it constructs a basic event recognition method combining a Bayes classifier with logical constraints, and a complex event recognition method based on hidden Markov models.
In addition, among existing methods for detecting events in video, a semantics-based video event detection and analysis method has been proposed. This method uses a general semantic expression form to reasonably describe and express traffic events, and then uses pattern recognition to achieve effective automatic recognition of events. Around four key technical aspects, namely feature description and classification of multiple moving objects in video, semantics-based detection and analysis of complex video events, mining of semantic event correlations, and event-level high-level semantic description and understanding, it proposes a feature description and classification method for multiple moving objects based on adaptive combined invariant moments, a trajectory multi-label hypergraph model for detecting and analyzing complex events, a temporal association rule mining algorithm for event semantics, and a case-grammar frame network structure for describing and understanding multi-threaded video events.
Summary of the Invention
However, the above apparatuses and methods for detecting traffic events all suffer from various problems.
First, there is the problem of high cost. Moving targets in real-world traffic are extremely diverse, so the recognition and classification process requires selecting massive numbers of features in order to achieve accurate, fine-grained target classification. The traffic behavior of moving targets is complex, varied and highly random, requiring the collection of massive trajectory samples to build a reasonable and effective behavior pattern learning model. A large-scale video event database covering many event types must be constructed, and the definition of event types and the unification of standards are needed to facilitate research on semantic analysis of video content.
In addition, there is the problem of structural complexity. Effective recognition of traffic events is closely tied to the context-related information of those events; on top of target behavior analysis, intelligent recognition of context must also be strengthened. Video files contain rich semantic information, requiring the description and extraction of multi-attribute features in the video and the discovery of the correlations among them. Research on standards for video semantic description is also required: after analyzing video semantics, a comprehensive, accurate yet general standard for describing video semantics is still needed.
Furthermore, there is the problem of limited applicability. With current video detection technology, the stability of systems that recognize and classify moving objects in night scenes or in rain and snow remains low, which greatly restricts their application.
The present invention therefore proposes a traffic event detection technique that can judge traffic events rapidly and is simple and easy to use.
The present invention provides a traffic event detection method, including: a motion state quantity acquisition step of acquiring, for each moment, motion state quantities reflecting the motion changes of feature points, the feature points representing vehicles; an entropy calculation step of calculating the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and a traffic event detection step of fitting, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, and judging from the fitting result whether a traffic event has occurred.
According to the above traffic event detection method, in the entropy calculation step, a plurality of state intervals are divided based on the values of the motion state quantity, the ratio of the number of feature points in each state interval to the total number of feature points is calculated as the probability of that state interval, and the entropy is calculated based on these probabilities.
According to the above traffic event detection method, the feature points are the vehicles themselves, and in the motion state quantity acquisition step the motion state quantities are acquired by vehicle detectors.
According to the above traffic event detection method, the feature points are corner points of the vehicles in a video image, and in the motion state quantity acquisition step, motion state quantities reflecting the motion changes of each corner point at each moment are acquired based on the coordinates of the corner points in the video images at different moments.
According to the above traffic event detection method, the motion state quantity is any one of, or a combination of, the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point.
According to the above traffic event detection method, the motion state quantity is a combination of the magnitude of the velocity of the feature point and the direction of the velocity of the feature point, and in the entropy calculation step, the state intervals are divided based on the magnitude and the direction of the velocity of the feature points, and the ratio of the number of feature points in each state interval so divided to the number of all feature points is calculated as the probability of each state interval.
According to the above traffic event detection method, when dividing the state intervals, the state intervals are divided equally.
According to the above traffic event detection method, when dividing the state intervals, the state intervals are divided unequally.
According to the above traffic event detection method, the method further includes: a corner detection step of detecting corner points from the pixels of the video image; and a corner tracking step of tracking the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicles.
According to the above traffic event detection method, in the traffic event detection step, the alarm level is calculated from the slope of the straight line obtained by linearly fitting the entropy values, and the times at which the fitting sliding window starts and ends are taken as the start and end times of the alarm.
The present invention relates to a traffic event detection system, including: a motion state quantity acquisition unit that acquires, for each moment, motion state quantities reflecting the motion changes of each feature point, the feature points representing vehicles; an entropy calculation unit that calculates the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and a traffic event detection unit that fits, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, and judges from the fitting result whether a traffic event has occurred.
According to the above traffic event detection system, the entropy calculation unit divides a plurality of state intervals based on the values of the motion state quantity, calculates the ratio of the number of feature points in each state interval to the total number of feature points as the probability of that state interval, and calculates the entropy based on these probabilities.
According to the above traffic event detection system, the feature points are the vehicles themselves, and the motion state quantities are acquired by vehicle detectors in the motion state quantity acquisition.
According to the above traffic event detection system, the feature points are corner points of the vehicles in a video image, and the motion state quantity acquisition unit acquires, based on the coordinates of the corner points in the video images at different moments, motion state quantities reflecting the motion changes of each corner point at the current moment.
According to the above traffic event detection system, the motion state quantity is any one of, or a combination of, the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point.
According to the above traffic event detection system, the motion state quantity is a combination of the magnitude of the velocity of the feature point and the direction of the velocity of the feature point, and the entropy calculation unit divides the state intervals based on the magnitude and the direction of the velocity of the feature points, and calculates the ratio of the number of feature points in each state interval so divided to the number of all feature points as the probability of each state interval.
According to the above traffic event detection system, when dividing the state intervals, the state intervals are divided equally.
According to the above traffic event detection system, when dividing the state intervals, the state intervals are divided unequally.
According to the above traffic event detection system, the system further includes: a corner detection unit that detects corner points from the pixels of the video image; and a corner tracking unit that tracks the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicles. According to the above traffic event detection system, the traffic event detection unit calculates the alarm level from the slope of the straight line obtained by linearly fitting the entropy values, and takes the times at which the fitting sliding window starts and ends as the start and end times of the alarm.
The present invention relates to another traffic event detection system, including: a processor; and a memory storing instructions executable by the processor, the processor being configured to: acquire, for each moment, motion state quantities reflecting the motion changes of each feature point, the feature points representing vehicles; calculate the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and fit, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, judging from the fitting result whether a traffic event has occurred.
In this traffic event detection system, the processing unit may divide a plurality of state intervals based on the values of the motion state quantity, calculate the ratio of the number of feature points in each state interval to the total number of feature points as the probability of that state interval, and calculate the entropy based on these probabilities. The feature points are corner points of the vehicles in a video image, and the processing unit may acquire, based on the coordinates of the corner points in the video images at different moments, motion state quantities reflecting the motion changes of each corner point at each moment. The motion state quantity may be any one of, or a combination of, the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point. The motion state quantity may also be a combination of the magnitude of the velocity of the feature point and the direction of the velocity of the feature point, in which case the processing unit may divide the state intervals based on the magnitude and the direction of the velocity of the feature points, and calculate the ratio of the number of feature points in each state interval so divided to the number of all feature points as the probability of each state interval. When dividing the state intervals, the state intervals may be divided equally; they may also be divided unequally. In this traffic event detection system, the processing unit may further detect corner points from the pixels of the video image, and track the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicles. The processing unit may calculate the alarm level from the slope of the straight line obtained by linearly fitting the entropy values, and take the times at which the fitting sliding window starts and ends as the start and end times of the alarm.
The present invention relates to a traffic event detection program that causes a computer to: acquire, for each moment, motion state quantities reflecting the motion changes of each feature point, the feature points representing vehicles; calculate the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and fit, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, judging from the fitting result whether a traffic event has occurred.
An embodiment of the present invention provides a storage medium storing a traffic event detection program, the program causing a computer to: acquire, for each moment, motion state quantities reflecting the motion changes of each feature point, the feature points representing vehicles; calculate the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and fit, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, judging from the fitting result whether a traffic event has occurred.
According to the present invention, there is no need to select massive numbers of video features, to achieve precise target classification and recognition, to collect massive trajectory samples, to construct a complex video event database, to intelligently recognize the context information of traffic events, or to build a rich semantic library. Using entropy to detect traffic events reduces the complexity of the system and in turn reduces the cost of application. Moreover, according to the present invention, macroscopic events (i.e., traffic events) can be detected through the microscopic state of the system (i.e., its entropy).
In addition, in video-based traffic event detection, because corner points are used and corner points are almost unaffected by lighting conditions, traffic events can be detected well even in night scenes and in weather such as rain and snow.
Brief Description of the Drawings
Fig. 1 is a flowchart of the traffic event detection method;
Fig. 2 shows the division of state intervals based on the magnitude and direction of the velocity;
Fig. 3 is an example graph of entropy varying over time;
Fig. 4 is an example of fitting the entropy curve;
Fig. 5 is an example graph of alarm level versus alarm time;
Fig. 6 is a flowchart of a specific embodiment of the traffic event detection method;
Fig. 7 is a structural diagram of the traffic event detection system;
Fig. 8 is a structural diagram of a specific structure of the traffic event detection system;
Figs. 9A to 9C illustrate the detection of traffic congestion: Fig. 9A shows the detected corner points, Fig. 9B shows the variation of the entropy value, and Fig. 9C shows the alarm level varying over time;
Fig. 10 is a structural diagram of a traffic event detection system.
Detailed Description
The present invention is described in detail below with reference to the drawings.
Fig. 1 shows a flowchart of the traffic event detection method. First, motion state quantities reflecting the motion changes of the feature points at each moment are acquired, the feature points representing vehicles. Next, the entropy of the traffic flow at each moment is calculated based on the acquired values of the motion state quantities at each moment. Finally, the calculated entropy values of the traffic flow at each moment are fitted at predetermined time intervals, and whether a traffic event has occurred is judged from the fitting result. A traffic event here means an abnormal event that causes a large change in traffic within a short time. Traffic events have features that clearly distinguish them from normal traffic flow, for example traffic accidents and sudden changes in traffic flow. Concretely, traffic events include traffic congestion, vehicle collisions, collisions with pedestrians, abnormal driving speeds, lane merging, collisions with guardrails, collisions with the roadbed, and so on. A sudden change in entropy means that the traffic state has changed abruptly, and thus that a new traffic event has occurred.
First, as shown in step S1 of Fig. 1, the motion state quantities of the vehicles are acquired. The motion state quantity may be a velocity-related quantity such as the magnitude of the velocity, the direction of the velocity, the acceleration, the position coordinates, or a combination of these. The motion state quantities may be acquired through Internet-of-Things terminal devices such as vehicle detectors, or through other channels; any means capable of acquiring the motion state quantities may be used, without particular limitation.
As shown in step S2 of Fig. 1, after the motion state quantities are acquired, the entropy is calculated based on them.
Taking the case where the motion state quantity is a velocity field as an example, the entropy of the velocity field jointly formed by the people and vehicles in the video is calculated. Fig. 2 shows the division of state intervals based on the magnitude and direction of the velocity. As shown in Fig. 2, assume the total number of tracked vehicles is N (i.e., the total number of particles is N); the entropy at the current moment is calculated from the velocity direction and magnitude of each particle at that moment. The calculation proceeds as follows:
Divide the particle velocity directions into intervals: from 0 to 360 degrees, divided equally into M parts. Divide the particle velocity magnitudes into intervals, equally into N parts.
Calculate the total number of state intervals, C(M, N) = M*N.
Calculate the two-dimensional frequency H(i, j) of the velocities, i.e., the number of particles falling into state interval Q_{i*j}, obtaining the number of particles falling into each equal state interval, where i is an integer with 1 ≤ i ≤ M and j is an integer with 1 ≤ j ≤ N.
Plot the two-dimensional histogram H(i, j); as shown in Fig. 2, the horizontal axis is the velocity magnitude and the vertical axis is the velocity direction.
As shown in Eq. (1), calculate the probability Pij of each state interval:

Pij = H(i, j) / N    Eq. (1)

As shown in Eq. (2), calculate the entropy S at each moment:

S = -Σ(i=1..M) Σ(j=1..N) Pij · log(Pij)    Eq. (2)

That is, the entropy is the sum, over all state intervals, of the probability of each interval multiplied by the logarithm of that probability, taken with a negative sign.
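The binning and entropy computation above can be sketched as follows. This is our own minimal Python illustration, not code from the patent: the function name, the default bin counts, and the speed cap `v_max` are assumptions, and two distinct parameters (`m_dir`, `k_mag`) are used for the bin counts to avoid reusing N for both the particle count and the magnitude bins.

```python
import math

def velocity_entropy(velocities, m_dir=8, k_mag=5, v_max=40.0):
    """Entropy of a velocity field.

    velocities: list of (vx, vy) tuples, one per tracked feature point.
    The direction axis (0-360 degrees) is split into m_dir equal bins and
    the magnitude axis (0..v_max) into k_mag equal bins, giving
    m_dir * k_mag state intervals in total.
    """
    n = len(velocities)
    if n == 0:
        return 0.0
    # Two-dimensional frequency H(i, j): count of points per state interval.
    hist = [[0] * k_mag for _ in range(m_dir)]
    for vx, vy in velocities:
        ang = math.degrees(math.atan2(vy, vx)) % 360.0
        mag = min(math.hypot(vx, vy), v_max - 1e-9)  # clamp into last bin
        i = min(int(ang / (360.0 / m_dir)), m_dir - 1)
        j = int(mag / (v_max / k_mag))
        hist[i][j] += 1
    # S = -sum P_ij * log(P_ij), with P_ij = H(i, j) / n (Eqs. 1 and 2).
    s = 0.0
    for row in hist:
        for h in row:
            if h > 0:
                p = h / n
                s -= p * math.log(p)
    return s
```

With all velocities identical the flow is ordered and the entropy is 0; velocities spread across many bins give a positive entropy, matching the ordered/disordered reading of Fig. 3.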
The motion state quantity here is not limited to vehicle velocity; any state quantity that reflects the motion of the vehicles may be used. For example, the motion state quantity may be only the magnitude of the vehicle velocity, only its direction, or their combination. The motion state quantity may also be the acceleration of the vehicle, or its position coordinates.
In addition, when dividing state intervals based on the motion state quantity, the state intervals may be divided equally or unequally.
By repeating the above steps, the entropy at every moment can be calculated and a curve of entropy over time can be plotted. Fig. 3 shows an example of such a curve, with time on the horizontal axis and entropy on the vertical axis. As shown in Fig. 3, from moments 1 to 10 the entropy is low and the system is in an ordered state; from moments 11 to 20 the entropy rises and the system is in a disordered state.
As shown in step S3 of Fig. 1, traffic event detection is performed using the entropy values calculated in the above steps. Traffic events and sudden changes in traffic flow are detected from changes in the entropy. A sudden change in entropy means that the traffic state has changed abruptly, and thus that a new traffic event has occurred.
The entropy values at the individual moments are fitted. Fig. 4 shows an example of linearly fitting the entropy values; linear fitting is used here as an example. The calculated entropy values at all moments are fitted linearly at fixed time intervals (say, every 2 seconds). As shown in Fig. 4, a total of nine linear fits, giving the line segments numbered 1 to 9, are performed.
From the slope of each fitted line segment shown above, the alarm level is calculated: the alarm level is the slope of the fitted line divided by 90 degrees. The start and end times of the alarm are the times at which the sliding window starts and ends. As shown in Fig. 5, the horizontal axis is the alarm time and the vertical axis is the alarm level. From the slope of the linear fit of the entropy, the alarm level at each moment is calculated.
In the above process of deriving the alarm level from the entropy, the method is not limited to curve fitting of the entropy; any other form of fitting may be used as long as an alarm level can be obtained.
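The sliding-window fitting and the alarm-level rule just described can be sketched as follows. This is our own minimal Python illustration; the helper names and the 3-sample window are assumptions, and we read "slope divided by 90 degrees" as the angle of the fitted line in degrees divided by 90, so the level approaches 1 for a near-vertical entropy jump.

```python
import math

def fit_slope(ts, ys):
    """Least-squares slope of the entropy values over time (the linear fit)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    den = sum((t - mt) ** 2 for t in ts)
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    return num / den if den else 0.0

def alarm_levels(times, entropies, window=3):
    """Fit the entropy curve over consecutive sliding windows.

    Alarm level: angle of the fitted line (in degrees) divided by 90.
    Alarm start/end: the times at which the window starts and ends.
    """
    alarms = []
    for k in range(0, len(times) - window + 1, window):
        ts, ys = times[k:k + window], entropies[k:k + window]
        level = math.degrees(math.atan(fit_slope(ts, ys))) / 90.0
        alarms.append({'start': ts[0], 'end': ts[-1], 'level': level})
    return alarms
```

A flat entropy curve yields level 0 (no alarm), while a rising segment yields a positive level proportional to how steeply the entropy jumps.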
The embodiments of the present invention have been described above; the invention can also be applied to traffic event detection in video images, as follows.
The case of detecting traffic events from video images is taken as an example. The focus here is on using corner points to acquire the motion state quantities and then performing the entropy calculation on the video images.
In an image, a corner point is a point where the brightness of the two-dimensional image changes sharply, or a point of maximal curvature on an image edge curve. It is an important local feature: it determines the shape of key regions in the image and embodies important feature information. Corner points play a decisive role in grasping the contour features of a target, and once the contour features are found, the shape of the target is roughly known; corner points are therefore of great significance in target recognition, image matching and image reconstruction.
In addition, because corner points are rotation-invariant, they are almost unaffected by lighting conditions and play a very important role in computer vision fields such as 3D scene reconstruction, motion estimation, target tracking, target recognition, and image registration and matching.
In a video image, a vehicle can be represented by several of its own corner points, for example the points where two contour lines intersect. Detecting and tracking a vehicle can thus be simplified to detecting and tracking several of its corner points. Based on the above properties of corner points, the detected corner points capture the important local features in the video image during a traffic event, avoiding direct recognition of vehicles and people. Representing vehicles by corner points preserves the important feature information while effectively reducing the data volume, greatly reducing the computation required for image processing. Therefore, as shown in Fig. 6, for a video image, corner detection is first performed in step S11 to obtain the corner points representing the vehicles.
Corner detection methods are diverse but fall roughly into four classes: corner detection based on edge features, on grayscale images, on binary images, and on mathematical morphology. The most intuitive statement of the principle of corner detection is: a corner is a point with large variation in any two mutually perpendicular directions.
Harris corner detection may be used to detect the corner points; Harris corners are rotation-invariant.
As shown in step S12 of Fig. 6, optical flow tracking is performed on the detected corner points.
Optical flow is divided into sparse optical flow and dense optical flow. In dense optical flow, each pixel is associated with a velocity, or one may say with a displacement; methods that track motion using dense optical flow include the Horn-Schunck method and block matching. The present invention is described taking sparse optical flow as an example: a set of points (the corner points) is specified before tracking, and the pyramidal LK (Lucas-Kanade) optical flow algorithm is then used to track the motion.
As shown in step S13 of Fig. 6, the motion state quantity of each corner point is acquired. In two consecutive video frames, at times t and t+1, let the coordinates of corner point i be p_it(x, y) and p_i(t+1)(x, y). The motion state quantity of corner point i is calculated from the times and the corner coordinates; for example, the velocity of the corner point is v_i = p_i(t+1)(x, y) - p_it(x, y), where the x component is v_ix = p_i(t+1)(x) - p_it(x) and the y component is v_iy = p_i(t+1)(y) - p_it(y).
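The per-corner velocity of step S13 follows directly from the coordinates of the same corner in two frames. A trivial sketch of our own follows; the optional dt parameter for a physical frame interval is our addition, the description works in units of one frame interval, i.e. dt = 1.

```python
def corner_velocities(prev_pts, next_pts, dt=1.0):
    """Velocity of each tracked corner between frames t and t+1.

    prev_pts, next_pts: lists of (x, y) coordinates for the same corners
    at times t and t+1 (e.g. as matched by pyramidal LK tracking).
    dt: time between the frames; with dt = 1 this is the plain coordinate
    difference v_i = p_i(t+1) - p_it used in the description.
    """
    return [((x1 - x0) / dt, (y1 - y0) / dt)
            for (x0, y0), (x1, y1) in zip(prev_pts, next_pts)]
```

The resulting (v_ix, v_iy) pairs are exactly the velocity samples fed into the state-interval binning and entropy computation of step S14.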
As shown in step S14 of Fig. 6, after the motion state quantities are acquired, the entropy is calculated based on them. Taking the case where the motion state quantity is a velocity field as an example, the entropy of the velocity field jointly formed by the people and vehicles in the video is calculated. Steps S14 and S15 of Fig. 6 are the same as steps S2 and S3 of Fig. 1, except that corner points are used instead of the vehicles of Embodiment 1, so their description is not repeated.
Fig. 7 shows a structural diagram of the traffic event detection system. The traffic event detection system 1 shown in Fig. 7 includes a motion state quantity acquisition unit 11, an entropy calculation unit 12, and a traffic event detection unit 13.
The motion state quantity acquisition unit 11 acquires, for each moment, motion state quantities reflecting the motion changes of each feature point, the feature points representing vehicles; the entropy calculation unit 12 calculates the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and the traffic event detection unit 13 fits, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, and judges from the fitting result whether a traffic event has occurred.
The entropy calculation unit 12 divides a plurality of state intervals based on the values of the motion state quantity, calculates the ratio of the number of feature points in each state interval to the total number of feature points as the probability of that state interval, and calculates the entropy based on these probabilities.
The feature points may be the vehicles themselves, in which case the motion state quantity acquisition unit 11 acquires the motion state quantities by vehicle detectors. The feature points may also be corner points of the vehicles in a video image, in which case the motion state quantity acquisition unit 11 acquires, based on the coordinates of the corner points in the video images at different moments, motion state quantities reflecting the motion changes of each corner point at each moment.
The motion state quantity may be any one of, or a combination of, the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point.
When the motion state quantity is a combination of the magnitude of the velocity of the feature point and the direction of the velocity of the feature point, the entropy calculation unit 12 divides the state intervals based on the magnitude and the direction of the velocity of the feature points, and calculates, over all feature points, the two-dimensional frequency, i.e., the number of feature points in each state interval so divided.
In addition, when dividing the state intervals, the state intervals may be divided equally or unequally.
The traffic event detection unit 25 calculates the alarm level from the slope of the straight line obtained by linearly fitting the entropy values, and takes the times at which the fitting sliding window starts and ends as the start and end times of the alarm.
Concretely, as shown in Fig. 8, the above traffic event detection system of the present invention may calculate the motion state quantities of the vehicles from the corner points of the vehicles in video images.
The traffic event detection system shown in Fig. 8 further includes: a corner detection unit 21, which detects corner points from the pixels of the video image; and a corner tracking unit 22, which tracks the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicles. The traffic event detection system 20 of Fig. 8 uses corner detection and tracking to acquire the motion state quantities of the corner points; for example, the velocity of a corner point can be calculated from its coordinates and the corresponding times. After the motion state quantities of the corner points are acquired, the same processing as in Fig. 7 is performed; the entropy calculation unit 24 and the traffic event detection unit 25 have the same functions as the entropy calculation unit and traffic event detection unit of the traffic event detection system 10, and their description is not repeated here.
Application Example 1
Detection of traffic congestion is described with reference to Figs. 9A to 9C. Fig. 9A shows the detected corner points, marked by circles; Fig. 9B shows the variation of the entropy value; and Fig. 9C shows the alarm level varying over time.
The traffic event detection method and system according to the present invention enable the following applications: real-time detection of traffic events, including events for which early warning is possible; real-time early warning of traffic events, predicting future traffic trends from the trend chart of the entropy computed in real time and warning of traffic events about to occur; and real-time alarming of traffic events, raising alarms based on sudden changes in the entropy computed in real time, with the alarm level proportional to the magnitude of the entropy change: the larger the change, the higher the alarm level.
In view of the above, the present invention provides a traffic event detection method and system that can judge traffic events rapidly and are simple and easy to use.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, in firmware into which a program is burned, or in combinations thereof. Those skilled in the art will understand that Fig. 10 is a structural diagram of a traffic event detection system, which may be a workstation. For example, the traffic event detection system includes a bus 409, to which the components are connected as follows. The system includes a processor 405, a very large-scale integrated circuit that forms the computational and control core of a computer; its function is mainly to interpret computer instructions and process the data in computer software. The processor 405 mainly comprises an arithmetic unit, a cache 406, and the data, control and status buses that connect them.
The processor 405 is configured to: acquire, for each moment, motion state quantities reflecting the motion changes of each feature point, the feature points representing vehicles; calculate the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and fit, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, judging from the fitting result whether a traffic event has occurred.
The processing unit 405 may divide a plurality of state intervals based on the values of the motion state quantity, calculate the ratio of the number of feature points in each state interval to the total number of feature points as the probability of that state interval, and calculate the entropy based on these probabilities. The feature points are corner points of the vehicles in a video image, and the processing unit 405 may acquire, based on the coordinates of the corner points in the video images at different moments, motion state quantities reflecting the motion changes of each corner point at each moment. The motion state quantity may be any one of, or a combination of, the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point. The motion state quantity may also be a combination of the magnitude and the direction of the velocity of the feature point, in which case the processing unit 405 may divide the state intervals based on the magnitude and the direction of the velocity of the feature points, and calculate the ratio of the number of feature points in each state interval so divided to the number of all feature points as the probability of each state interval. When dividing the state intervals, the state intervals may be divided equally; they may also be divided unequally. In this traffic event detection system, the processing unit 405 may further detect corner points from the pixels of the video image, and track the motion of the corner points by optical flow tracking, wherein the feature points are the corner points of the vehicles. The processing unit may calculate the alarm level from the slope of the straight line obtained by linearly fitting the entropy values, and take the times at which the fitting sliding window starts and ends as the start and end times of the alarm.
The traffic event detection system further includes memory. The memory in a computer can be divided by use into main memory, for example a ROM (Read-Only Memory) 403 and a RAM (Random Access Memory) 404, and auxiliary storage (external storage) 402. The memory provides storage space for program code for performing any of the method steps described above. For example, the storage space for program code may include individual program codes for implementing the various steps of the above methods. The program code can be read from, or written into, one or more computer program products; these computer program products include program code carriers such as a hard disk, a compact disc (CD), a memory card or a floppy disk, and are usually portable or fixed storage units. The storage unit may have storage segments, storage space and the like arranged similarly to the memory of the terminal described above. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code, i.e., code that can be read by a processor, which, when run on a server, causes the server to perform the steps of the methods described above.
Further, the traffic event detection system includes at least one input device 401 for interaction between the user and the system; the input device 401 may be a keyboard, a mouse, an image capture element, a gravity sensor, a sound receiving element, a touch screen, or the like. The traffic event detection system also includes at least one output device 408, which may be a speaker, a buzzer, a flash, an image projection unit, a vibration output element, a screen or a touch screen, or the like. The traffic event detection system may further include a communication interface 407 for data communication in a wired or wireless manner.
Numerous specific details are set forth in the description provided here. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.
Furthermore, it should be noted that the language used in this specification has been chosen principally for readability and instructional purposes, not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is illustrative rather than restrictive of its scope, which is defined by the appended claims.
The above embodiments merely illustrate the technical concept and features of the present invention; their purpose is to enable those familiar with the art to understand and implement the invention, and they are not to be taken as limiting its scope of protection. All equivalent transformations or modifications made according to the spirit of the present invention shall fall within its scope of protection.
Although the embodiments of the present invention have been described with reference to the drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the invention, and all such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

  1. A traffic event detection method, comprising:
    a motion state quantity acquisition step of acquiring, for each moment, motion state quantities reflecting the motion changes of feature points, the feature points representing vehicles;
    an entropy calculation step of calculating the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and
    a traffic event detection step of fitting, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, and judging from the fitting result whether a traffic event has occurred.
  2. The traffic event detection method according to claim 1, wherein
    in the entropy calculation step, a plurality of state intervals are divided based on the values of the motion state quantity, the ratio of the number of feature points in each state interval to the total number of feature points is calculated as the probability of that state interval, and the entropy is calculated based on the probabilities.
  3. The traffic event detection method according to claim 1 or 2, wherein
    the feature points are corner points of the vehicles in a video image, and
    in the motion state quantity acquisition step, motion state quantities reflecting the motion changes of each corner point at each moment are acquired based on the coordinates of the corner points in the video images at different moments.
  4. The traffic event detection method according to any one of claims 1 to 3, wherein
    the motion state quantity is any one of, or a combination of, the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point.
  5. The traffic event detection method according to any one of claims 1 to 3, wherein
    the motion state quantity is a combination of the magnitude of the velocity of the feature point and the direction of the velocity of the feature point, and
    in the entropy calculation step, the state intervals are divided based on the magnitude and the direction of the velocity of the feature points, and the ratio of the number of feature points in each state interval so divided to the number of all feature points is calculated as the probability of each state interval.
  6. A traffic event detection system, comprising:
    a motion state quantity acquisition unit that acquires, for each moment, motion state quantities reflecting the motion changes of each feature point, the feature points representing vehicles;
    an entropy calculation unit that calculates the entropy of the traffic flow at each moment based on the acquired values of the motion state quantities at each moment; and
    a traffic event detection unit that fits, at predetermined time intervals, the calculated entropy values of the traffic flow at each moment, and judges from the fitting result whether a traffic event has occurred.
  7. The traffic event detection system according to claim 6, wherein
    the entropy calculation unit divides a plurality of state intervals based on the values of the motion state quantity, calculates the ratio of the number of feature points in each state interval to the total number of feature points as the probability of that state interval, and calculates the entropy based on the probabilities.
  8. The traffic event detection system according to claim 6 or 7, wherein
    the feature points are corner points of the vehicles in a video image, and
    the motion state quantity acquisition unit acquires, based on the coordinates of the corner points in the video images at different moments, motion state quantities reflecting the motion changes of each corner point at each moment.
  9. The traffic event detection system according to any one of claims 6 to 8, wherein
    the motion state quantity is any one of, or a combination of, the magnitude of the velocity of the feature point, the direction of the velocity of the feature point, the acceleration of the feature point, and the position coordinates of the feature point.
  10. The traffic event detection system according to any one of claims 6 to 9, wherein
    the motion state quantity is a combination of the magnitude of the velocity of the feature point and the direction of the velocity of the feature point, and the entropy calculation unit divides the state intervals based on the magnitude and the direction of the velocity of the feature points, and calculates the ratio of the number of feature points in each state interval so divided to the number of all feature points as the probability of each state interval.
PCT/CN2015/099526 2014-12-30 2015-12-29 Traffic event detection method and system WO2016107561A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
SG11201705390RA SG11201705390RA (en) 2014-12-30 2015-12-29 Traffic event detection method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410840963.2A CN105809954B (zh) 2014-12-30 2014-12-30 交通事件检测方法以及系统
CN201410840963.2 2014-12-30

Publications (1)

Publication Number Publication Date
WO2016107561A1 true WO2016107561A1 (zh) 2016-07-07

Family

ID=56284291

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099526 WO2016107561A1 (zh) 2014-12-30 2015-12-29 交通事件检测方法以及系统

Country Status (3)

Country Link
CN (1) CN105809954B (zh)
SG (1) SG11201705390RA (zh)
WO (1) WO2016107561A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464734A (zh) * 2020-11-04 2021-03-09 昆明理工大学 Vision-based automatic recognition method for the walking motion features of quadrupeds
CN113920728A (zh) * 2021-10-11 2022-01-11 南京微达电子科技有限公司 Detection and early-warning method and system for obstacles dropped on expressways

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256380A (zh) * 2016-12-28 2018-07-06 南宁市浩发科技有限公司 Automatic road traffic anomaly detection method
CN108345894B (zh) * 2017-01-22 2019-10-11 北京同方软件有限公司 Traffic event detection method based on deep learning and an entropy model
CN109145732B (zh) * 2018-07-17 2022-02-15 东南大学 Black-smoke vehicle detection method based on Gabor projection
CN109887276B (zh) * 2019-01-30 2020-11-03 北京同方软件有限公司 Night-time traffic congestion detection method based on the fusion of foreground extraction and deep learning
CN113255405A (zh) * 2020-02-12 2021-08-13 广州汽车集团股份有限公司 Parking space line recognition method and system, parking space line recognition device, and storage medium
CN113286194A (zh) * 2020-02-20 2021-08-20 北京三星通信技术研究有限公司 Video processing method and apparatus, electronic device, and readable storage medium
CN112053563B (zh) * 2020-09-16 2023-01-20 阿波罗智联(北京)科技有限公司 Event detection method and device usable on edge computing platforms and cloud control platforms

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444442A (en) * 1992-11-05 1995-08-22 Matsushita Electric Industrial Co., Ltd. Method for predicting traffic space mean speed and traffic flow rate, and method and apparatus for controlling isolated traffic light signaling system through predicted traffic flow rate
CN101329815A (zh) * 2008-07-07 2008-12-24 山东省计算中心 Novel four-phase traffic flow detection system and method for road intersections
CN101639983A (zh) * 2009-08-21 2010-02-03 任雪梅 Multi-lane traffic flow detection method based on image information entropy
CN102436740A (zh) * 2011-09-29 2012-05-02 东南大学 Automatic detection method for expressway traffic events

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4600929B2 (ja) * 2005-07-20 2010-12-22 パナソニック株式会社 Stopped and low-speed vehicle detection device
TWI452540B (zh) * 2010-12-09 2014-09-11 Ind Tech Res Inst Image-based traffic parameter detection system, method and computer program product
CN103971521B (zh) * 2014-05-19 2016-06-29 清华大学 Real-time detection method and device for abnormal road traffic events

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444442A (en) * 1992-11-05 1995-08-22 Matsushita Electric Industrial Co., Ltd. Method for predicting traffic space mean speed and traffic flow rate, and method and apparatus for controlling isolated traffic light signaling system through predicted traffic flow rate
CN101329815A (zh) * 2008-07-07 2008-12-24 山东省计算中心 Novel four-phase traffic flow detection system and method for road intersections
CN101639983A (zh) * 2009-08-21 2010-02-03 任雪梅 Multi-lane traffic flow detection method based on image information entropy
CN102436740A (zh) * 2011-09-29 2012-05-02 东南大学 Automatic detection method for expressway traffic events

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464734A (zh) * 2020-11-04 2021-03-09 昆明理工大学 Vision-based automatic recognition method for the walking motion features of quadrupeds
CN112464734B (zh) * 2020-11-04 2023-09-15 昆明理工大学 Vision-based automatic recognition method for the walking motion features of quadrupeds
CN113920728A (zh) * 2021-10-11 2022-01-11 南京微达电子科技有限公司 Detection and early-warning method and system for obstacles dropped on expressways
CN113920728B (zh) * 2021-10-11 2022-08-12 南京微达电子科技有限公司 Detection and early-warning method and system for obstacles dropped on expressways

Also Published As

Publication number Publication date
SG11201705390RA (en) 2017-08-30
CN105809954A (zh) 2016-07-27
CN105809954B (zh) 2018-03-16

Similar Documents

Publication Publication Date Title
WO2016107561A1 (zh) Traffic event detection method and system
Atev et al. A vision-based approach to collision prediction at traffic intersections
Wang et al. Spatio-temporal texture modelling for real-time crowd anomaly detection
US20090319560A1 (en) System and method for multi-agent event detection and recognition
Li et al. Robust event-based object tracking combining correlation filter and CNN representation
Shreyas et al. Implementation of an anomalous human activity recognition system
Hinz et al. Online multi-object tracking-by-clustering for intelligent transportation system with neuromorphic vision sensor
Jiang et al. A real-time fall detection system based on HMM and RVM
Vicente et al. Embedded vision modules for tracking and counting people
Henrio et al. Anomaly detection in videos recorded by drones in a surveillance context
Weimer et al. Gpu architecture for stationary multisensor pedestrian detection at smart intersections
Singh et al. Obstacle detection techniques in outdoor environment: process, study and analysis
Ren et al. A new multi-scale pedestrian detection algorithm in traffic environment
Su et al. A robust all-weather abandoned objects detection algorithm based on dual background and gradient operator
Tian et al. The cooperative vehicle infrastructure system based on machine vision
Vikruthi et al. A Novel Framework for Vehicle Detection and Classification Using Enhanced YOLO-v7 and GBM to Prioritize Emergency Vehicle
Moseva et al. Algorithm for Predicting Pedestrian Behavior on Public Roads
Nagulapati et al. Pedestrian Detection and Tracking Through Kalman Filtering
Blair et al. Event-driven dynamic platform selection for power-aware real-time anomaly detection in video
Kim et al. Development of a real-time automatic passenger counting system using head detection based on deep learning
Xu et al. Crowd density estimation based on improved Harris & OPTICS Algorithm
Khan et al. Multiple moving vehicle speed estimation using Blob analysis
Kulkarni et al. Managing crowd density and social distancing
Horng et al. Building an Adaptive Machine Learning Object-Positioning System in a Monocular Vision Environment
Shirazi et al. Vision-based vehicle counting with high accuracy for highways with perspective view

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15875237

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 11201705390R

Country of ref document: SG

122 Ep: pct application non-entry in european phase

Ref document number: 15875237

Country of ref document: EP

Kind code of ref document: A1