CN114639243B - Intelligent traffic prediction and decision method and readable storage medium - Google Patents


Info

Publication number
CN114639243B
Authority
CN
China
Prior art keywords
traffic
data
time
prediction
model
Prior art date
Legal status
Active
Application number
CN202210335583.8A
Other languages
Chinese (zh)
Other versions
CN114639243A (en)
Inventor
朱刚
粟栗
徐健飞
罗长江
Current Assignee
Sichuan Jiuzhou Video Technology Co ltd
Original Assignee
Sichuan Jiuzhou Video Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Jiuzhou Video Technology Co ltd filed Critical Sichuan Jiuzhou Video Technology Co ltd
Priority to CN202210335583.8A
Publication of CN114639243A
Application granted
Publication of CN114639243B
Legal status: Active


Classifications

    • G08G1/01 — Detecting movement of traffic to be counted or controlled (G Physics › G08 Signalling › G08G Traffic control systems › G08G1/00 Traffic control systems for road vehicles)
    • G08G1/0104 — Measuring and analysing of parameters relative to traffic conditions
    • G08G1/0108 — Measuring and analysing of parameters relative to traffic conditions based on the source of data
    • G08G1/0125 — Traffic data processing
    • Y02T10/40 — Engine management systems (Y02 Climate change mitigation › Y02T Transportation › Y02T10/00 Road transport of goods or passengers › Y02T10/10 Internal combustion engine [ICE] based vehicles)

Abstract

The invention discloses an intelligent traffic prediction and decision-making method and a readable storage medium. The method predicts the congestion degree of the corresponding traffic intersection from multiple structural features, imposes low requirements on equipment, runs fast, and can predict intersection congestion over a long horizon. Unstable models are used as base learners for machine learning, and ensemble learning and model fusion reduce the bias and variance of the model, ensuring that its predictions generalize to real scenes. The application supports macroscopic traffic-law prediction from detection data with an accuracy above 85%, and optimizes or coordinates the cycle and release time of the traffic lights at each intersection according to real-time traffic flow information, providing intelligent traffic services.

Description

Intelligent traffic prediction and decision method and readable storage medium
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to an intelligent traffic prediction and decision method and a readable storage medium.
Background
With the rapid development of the economy, resident automobile ownership keeps growing rapidly, and urban/expressway traffic has become a troublesome problem for every city and region. Current road traffic faces two major problems: road congestion and traffic control.
Intelligent transportation is an effective means of addressing road congestion and traffic control, and its keys are intelligent perception, intelligent prediction and intelligent decision-making. An intelligent traffic system acquires multi-source traffic information through intelligent sensing, anticipates road congestion through intelligent prediction and big-data analysis, and provides feasible solutions for road traffic management and control through intelligent decision technology, efficiently alleviating congestion, enabling traffic management departments to control road traffic in time, improving the all-weather capacity of road traffic, and raising the intelligence level of traffic management.
Moreover, with the continued development of domestic urban economies, road traffic pressure is rising and road traffic contradictions are intensifying; the underlying logic and causes of congestion, accidents and similar phenomena are ever more complex. Urban administration needs to raise its standards through stronger intelligent management, applying information technologies such as the internet and big data to improve the scientific, fine-grained and intelligent level of city management.
At present, the development and application of intelligent transportation systems are concentrated in developed countries and regions such as the United States, the European Union and Japan: the United States focuses on safety facilities, Japan on guidance facilities, Europe on basic platforms, and other countries and regions on demonstration projects. The intelligent traffic concept was first introduced into China in the early 1990s; after more than 30 years of development, most urban intelligent-traffic infrastructure and front-end sensing equipment is now initially in place, and the coming challenge is how to build a highly intelligent traffic system capable of processing massive information data.
The application of emerging technologies in the intelligent traffic industry is the future development trend. Innovations in intelligent traffic prediction, coordinated traffic control, traffic big-data analysis and AI-based intelligent applications will promote the intelligent construction of road traffic and greatly empower it. A future traffic management system will have strong storage capacity, rapid computation and scientific analysis capability, better simulate the real world and make predictive judgments, rapidly and accurately extract high-value information from massive data, and provide solutions for management decision-makers.
In the prior art, no industry data resource pool has been formed; the massive, rapid and intelligent character of data platforms is not exploited for typical traffic data, and no deep model that brings the data into full play has emerged. Collection, storage, association and semantic analysis of traffic big data have not been realised, and quantitative road-congestion indices and urban energy-consumption analysis and evaluation are unavailable, so intelligent traffic services such as effective guidance cannot be provided for urban planning.
Therefore, it is desirable to develop an intelligent traffic prediction and decision-making method and a readable storage medium to solve the above problems.
Disclosure of Invention
The invention aims to solve the problems and designs an intelligent traffic prediction and decision method and a readable storage medium.
The invention realizes the purpose through the following technical scheme:
the intelligent traffic prediction and decision method comprises the following steps:
S1, intelligently sensing and monitoring traffic targets, including static traffic target attribute identification and dynamic traffic behavior identification;
S2, extracting features from the multi-source information data obtained by the identification and then fusing them;
S3, processing the fused output data and the map output data into a quantized, spatio-temporal, multi-dimensional set of traffic indices and features, and analyzing the traffic law of a specific space and time to obtain the features of the traffic flow in different spatio-temporal dimensions;
S4, carrying out traffic situation simulation and short-term trend prediction of a specific traffic flow development state based on the features and real-time data of the traffic flow in different spatio-temporal dimensions;
S5, as shown in figure 2, carrying out traffic dispersion and command scheduling according to the traffic situation simulation and short-term trend prediction information; and detecting and recording traffic behaviors according to the features and real-time data in the different spatio-temporal dimensions, and then warning.
Further, the intelligent traffic prediction and decision method further comprises the following steps:
and S6, managing and controlling road vehicles and equipment according to the traffic situation simulation and the short-term trend prediction information.
Furthermore, the intelligent traffic prediction and decision method further comprises the following steps:
and S7, carrying out traffic flow quantitative evaluation and traffic situation evaluation on the traffic situation data processed by the steps S5 and S6, and judging the implementation effect.
Specifically, in S1, the static traffic target attribute identification includes:
detecting traffic targets: the target detection model adopts YOLOv3, which uses a Darknet53-based network structure and, after setting shortcut links, performs object detection with multi-scale feature fusion;
traffic target multi-attribute identification: a multi-task fine-grained traffic target recognition network performs vehicle type recognition, non-motor-vehicle recognition and pedestrian recognition;
dynamic traffic behavior recognition includes: a three-dimensional convolutional neural network automatically extracts the spatio-temporal features of the video sequence, and a deep model integrating feature extraction and classification recognition is trained under supervision, so that the model directly judges whether the corresponding traffic behavior occurs in a video segment.
Specifically, S2 specifically includes:
fusing traffic flow videos: extracting and fusing data features including lane-level headway, saturation headway, queue count at signal turn-on, lost time, flow-direction-level queue length at intersections, and saturation;
fusing data of traffic detection equipment: extracting and fusing the data features of roadside enforcement (electric police) cameras, checkpoint (bayonet) cameras, geomagnetic sensors and radar;
fusing multi-source traffic indices: extracting and fusing the data features of traditional detector data, video detection data and internet data used to compute traffic indices;
radar-vision fusion: establishing a track fitting algorithm that fits radar and video data, attaches the license plate information detected by video to the objects detected by radar, calculates the specific running characteristics of every vehicle at every intersection at every moment to obtain its track, and extracts and fuses the position, speed and steering features in the track.
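The radar-vision fusion step above can be sketched as a simple nearest-neighbour association: radar supplies position and speed, video supplies the license plate, and each radar object adopts the plate of the closest video detection within a gating distance. The function and field names, and the fixed gate, are illustrative assumptions rather than the patent's exact track-fitting algorithm:

```python
import math

def fuse_radar_video(radar_objs, video_objs, gate=3.0):
    """Attach video-detected plate info to radar-detected objects by
    nearest-neighbour position matching within a gating distance (metres).
    Names and the gating rule are illustrative assumptions."""
    fused = []
    for r in radar_objs:
        best, best_d = None, gate
        for v in video_objs:
            d = math.hypot(r["x"] - v["x"], r["y"] - v["y"])
            if d < best_d:
                best, best_d = v, d
        fused.append({
            "x": r["x"], "y": r["y"],
            "speed": r["speed"],                       # radar gives accurate speed
            "plate": best["plate"] if best else None,  # video gives identity
        })
    return fused
```

A real track-fitting algorithm would match whole trajectories over time rather than single frames; the gating idea carries over.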
Specifically, S3 specifically includes:
analyzing traffic flow characteristics: extracting and analyzing traffic flow characteristics for the morning peak, evening peak and off-peak periods of working days and weekends, reflecting the trend of traffic flow at different intersections, road sections and trunk lines of a city in a specific period, and forming a trend curve for each date;
analyzing the traffic flow situation: based on internet data and video detection data, performing regional, road-section and intersection traffic operation analysis, traffic flow statistical analysis and traffic-composition statistical analysis; obtaining real-time traffic congestion index, average road-network speed and congested-mileage data;
analyzing the causes of traffic congestion: based on information about frequently congested road sections, frequent congestion periods and daily average congestion duration, analyzing the spatio-temporal information of congestion events and their causative events, associating suspected causative events with each congestion event, and ranking the associated causative events by confidence.
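A minimal sketch of the causative-event ranking described above: each candidate event receives a confidence score from road-section match and temporal proximity, and candidates are returned in descending confidence. The scoring weights are invented for illustration; the patent does not specify them:

```python
def rank_causes(jam, events):
    """Rank candidate causative events for one congestion event by a simple
    confidence: same road section (+0.6) plus temporal proximity (up to +0.4,
    decaying with the time gap in hours). Weights are illustrative assumptions."""
    scored = []
    for e in events:
        conf = 0.0
        if e["road"] == jam["road"]:
            conf += 0.6                            # spatial association
        gap = abs(e["time"] - jam["time"])
        conf += max(0.0, 0.4 - 0.1 * gap)          # temporal association
        scored.append((conf, e["id"]))
    return [eid for conf, eid in sorted(scored, reverse=True)]
```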
Specifically, S4 specifically includes:
intelligent traffic simulation analysis: simulating the feature changes and situation changes of the traffic flow over a future time and space by a simulation method, thereby providing quantitative data for predicting the future trend of the traffic flow;
trend prediction model analysis: predicting the traffic situation through a trend prediction model based on traffic flow statistical results, internet congestion assessments and intelligent traffic simulation data, and computing day-level predictions of traffic flow, traffic trend and congestion trend;
short-term prediction model analysis: establishing a short-term traffic prediction model that matches the real scene, based on historical data and a neural-network prediction method, and performing hour-level traffic prediction.
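As a stand-in for the unspecified neural short-term model, hour-level flow can be forecast from history with simple exponential smoothing. This baseline only illustrates the interface (history in, short-horizon forecast out) and is not the patent's model:

```python
def forecast_flow(history, steps=3, alpha=0.5):
    """One-parameter exponential smoothing as an assumed baseline for
    short-term traffic forecasting. history: past flow counts per interval;
    returns `steps` flat forecasts of the smoothed level."""
    if not history:
        raise ValueError("history must be non-empty")
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level  # blend new observation into level
    return [level] * steps
```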
The intelligent traffic prediction and decision system for multi-source information fusion comprises:
a multi-source information awareness system;
a multi-source information fusion system;
a traffic situation analysis system;
a traffic situation prediction system;
a traffic control decision system;
a road traffic control system;
a traffic control evaluation system;
the traffic situation prediction system is also connected with the road traffic control system, and the traffic control evaluation system is respectively connected with the traffic control decision system and the road traffic control system.
A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform an intelligent traffic prediction and decision method.
The invention has the beneficial effects that:
the method has the advantages that the congestion degree of the corresponding traffic intersection is predicted through various structural characteristics, the requirement on equipment is low, the speed is high, meanwhile, the congestion degree of the intersection in a long time can be predicted, an unstable model is used as machine learning training, finally, the deviation and the variance of the model are reduced through integrated learning and model fusion, and the generalization capability of the prediction result of the model in an actual scene is guaranteed; the application supports the macroscopic traffic law prediction by using detection data, and the prediction accuracy rate is more than 85%; the period and the release time of each intersection traffic light are optimized according to the real-time traffic flow information or regulation and control are carried out in a matched mode, and intelligent traffic service is provided.
Drawings
FIG. 1 is a system architecture diagram of the present invention;
FIG. 2 is a schematic structural diagram of an urban traffic analysis module according to the present invention;
FIG. 3 is a flow chart of an intelligent traffic prediction and decision method;
FIG. 4 is a schematic diagram of deep learning model training according to the present invention;
FIG. 5 is a diagram illustrating a basic residual block structure;
FIG. 6 is a schematic diagram of a network structure of a D3DConvNet model;
FIG. 7 is a graph showing the 3DConvNet model accuracy (%) for different learning rates and different cycle numbers;
FIG. 8 is a flow chart of a multi-target vehicle trajectory real-time construction method;
fig. 9 is a frame diagram of a traffic flow parameter detection algorithm based on sparse features.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it is to be understood that the terms "upper", "lower", "inside", "outside", "left", "right", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, or the orientations or positional relationships that the products of the present invention are conventionally placed in use, or the orientations or positional relationships that are conventionally understood by those skilled in the art, and are used for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like are used merely to distinguish one description from another, and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it is also to be noted that, unless otherwise explicitly stated or limited, the terms "disposed" and "connected" are to be interpreted broadly, and for example, "connected" may be a fixed connection, a detachable connection, or an integral connection; can be mechanically or electrically connected; the connection may be direct or indirect via an intermediate medium, and may be a communication between the two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The following detailed description of embodiments of the invention refers to the accompanying drawings.
As shown in fig. 3, the intelligent traffic prediction and decision method includes the following steps:
S1, intelligently sensing and monitoring traffic targets, including static traffic target attribute identification and dynamic traffic behavior identification;
S2, extracting features from the multi-source information data obtained by the identification and then fusing them;
S3, processing the fused output data and the map output data into a quantized, spatio-temporal, multi-dimensional set of traffic indices and features, and analyzing the traffic law of a specific space and time to obtain the features of the traffic flow in different spatio-temporal dimensions;
S4, carrying out traffic situation simulation and short-term trend prediction of a specific traffic flow development state based on the features and real-time data of the traffic flow in different spatio-temporal dimensions;
S5, carrying out traffic dispersion and command scheduling according to the traffic situation simulation and short-term trend prediction information; and detecting and recording traffic behaviors according to the features and real-time data in the different spatio-temporal dimensions, and then warning.
And S6, managing and controlling road vehicles and equipment according to the traffic situation simulation and the short-term trend prediction information.
And S7, carrying out traffic flow quantitative evaluation and traffic situation evaluation on the traffic situation data processed in the steps S5 and S6, and judging the implementation effect.
Specifically, in S1, the static traffic target attribute identification includes:
detecting traffic targets: the target detection model adopts YOLOv3, which uses a Darknet53-based network structure and borrows from the residual network method: shortcut links deepen the network and enrich both low-level and high-level features, and multi-scale feature fusion then performs object detection, accurately identifying small targets;
traffic target multi-attribute identification: a multi-task fine-grained traffic target recognition network performs vehicle type recognition, non-motor-vehicle recognition and pedestrian recognition. The multi-task fine-grained traffic target recognition network is an optimised deep residual network (DRN), consisting mainly of: (1) a feature extraction module formed by a ResNet deep convolutional neural network; and (2) a multi-task loss module combining fine-grained target recognition with target type classification. Although the type classification task mainly serves as an auxiliary task that supplies extra information to the fine-grained recognition task for a regularising effect, it has good application prospects of its own; the two tasks promote each other by providing gradient information to their shared feature layers. The multi-task loss function is

L(s_M, g_M; s_T, g_T) = L_model(s_M, g_M) + λ·L_type(s_T, g_T)

where L_model is the loss of the fine-grained target recognition task and L_type the loss of the target type classification task, both softmax cross-entropy losses; s_M and g_M are the model prediction and ground truth of the fine-grained recognition task, and s_T and g_T those of the type classification task. The hyper-parameter λ controls the proportion of the two task losses in the total loss.
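The multi-task loss above can be computed directly. This minimal sketch uses plain softmax cross-entropy per task and weights the auxiliary type task by λ (`lam`); the function names are illustrative:

```python
import math

def softmax_ce(scores, label):
    """Softmax cross-entropy for one sample: -log p(label)."""
    mx = max(scores)                               # stabilise the exponentials
    exps = [math.exp(s - mx) for s in scores]
    return -math.log(exps[label] / sum(exps))

def multitask_loss(s_model, g_model, s_type, g_type, lam=0.5):
    """L = L_model + lam * L_type, both softmax cross-entropy, matching the
    multi-task loss in the text; `lam` weights the auxiliary type task."""
    return softmax_ce(s_model, g_model) + lam * softmax_ce(s_type, g_type)
```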
The training of the deep learning model centres on the feature extraction stage. Following the idea of metric learning, a Center Loss function is introduced during training to enhance the discriminative ability of the model, and a threshold damping impulse factor is introduced into the gradient descent algorithm to accelerate the update of the Center Loss feature centers while keeping training stable; the basic structure of the model is shown in fig. 4.
Based on the prior that deepening a network under identity mappings causes no degradation, the DRN adds shortcut connections when constructing the network, so that the output of each layer is no longer the plain mapping of its input, as in a traditional neural network, but the superposition of the mapping and the input; the basic residual block structure is shown in fig. 5.
where f(x) denotes the mapping of each layer through its weight matrix, i.e.

f(x) = W·x

where x is the input of the residual structure and W the weight matrix of the layer. The output of the first layer is

f(x) = W_1·x

which, activated by the ReLU function, gives the input of the second layer, σ(W_1·x), where σ denotes the nonlinear activation ReLU. After the second-layer weight matrix W_2 is applied, f(x) becomes

f(x) = W_2·σ(W_1·x)

and the shortcut identity mapping yields the output

y = f(x) + x

f(x) is added element-wise to x; if their dimensions differ, x must first pass through a linear mapping W_s to match dimensions, i.e.

y = f(x) + W_s·x

that is,

y = W_2·σ(W_1·x) + W_s·x

The final output of the residual network structure passes through a ReLU activation to guarantee non-negativity. In addition, a residual block usually needs more than two layers; a single-layer residual block, y = W_1·x + x, brings no improvement. A depth model combining series and parallel connections is constructed, in which each residual unit is a two-layer residual structure connected in parallel, and the ROC curve is finally used as the evaluation standard of the depth model's performance.
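The two-layer residual mapping y = ReLU(W_2·σ(W_1·x) + shortcut) can be written out explicitly. `Ws` is the optional projection used only when the dimensions of f(x) and x differ; the tiny dense-layer helpers are for illustration:

```python
def matvec(W, x):
    """Dense layer: matrix W (list of rows) times vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(x):
    return [max(0.0, v) for v in x]

def residual_block(x, W1, W2, Ws=None):
    """y = ReLU( W2 @ relu(W1 @ x) + shortcut ): the two-layer residual
    mapping from the text, with a final ReLU guaranteeing non-negativity.
    Ws is the linear projection used only when dimensions differ."""
    f = matvec(W2, relu(matvec(W1, x)))
    shortcut = x if Ws is None else matvec(Ws, x)
    return relu([a + b for a, b in zip(f, shortcut)])
```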
When measuring the distance between data, the Euclidean or cosine distance between feature vectors cannot simply be used, because same-class features are not linearly distributed in a high-dimensional space. Instead, a suitable criterion for the distance between data samples is learned from the statistics of existing labelled data, so that the distance between same-class data shrinks and the distance between different-class data grows. The aim of metric learning is to find a suitable projection matrix that re-projects the input features in the high-dimensional space, increasing the inter-class distance and decreasing the intra-class distance, thereby simplifying the classification problem. Define

S = {(x_i, x_j) | x_i and x_j belong to the same class}

D = {(x_i, x_j) | x_i and x_j belong to different classes}

where x_i and x_j denote the i-th and j-th features in a batch, S the set formed by same-class data and D the set formed by different-class data. The objective function of metric learning can therefore be defined as

min_P Σ_{(x_i, x_j)} ℓ(P; x_i, x_j) + λ·r(P)

where P denotes the projection matrix learned by the metric, r(P) its associated constraint term, ℓ(·) the loss function, and λ a constant regularisation coefficient.
Based on this, the Center Loss algorithm constructs a new loss function:

L_C = (1/2)·Σ_{i=1}^{m} ||x_i − c_{y_i}||_2^2

where x_i denotes the i-th of the m features in a batch, usually a one-dimensional vector; y_i denotes the index, among the feature centers of all classes, of the center that x_i corresponds to; and c_{y_i} denotes the feature center numbered y_i, which belongs to the same class as x_i.
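The Center Loss definition above is straightforward to compute over a batch; `centers` maps each class index to its current feature center, and the names are illustrative:

```python
def center_loss(features, labels, centers):
    """L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 over a batch, as defined above.
    features: list of vectors; labels: class index per feature;
    centers: dict mapping class index -> center vector."""
    total = 0.0
    for x, y in zip(features, labels):
        c = centers[y]
        total += sum((a - b) ** 2 for a, b in zip(x, c))
    return 0.5 * total
```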
Most existing depth models are trained with gradient descent and its variants. Taking Softmax Loss as an example, the loss function can be expressed as

L_S = −Σ_{i=1}^{m} log( e^{W_{y_i}^T·x_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j^T·x_i + b_j} )

where x_i ∈ R^d denotes the i-th feature vector, belonging to class y_i (d is the feature dimension); W_j ∈ R^d denotes the j-th column of the weight W ∈ R^{d×n} of the last fully connected layer and b ∈ R^n the corresponding bias (threshold); n is the total number of classes and m the number of samples in each training batch.
A model trained with the Softmax Loss algorithm alone has low cohesion: projecting its output features onto a two-dimensional plane shows that, although samples of different classes are indeed separated, the inter-class feature distances are small while the intra-class distances are large, so the feature vectors of the classes overlap one another in high-dimensional space and the practical engineering effect is poor. Center Loss and Softmax Loss can be combined to train the model jointly. To balance the two, a hyper-parameter λ is introduced, giving the total loss function

L = L_S + λ·L_C

Softmax Loss mainly trains the classification ability of the model, while the Center Loss function mainly trains its discriminative ability.
The Center Loss algorithm needs many batches of training to settle the final position of each class feature center, so training is very slow; moreover, it depends heavily on the training samples of a single batch, so updates are unstable and convergence is difficult. Increasing the feature dimension projects samples into a higher-dimensional feature space, and adjusting the learning rate and weight-decay coefficient of the last fully connected layer alleviates the problem, but not markedly. A damping-and-impulse notion is therefore introduced, and a threshold damping impulse factor is designed to accelerate the update of the Center Loss feature centers.
Damping refers to the ability of a system to dissipate energy, converting mechanical vibration into heat or other dissipatable energy and thereby reducing vibration. With damping, the stability of a system is greatly enhanced and it quickly returns to a steady state when disturbed by noise. If damping acts at all times, however, the whole system regulates very slowly and wastes much energy, so a threshold damping is designed to replace ordinary damping. Impulse imitates the inertia of a moving object: the gradient of the current training batch fine-tunes the final update direction while the previous update direction is partially retained, which stabilizes training, speeds up learning, and helps avoid local optima. Furthermore, thanks to the threshold damping, the iterative trajectory of the feature centers no longer jitters severely and reaches the optimal position faster.
Assuming the variable to be adjusted is x, with per-step raw adjustment δx, the threshold damping impulse factor regulates the variable as:

$$\Delta x^{t+1} = \rho \cdot \Delta x^{t} + \delta x^{t+1}, \qquad x^{t+1} = x^{t} - \Delta x^{t+1}$$
where ρ is the defined threshold damping impulse factor, which is typically a positive number less than 1.
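As an illustrative sketch only (the patent's exact regulation formula is given as an image), a momentum-style update with such a factor might look like the following, where the inertia term is dropped once the raw gradient falls below the threshold; all names here are hypothetical:

```python
import numpy as np

def damped_momentum_update(x, v, grad, rho=0.9, lr=0.5, threshold=1e-3):
    """One update step with a threshold 'damping impulse' factor rho.

    Illustrative interpretation: while the gradient magnitude exceeds the
    threshold, part of the previous update direction (inertia) is kept;
    below it, the damping engages and the inertia is dropped so the
    variable settles instead of oscillating.
    """
    if np.linalg.norm(grad) > threshold:
        v = rho * v + grad   # keep part of the previous direction (impulse)
    else:
        v = grad             # threshold damping: drop the accumulated inertia
    return x - lr * v, v
```

With ρ close to 1 the update retains most of its previous direction, accelerating movement toward the center while the threshold prevents jitter near convergence.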
First, expanding the loss function gives the combined loss of Softmax Loss and Center Loss:

$$L = L_S + \lambda L_C = -\sum_{i=1}^{m} \log \frac{e^{W_{y_i}^{T} x_i + b_{y_i}}}{\sum_{j=1}^{n} e^{W_{j}^{T} x_i + b_j}} + \frac{\lambda}{2} \sum_{i=1}^{m} \left\| x_i - c_{y_i} \right\|_2^2$$
This function is the basis of the whole Center Loss algorithm; its partial derivatives are used to update all parameters to be adjusted in the whole model. For the weight W of the Center Loss layer in the deep model, the update formula is:

$$W^{t+1} = W^{t} - \mu^{t}\,\frac{\partial L_S}{\partial W^{t}}$$

where t denotes the training iteration and μ the learning rate.
For the other adjustable parameters $\theta_C$ of the model (weights and biases), the update formula is:

$$\theta_C^{t+1} = \theta_C^{t} - \mu^{t} \sum_{i=1}^{m} \frac{\partial L}{\partial x_i} \cdot \frac{\partial x_i}{\partial \theta_C^{t}}$$

where $\frac{\partial L}{\partial x_i}$ represents the back-propagated error of the whole network, computed as:

$$\frac{\partial L}{\partial x_i} = \frac{\partial L_S}{\partial x_i} + \lambda \frac{\partial L_C}{\partial x_i} = \frac{\partial L_S}{\partial x_i} + \lambda \left( x_i - c_{y_i} \right)$$
The Center Loss feature centers $c_j$ cannot be defined once over the whole training set, so they are updated along with the training process, gradually approaching an optimal set of center vectors in an iterative manner. The update formulas are:

$$\Delta c_j^{t} = \frac{\sum_{i=1}^{m} \delta(y_i = j)\,\left( c_j^{t} - x_i \right)}{1 + \sum_{i=1}^{m} \delta(y_i = j)}$$

$$c_j^{t+1} = c_j^{t} - \alpha \cdot \Delta c_j^{t}$$

where $\delta(\text{condition}) = 1$ if the condition holds and $0$ otherwise.
The coefficient α controls the update speed of the feature centers so as to avoid jitter.
The update term of the original Center Loss algorithm is redefined as the average gradient of feature center $c_j$ over all input samples in the same training batch:

$$\overline{\Delta c_j^{t}} = \frac{\sum_{i=1}^{m} \delta(y_i = j)\,\left( c_j^{t} - x_i \right)}{1 + \sum_{i=1}^{m} \delta(y_i = j)}$$
Then, by introducing the threshold damping impulse factor ρ, the iterative update of the feature center vectors becomes:

$$\Delta c_j^{t+1} = \rho \cdot \Delta c_j^{t} + \overline{\Delta c_j^{t+1}}$$

$$c_j^{t+1} = c_j^{t} - \alpha \cdot \Delta c_j^{t+1}$$
Through this change, the feature-center iteration of the Center Loss algorithm is accelerated markedly and stably, and convergence when training the deep model becomes faster. The improved Center Loss algorithm can be summarized as follows:

[Algorithm listing: improved Center Loss training procedure with threshold damping impulse factor]
Optimizing the deep residual network with this Center Loss accelerates the feature-center iteration markedly and stably; convergence during deep-model training becomes faster, efficiency improves, and the discriminative power of the model increases, giving traffic target recognition the advantages of high efficiency, many recognizable categories, and easy extension. Compared with hand-crafted-feature methods, the resulting approach is simple and efficient, and has considerable prospects and value when applied in road monitoring equipment for real-time recognition of vehicle and non-vehicle targets. Tests show that the traffic target recognition scheme of this project recognizes more than 15 motor-vehicle characteristics, more than 16 vehicle types, more than 200 common vehicle brands, more than 12 body colors, more than 6 markers, and more than 3 vehicle violation types; the system supports intelligent recognition of non-motor-vehicle type, vehicle posture, body color, passenger gender, nationality, passenger head features, upper-body features, and so on, and vehicle safety recognition based on vehicle driving state and driver state achieves a recognition accuracy above 85%.
Static traffic target attribute recognition also includes deep learning model training:
1. Data processing: adjust the training samples to the input size required by the model, preprocess the data according to the different recognition features (e.g., image graying and denoising), and finally standardize them.
2. Model initialization: because the motor-vehicle data have a certain similarity to the ImageNet dataset and the data scale is large, the weights of a model trained on ImageNet are used as the initial weights; the low-level convolutional layers are frozen, the other layers are set as trainable, and training proceeds with a reduced learning rate.
3. Training: the training process is monitored through callback functions; the weights with the minimum loss on the test set are saved as the model's final weight file, and training ends when the test-set loss has not decreased for N consecutive epochs.
4. Prediction: the regions of interest returned by the vehicle detection module are recognized.
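The early-stopping logic of step 3 can be sketched in plain Python; `losses` stands for the per-epoch test-set losses, and the comment marks where a callback would save the weights (function and variable names are illustrative, not the patent's code):

```python
def train_with_early_stopping(losses, patience):
    """Scan per-epoch test-set losses; return (best_epoch, stop_epoch).

    Mirrors the callback behaviour described above: the epoch with the
    minimum test-set loss is the one whose weights would be saved, and
    training stops once the loss has not decreased for `patience` epochs.
    """
    best_loss, best_epoch, wait = float('inf'), -1, 0
    for epoch, loss in enumerate(losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0  # save weights here
        else:
            wait += 1
            if wait >= patience:   # N consecutive epochs without improvement
                return best_epoch, epoch
    return best_epoch, len(losses) - 1
```

In a deep learning framework the same behaviour is typically obtained with ready-made early-stopping and checkpoint callbacks monitoring the validation loss.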
The static traffic target attribute identification further comprises a vehicle multi-attribute identification process:
The YOLOv3 model detects targets in the video in real time, returns the class of each detected target (e.g., motor vehicle, person, bicycle, motorcycle), and marks the classes and positions in the video frame in real time. The detected targets are divided into motor vehicles and non-motor vehicles according to class, cropped from the original image using the returned ROI (region of interest) position coordinates, and passed to the corresponding attribute recognition models for feature recognition.
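The class-based split and ROI cropping can be sketched as follows (the class names and the motor/non-motor grouping are illustrative assumptions, not taken from the patent's code):

```python
import numpy as np

# Motor/non-motor grouping is an assumption for illustration; the patent
# only states that detections are split into the two groups before the
# corresponding attribute recognition models are called.
MOTOR_CLASSES = {'car', 'truck', 'bus', 'motorcycle'}

def crop_rois(frame, detections):
    """frame: H x W x 3 image array; detections: (class, x1, y1, x2, y2) tuples."""
    results = []
    for cls, x1, y1, x2, y2 in detections:
        roi = frame[y1:y2, x1:x2]          # cut the target out of the original image
        group = 'motor' if cls in MOTOR_CLASSES else 'non-motor'
        results.append((group, cls, roi))  # dispatch roi to the group's attribute model
    return results
```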
The system recognizes at least 15 motor-vehicle characteristics, more than 16 vehicle types, more than 200 common vehicle brands, more than 12 body colors, more than 6 markers, and more than 3 vehicle violation types;
the system supports intelligent recognition of non-motor-vehicle type, vehicle posture, body color, passenger gender, nationality, passenger head features, upper-body features, and so on, and vehicle safety recognition based on vehicle driving state and driver state achieves a recognition accuracy above 85 percent.
Dynamic traffic behavior recognition includes: automatically extracting the spatio-temporal features of a video sequence with a three-dimensional convolutional neural network, and training, in a supervised manner, a deep model that integrates feature extraction and classification recognition, so that the deep model directly judges whether the corresponding traffic behavior exists in a video segment;
in this embodiment, two deep network models are constructed by replacing the two-dimensional convolution kernels of a conventional CNN with three-dimensional kernels: 3DConvNet and D3DConvNet. 3DConvNet is relatively simple to construct, with a shallow network, and is mainly used to verify the performance of 3D convolution in extracting spatio-temporal features. Earlier research experiments revealed some shortcomings of the 3DConvNet model: its numbers of network layers and of output feature maps are small and its parameters are few, so it can be trained and run on a computer of ordinary computing power, but such a simple structure has limited capability for extracting and modeling features of video data. The model structure was therefore improved to give the deeper D3DConvNet model.
The network structure of the D3DConvNet model is shown in fig. 6. It consists of 20 layers: 1 input layer, 8 3D convolutional layers, 6 pooling layers, 4 fully connected layers, and a LogSoftmax classification layer; the corresponding parameters of each layer are given there. Because the model adopts a more complex structure, it can process higher-dimensional video data: the input is still 40 consecutive frames, but the image size is increased to 128 × 128 pixels, and the input images are three-channel color images, providing more data information to the network.
In the three-dimensional convolutional layers, the convolution kernels are uniformly set to 3 × 3 × 3 pixels, and the feature maps are padded during convolution so that a feature map keeps the same size after convolution as before. Three-dimensional pooling is likewise used in the pooling layers, which speeds up the extraction of the video's temporal information and removes much of its redundancy; the pooling factor is set to 2 × 2 pixels. In the first pooling layer, however, the model down-samples the video input only on the spatial scale, to ensure that the video's temporal information does not disappear too quickly in the model's shallow layers.
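The effect of making the first pooling layer spatial-only can be checked with a small shape calculation, assuming same-padded 3 × 3 × 3 convolutions (which preserve size) and the input dimensions given above:

```python
def pool3d_shape(shape, kt, kh, kw):
    """Output (frames, height, width) after a (kt, kh, kw) pooling step."""
    t, h, w = shape
    return (t // kt, h // kh, w // kw)

s = (40, 128, 128)              # 40 consecutive frames of 128 x 128 pixels
s1 = pool3d_shape(s, 1, 2, 2)   # first pooling: spatial only, time preserved
s2 = pool3d_shape(s1, 2, 2, 2)  # later pooling halves every dimension
```

After the first pooling the temporal dimension is still 40, so the shallow layers retain the full motion information; only later pooling halves it.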
In the 3D-convolution-based traffic behavior recognition method of this embodiment, 3D convolution effectively extracts the spatio-temporal features of the video, and the deep network model built from it integrates feature extraction and feature classification in the same network; after supervised training on a large number of data samples, the model automatically learns the video's spatio-temporal features and thereby detects traffic behaviors. Corresponding D3DConvNet models are trained to recognize 8 traffic events, such as illegal parking, wrong-way driving, overspeed, congestion, traffic accidents, and dangerous lane changes, and the effectiveness of the models in recognizing these behaviors is verified. The data set comes from actual monitoring videos of each scene; the traffic video of each scene contains 1000 video instances, 300 of which contain the corresponding event while the remaining 700 are normal.
Each video segment is manually normalized to 40 image frames, down-sampled to 60 × 90 pixels, and converted to grayscale. The data set is divided into two parts: 200 dangerous-lane-change video segments and 400 normal segments are used to train the model, and the remaining 400 samples are used to test the trained model's performance.
During training, the model is trained with stochastic gradient descent (SGD); the batch size is set to 20, i.e., the model parameters are updated once every 20 video segments; mean pooling is used in the pooling layers and the Sigmoid function serves as the nonlinear activation. The learning rate and number of training epochs are selected by grid search: the candidate epoch counts are 10, 20, and 50, and the learning rate varies from 1.0 to 0.1; to reduce the number of tests, a round of search ends as soon as a drop in accuracy is observed. As shown in fig. 7, the model achieves the highest accuracy with a learning rate of 1.0 and 20 training epochs. Note that as the number of epochs grows, the model fits the sample data better, so recognition accuracy rises continuously; with further increases, however, accuracy drops because the model overfits the training samples.
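The grid search with early abort described above can be sketched as follows, with the `accuracy` callback standing in for a full train-and-evaluate run (a hypothetical helper, not the patent's code):

```python
def grid_search(accuracy, lrs, epoch_options):
    """Search learning rate x epoch count; abort a round once accuracy drops.

    `accuracy(lr, n)` trains for n epochs at learning rate lr and returns
    the recognition accuracy on the test set.
    """
    best = (None, None, -1.0)
    for lr in lrs:
        prev = -1.0
        for n in epoch_options:
            acc = accuracy(lr, n)
            if acc < prev:
                break            # accuracy dropped: end this round of search
            prev = acc
            if acc > best[2]:
                best = (lr, n, acc)
    return best
```

The early break implements the rule that a round of search ends as soon as accuracy is observed to fall, saving the remaining (larger) epoch counts in that round.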
The trained and optimized D3DConvNet model performs well on the 8 traffic events, such as illegal parking, wrong-way driving, overspeed, congestion, traffic accidents, and dangerous lane changes, with an accuracy above 90 percent.
Specifically, S2 includes:
fusing traffic flow videos: extracting and fusing data features including lane-level headway, saturation headway, queue count at signal turn-on, lost time, intersection flow-direction-level queue length, and saturation;
fusing traffic detection equipment data: extracting and fusing data features from field electric-police cameras, checkpoints, geomagnetic detectors, and radar;
fusing multi-source traffic indexes: extracting and fusing data features of traditional detector data, video detection data, and internet data to realize traffic indexes/indicators;
fusing radar and vision: establishing a trajectory-fitting algorithm that fits radar and video data, attaching the license plate information detected by video to the objects detected by radar, computing the specific running characteristics of each vehicle at each intersection at each moment to obtain each vehicle's trajectory, and extracting and fusing the vehicle position, speed, and steering information features from the trajectory.
In this embodiment, constructing vehicle motion trajectories is a key step in vehicle behavior recognition. A vehicle motion trajectory is the sequence of feature points formed at successive frame times as a vehicle passes through a traffic video monitoring scene. This application proposes a method, based on a deep learning model over fused image-convolution and HOG features of traffic monitoring video, that is low in cost, widely applicable, and more accurate: it can construct the driving trajectories of multiple target vehicles directly from the traffic monitoring video, simultaneously, accurately, and in real time.
The overall steps of the method for constructing the multi-target vehicle track in real time based on the traffic monitoring video are shown in FIG. 8, and the method integrally comprises five steps, which are respectively as follows: the method comprises the steps of collecting a traffic video, extracting and fusing characteristics of traffic video images, recognizing license plates based on the video, tracking a plurality of video vehicle targets and constructing vehicle motion tracks.
The basic flow of the video-based multi-vehicle target motion trajectory construction method is as follows. First, traffic data at the angle of a batch of traffic monitoring cameras are collected and labeled for subsequent model training. Then the convolutional features and HOG features of the traffic video images are extracted with a trained deep learning model and an HOG feature extraction tool, respectively, and fused. Based on the fused features, a yolo-v3-based license-plate-number detection model is trained on the one hand, and on the other hand real-time, accurate tracking of multiple vehicle targets is completed with a spatio-temporal-regularized correlation filtering method. Finally, the license plate information and driving trajectory of each vehicle are recorded by combining the tracking and license plate detection results. Compared with vehicle trajectory tracking methods based on auxiliary equipment such as GPS, this reduces the cost of installing and maintaining additional equipment and is more practical; compared with tracking algorithms based on traditional background analysis, it is more accurate, with a notably better trajectory construction effect under occlusion and deformation.
Step 1, video preprocessing and calibration. Collect several hours of traffic video at the traffic monitoring camera's angle under different traffic flow rates; save one picture every 20 frames; crop the pictures to a standard size; divide them into training/validation/test sets in the ratio 7:1:2; and store them as two sample sets. Sample set 1: add class labels for more than 10 common vehicle types (cars, trucks, buses, etc.) as training samples for the convolutional-feature-extraction base model. Sample set 2: add license plate detection boxes and license plate recognition labels as training samples for the license plate recognition model.
Step 2, feature extraction and fusion of the video images. (1) Convolutional feature extraction: download a VGGM model pre-trained on ImageNet ILSVRC image classification, and further train and tune the base model's hyper-parameters with sample set 1 from step 1. After the input video image is normalized to the standard format, the trained VGGM model extracts its convolutional features; the output feature maps of the 3rd and 5th convolutional layers are selected as the convolutional features of the video image. (2) HOG feature extraction: HOG features capture texture, and because computing them involves graying and normalizing the picture, the influence of color, lighting, and the like is reduced. The Histogram of Oriented Gradients (HOG) features of the image are computed rapidly, mainly using the hog() function of the PDollar Toolbox. (3) Feature fusion: a weighted-sum method with max-pooling down-sampling yields a fixed-size feature map of the video image that fuses the convolutional and HOG features.
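The weighted-sum fusion with max-pooling down-sampling of step (3) can be sketched as follows, assuming two single-channel feature maps and a fixed 8 × 8 output; the sizes and the 0.5 weight are illustrative assumptions:

```python
import numpy as np

def max_pool(m, out_h, out_w):
    """Max-pool a 2-D map down to a fixed (out_h, out_w) grid."""
    h, w = m.shape
    m = m[:h - h % out_h, :w - w % out_w]        # crop so blocks divide evenly
    bh, bw = m.shape[0] // out_h, m.shape[1] // out_w
    return m.reshape(out_h, bh, out_w, bw).max(axis=(1, 3))

def fuse_features(conv_map, hog_map, weight=0.5, out=(8, 8)):
    """Pool both maps to the same fixed size, then take a weighted sum."""
    return weight * max_pool(conv_map, *out) + (1 - weight) * max_pool(hog_map, *out)
```

Pooling both maps to the same fixed grid is what lets differently sized convolutional and HOG maps be combined into one feature map.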
Step 3, vehicle license plate number recognition. Based on the fused features of the video images, a pre-trained yolo-v3-based target detection network, after offline learning with sample set 2 from step 1, detects the vehicle license plate numbers in the video images.
Step 4, real-time multi-vehicle target tracking. Based on the fused features of the video images, the multiple vehicle targets are tracked with the STDCF algorithm. Let $\{x_k\}$ denote a sample set drawn from multiple frames of the video image, where each sample $x_k$ has the $D$ fused feature maps of size $M \times N$ obtained in step 2, and $y_k$ is its pre-labeled regression target value. The STDCF objective function is shown in formula 1, where $x_t$ and $y$ denote the samples and sample labels learned up to frame $t$, $\cdot$ denotes the Hadamard product, $*$ denotes the convolution operation, $f$ is the correlation filter being learned, $f_{t-1}$ is the correlation filter learned at frame $t-1$, and $\alpha$ is the regularization parameter.

$$\arg\min_{f}\ \frac{1}{2}\left\| \sum_{d=1}^{D} x_t^{d} * f^{d} - y \right\|^2 + \frac{1}{2} \sum_{d=1}^{D} \left\| w \cdot f^{d} \right\|^2 + \frac{\alpha}{2} \left\| f - f_{t-1} \right\|^2 \qquad (1)$$

In formula 1, $\left\| f - f_{t-1} \right\|^2$ is a temporal regularization term: the filter $f_{t-1}$ learned from the previous frame is trained together with the samples of the current frame to produce the new filter, which avoids degrading the trained filter's performance when the occlusion/deformation in the current frame is too severe. The term $\left\| w \cdot f^{d} \right\|^2$ is a spatial regularization term that reduces the influence of the boundary effect on filter performance. The STDCF objective is a convex optimization problem, and the global optimum can be obtained with the alternating direction method of multipliers (ADMM), completing the training of the multi-target real-time tracking model.
Step 5, generating the motion trajectories of multiple vehicle targets. Using the STDCF-based multi-target tracking model trained in step 4, the traffic video data stream, saved as one picture every 10 frames, is taken as input, yielding for each vehicle target $i$ in each frame its class and the corresponding tracking box coordinates $((x_s, y_s), (x_e, y_e))$, where $(x_s, y_s)$ is the upper-left corner of the tracking box and $(x_e, y_e)$ the lower-right corner. From the tracking box coordinates, the centroid $(x_c, y_c)$ of vehicle target $i$ is computed by formula 2; the centroids of the same vehicle target are recorded in time order and drawn in the original coordinate system in sequence to obtain the driving trajectory of target $i$. Combined with the detected license plate information, this records the trajectories of multiple vehicle targets in real time and can be used to help analyze vehicle behavior in real time.

$$x_c = \frac{x_s + x_e}{2}, \qquad y_c = \frac{y_s + y_e}{2} \qquad (2)$$
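A minimal sketch of formula 2 and the trajectory recording step (the box format and function names are assumed for illustration, not taken from the patent's code):

```python
def centroid(box):
    """Midpoint of a tracking box given as ((xs, ys), (xe, ye)) opposite corners."""
    (xs, ys), (xe, ye) = box
    return ((xs + xe) / 2.0, (ys + ye) / 2.0)

def build_trajectory(boxes_per_frame):
    """Record the centroid of one vehicle target frame by frame, in time order."""
    return [centroid(b) for b in boxes_per_frame]
```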
In general, the method requires no installation or maintenance of additional equipment such as GPS units or auxiliary cameras, and its robustness to mutual occlusion of targets and large trajectory deformation is enhanced.
Specifically, S3 includes:
traffic flow characteristic analysis: analyzing and extracting traffic flow characteristics along the time dimensions of morning/evening peak and off-peak on working days and weekends, reflecting the traffic flow trends of different intersections, road sections, and trunk lines in the city during specific periods, and forming a trend curve for each date;
traffic flow situation analysis: based on internet data and video detection data, performing regional traffic operation analysis, road-section traffic operation analysis, intersection traffic operation analysis, traffic flow statistical analysis, and traffic flow composition analysis; obtaining real-time traffic congestion index, average road-network speed, and congested-mileage data;
traffic congestion cause analysis: analyzing the spatio-temporal information of traffic congestion events and their causal events based on information about frequently congested road sections, frequent congestion periods, and daily average congestion duration; associating suspected causal events with each congestion event and ranking the associated causal events by confidence.
In this embodiment, the traffic flow parameter detection scheme is as follows: 1) pre-calibrated multi-angle traffic video data are trained with an SSD target detection algorithm based on multi-scale feature maps to obtain a vehicle detection deep learning model, which detects vehicle types and vehicle position coordinates in the video in real time; 2) the conversion between video coordinates and real coordinates is computed by a camera self-calibration method based on vanishing-point detection, to measure road length, vehicle displacement, and so on; 3) vehicles entering the picture are tracked by a kernelized correlation filter tracking algorithm combined with the vehicle target detection algorithm; 4) vehicles entering the picture are timed by combining the tracking results with a timer over a preset or manually calibrated area, and the time occupancy is calculated; meanwhile, each vehicle's entry time and the interval to the next vehicle's entry are recorded, from which traffic flow parameters such as headway are calculated.
In terms of accurate video-based detection of traffic flow parameters, a sparse-feature-based traffic flow detection algorithm is proposed on top of the deep learning model for traffic target detection and recognition, as shown in fig. 9; the flow is as follows:
a. First, a Gaussian mixture background model is constructed to extract the moving targets from the traffic video, and the targets' scale-invariant features are processed by sparse coding to obtain sparse features;
b. then the traffic flow parameters are computed through dimension reduction of the sparse features by max pooling, training of a linear support vector machine, and removal of misjudged samples by the background modeling method, which improves the accuracy and anti-interference capability of traffic flow parameter detection.
c. The conversion between video coordinates and real coordinates is computed by a camera self-calibration method based on vanishing-point detection, to measure spatial attributes such as road length and vehicle displacement.
d. Vehicles entering the picture are tracked by a kernelized correlation filter tracking algorithm combined with the vehicle target detection algorithm.
e. Combining the vehicle tracking results with a timer, vehicles entering the picture are timed, the time occupancy is calculated, and, together with the spatial attributes, the traffic flow parameters are computed: space occupancy, average inter-vehicle distance, time occupancy, average speed, headway, and vehicle density;
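The headway computation of step e can be sketched from the recorded entry times (illustrative function names; the patent gives no code):

```python
def headways(entry_times):
    """Intervals between successive vehicle fronts crossing the calibrated section."""
    return [b - a for a, b in zip(entry_times, entry_times[1:])]

def mean_headway(entry_times):
    """Average headway over the observation period, in the entry times' unit."""
    h = headways(entry_times)
    return sum(h) / len(h)
```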
Accurate sparse-feature-based detection of traffic flow parameters can be divided into three modules.
(1) The preprocessing module treats the traffic monitoring video as an image sequence, constructs a background image with the Gaussian mixture background modeling method, and extracts the moving-target foreground image;
(2) the feature extraction and vehicle type classification module combines SIFT features with sparse coding to obtain the image's sparse features and reduces their dimensionality by max pooling; the sparse features are used to train the parameters of a linear SVM classifier, the robustness of SIFT features suppresses interference from illumination changes, scale transformations, and the like, and sparse coding extracts deeper image features, yielding a better image representation model;
(3) the traffic flow information calculation module computes traffic flow parameters such as flow and vehicle type information from the classification results given by the classification module, and computes road occupancy or traffic density by combining prior knowledge such as vehicle body length and width.
1 Gaussian mixture background model
In traffic video images, the camera position is fixed and the background is relatively stable, so the moving targets are extracted with a computationally efficient Gaussian background modeling method. When a new sample arrives, its Mahalanobis distance to the current background is computed: if the distance is large, the pixel is probably foreground (FG) and its weight is small; otherwise it is probably background (BG) and its weight is large. The mean and variance of the GMM are updated continuously, and the B components most important to the background model are selected from the M Gaussian components to obtain

$$p(x \mid BG) \approx \sum_{m=1}^{B} a_m\, \mathcal{N}\!\left( x;\, \mu_m,\, \hat{\sigma}_m^{2} I \right) \qquad (1)$$

where $a_m$ denotes the weight of the m-th Gaussian distribution in the mixture, $\mu_1, \dots, \mu_M$ the estimates of the means, and $\hat{\sigma}_m^{2}$ the estimates of the variances. Formula (2) gives the criterion for a pixel $x_t$ to belong to the background:

$$p(BG \mid x_t) > p(FG \mid x_t) \qquad (2)$$
According to the Bayes formula, the criterion can be further converted into:

$$p(x_t \mid BG) > c_{th} \qquad (3)$$

where the threshold $c_{th}$ is:

$$c_{th} = \frac{p(x_t \mid FG)\, p(FG)}{p(BG)} \qquad (4)$$
during specific implementation, the GMM model is constructed by using an OpenCV _ MOG2 function, and vehicle target extraction in a traffic video is realized.
2 sparse feature extraction
Scale Invariant Feature Transform (SIFT) is a descriptor for image processing that extracts local features of an image and is highly robust to geometric transformations such as translation, rotation, and scaling.
L(x, y, σ) is defined as the convolution of a scale-variable Gaussian G(x, y, σ) with the original image I(x, y); the scale space is constructed using the Difference of Gaussians (DoG). The algorithm describes feature points with gradient histograms: the magnitude m(x, y) and direction θ(x, y) are computed from pixel differences, and the histogram peak gives the feature point's dominant orientation, making the descriptor invariant to image rotation. Taking 4 × 4 pixels as an image block, gradient histograms are computed for the 4 × 4 blocks around a feature point and weighted with a Gaussian fall-off function, finally yielding a 128-dimensional feature descriptor vector.
Sparse coding obtains a set of over-complete basis vectors by training on the low-level features of the image, further abstracting the image, completing feature selection automatically, avoiding over-fitting, and yielding a better image representation model.
Pooling extracts summary statistics of an image; it reduces the dimensionality of the sparse coding result, lowers the training difficulty of the classifier, and avoids over-fitting. Common pooling methods include max pooling and average pooling; this project adopts max pooling. Taking a 256 × 256 pixel image as an example, with SIFT feature image blocks of 16 × 16 pixels, a step length of 6 pixels, and 1024 sparse-coding basis vectors, the matching operation is performed 40 times horizontally and 40 times vertically across the image, so the final output coding dimension is 1024 × 40 × 40 = 1,638,400.
3 SVM classifier parameter training
The SVM is a common classifier in the field of pattern recognition, and has various improved algorithms, such as an SVM algorithm based on clustering, a Dix _ SVM algorithm, an MG-SVM algorithm and the like. In order to improve the training efficiency and ensure the real-time performance of the system, a linear SVM classifier is adopted to realize target classification.
Definition I { (x) i ,y i ) 1, …, n is a set of n input data points, x denotes the input variable, y denotes the target value, y ∈ {1, -1} in two classes of problems. The classification function is defined as
y=w T φ(x)+b (5)
Where phi (x) represents the mapping from the input space to the higher dimensional space, the purpose of the training is to find the appropriate w, based on the distance between the two classes
The constraint condition with the maximum geometric interval can obtain a solving method of w. The method comprises the steps of introducing a Lagrangian multiplier, simplifying the convex optimization problem into a quadratic optimization problem of a vector w, converting the quadratic optimization problem into a dual problem according to a KKT condition, finally using a linear kernel function for quickly calculating an inner product of two vectors after being mapped to a high-dimensional space, quickly solving the value of the Lagrangian multiplier according to a Sequence Minimum Optimization (SMO), and finally obtaining a decision function in the form of
f(x) = sgn( Σ_{i=1}^{n} α_i y_i (x_i · x) + b )
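As an illustrative sketch (not the SMO dual solver described above), primal subgradient descent on the hinge loss yields the same kind of linear decision rule sgn(w·x + b); the toy data below are hypothetical:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=300):
    """Primal subgradient descent on the hinge loss.

    Illustrative stand-in for the SMO dual solver in the text;
    it produces the same linear decision rule sgn(w.x + b).
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (dot(w, xi) + b) < 1:    # inside the margin: hinge gradient
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                            # outside: only regularization shrinks w
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if dot(w, x) + b >= 0 else -1

# toy linearly separable data, labels in {1, -1}
X = [[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, xi) for xi in X])  # should recover the labels
```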
4 model training and traffic parameter calculation
The test video data are 1-hour MPEG-2 surveillance streams selected from each of three sections of an expressway; the data set includes misjudged images caused by interference and low-resolution vehicle images. Part of the samples are used to train the SVM classifier, and the remaining samples are used to test the performance of the algorithm.
The method for calculating the traffic flow parameters required by the system comprises the following steps:
1 Cross-section flow
Statistically count the vehicles passing the calibrated position across the whole cross-section.
2 Lane-divided flow
Statistically count the vehicles passing the calibrated position in each individual lane.
3 mean time occupancy
Road occupancy is an important index for judging whether a road is fully utilized; the road utilization rate is the ratio of the amount of road used by traffic participants to the total amount of road in a region at a specific time.
The time occupancy describes the percentage of the observation period during which vehicles cumulatively occupy a cross-section.
R_t = (Σ_{i=1}^{n} t_i / t_T) × 100%
Wherein: r t Is the lane time occupancy; t is t T The total observation time; t is t i The occupation time of the ith vehicle; n is the number of vehicles on the road section;
4 mean space occupancy
Space occupancy describes the ratio, at a specific time, of the road area covered by the ground projections of the vehicles in a particular region to the total road area of that region.
R_s = (Σ_{i=1}^{n} L_i / L) × 100%
Wherein: r s Is the lane space occupancy;
l is the total length of the observation road section;
L i is the length of the ith vehicle;
n is the number of vehicles on the road segment.
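Analogously, the space occupancy can be computed from the vehicle lengths; the lengths and section length below are hypothetical:

```python
def space_occupancy(vehicle_lengths, section_length):
    """Lane space occupancy R_s: share of the observed section length
    covered by the ground projections of the vehicles, in percent."""
    return 100.0 * sum(vehicle_lengths) / section_length

# hypothetical data: vehicle lengths in metres on a 500 m section
rs = space_occupancy([4.5, 12.0, 4.2, 5.1], 500.0)
print(round(rs, 2))  # 5.16
```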
5 average headway
The average headway is the time interval between the front ends of two consecutive vehicles in a queue travelling on the same lane passing a given cross-section.
6 average inter-vehicle distance
The average inter-vehicle distance is calculated from the vehicle density according to the formula:
s = 1000 / K, where s is the average inter-vehicle distance (m) and K is the vehicle density (veh/km).
7 Density of
Identify the vehicles in the frame, count all of them, and divide by the number of lanes multiplied by the calibrated actual road length (km).
K = N / (m × L), where K is the density (veh/km), N is the vehicle count in the frame, m is the number of lanes, and L is the calibrated actual road length (km).
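The density and the average inter-vehicle distance derived from it can be sketched together; the vehicle count and calibrated length below are hypothetical, and the spacing formula follows the reconstruction s = 1000 / K:

```python
def density(vehicle_count, lanes, road_length_km):
    """Density K in veh/km per lane: vehicles counted in the frame
    divided by (number of lanes x calibrated road length in km)."""
    return vehicle_count / (lanes * road_length_km)

def mean_spacing_m(k):
    """Average inter-vehicle distance (m) from density K (veh/km),
    under the reconstruction s = 1000 / K."""
    return 1000.0 / k

k = density(36, 3, 0.5)  # 36 vehicles, 3 lanes, 500 m calibrated length
print(k, round(mean_spacing_m(k), 2))  # 24.0 41.67
```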
8 mean velocity
The speed of each vehicle is calculated first, and the average is then taken. The speed calculation requires a coordinate transformation: the pixel coordinates obtained from the video are converted into world coordinates, the displacement is calculated from the coordinate difference between two frames, and dividing the displacement by the time difference between the two frames yields the speed.
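A minimal sketch of the speed step, assuming the pixel-to-world transformation (e.g. a planar homography) has already been applied and hypothetical world coordinates are given:

```python
import math

def speed_kmh(p1, p2, dt_s):
    """Vehicle speed (km/h) from two world-coordinate positions in
    metres sampled dt_s seconds apart: displacement / time difference."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / dt_s * 3.6

# hypothetical positions already transformed from pixel to world
# coordinates; the pixel->world conversion itself is omitted here
v = speed_kmh((0.0, 0.0), (3.0, 4.0), 0.4)  # 5 m travelled in 0.4 s
print(v)  # 45.0
```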
The dynamic traffic flow detection algorithm is based on sparse features. On top of a Gaussian mixture background model (GMM), moving target images containing vehicles and non-vehicles are extracted; SIFT features are computed to improve robustness to illumination changes and geometric transformations of the target image; the SIFT features are sparsely coded to obtain sparse features, and the pooled sparse features are used to train a support vector machine (SVM), realizing vehicle type classification, removing misjudged images, and computing more accurate traffic flow parameters. Tests show that the scheme supports accurate video-based traffic flow detection: cross-section flow accuracy above 97%, lane-divided flow above 95%, average time occupancy above 97%, average headway above 95%, average inter-vehicle distance above 95%, density above 90%, average speed above 90%, and average space occupancy above 95%.
Specifically, S4 specifically includes:
intelligent traffic simulation analysis: simulating characteristic changes and situation changes of the traffic flow in a certain future time and space by a simulation method, thereby providing quantitative data for predicting the future change trend of the traffic flow;
analyzing a trend prediction model: the traffic situation is predicted through a trend prediction model based on the traffic flow statistical analysis results, internet-assessed congestion data, and intelligent traffic simulation data, and day-level predictions of traffic flow, traffic trend, and congestion trend are calculated and output;
short-term prediction model analysis: based on historical data and a neural network prediction method, a short-time traffic prediction model conforming to the real scene is established, and hour-level traffic prediction is performed.
In this embodiment, a fast and accurate prediction method is provided, which includes: constructing feature values, with indexes such as weather, time period, and holidays as features and the congestion degree as the label, to build a data set; training a random forest, a support vector machine, a neural network, and a logistic regression model on the data set D respectively, and obtaining each model's prediction accuracy for the traffic intersection congestion degree by cross validation; normalizing the cross-validation accuracies of the models to obtain each model's weight; and adopting a voting method in model fusion to compute the weighted sum of the models' results as the final prediction of the traffic intersection congestion degree.
The specific scheme of the multi-model fusion prediction method comprises the following steps of S1-S7:
s1, collecting intersection traffic flow data, dividing the congestion degree grade of a traffic intersection and constructing a data set D;
a) Traffic flow statistics are carried out at traffic intersections equipped with cameras. A Gaussian mixture model is adopted to establish the background model, background subtraction extracts the foreground, morphological processing yields the moving vehicles, a multiple-instance learning method tracks the targets, and OpenCV is used to count the traffic flow at a specific intersection.
b) Feature selection. Feature f1 is the time period: congestion differs greatly across time periods (peak commute traffic far exceeds midday traffic), so the 24 hours of a day are divided into half-hour slots starting from 00:00, e.g. 01:30, 02:00, 02:30. Feature f2 is the weather within the half hour of flow statistics, divided into 7 conditions: sunny, cloudy, light rain, heavy rain, light snow, heavy snow, and fog. Feature f3 is whether it is a working day (yes/no). Feature f4 is whether it is a holiday (yes/no). Feature f5 is the number of lanes: 1, 2, 3, 4 … n. Feature f6 is whether there is a subway station within 1 km of the intersection (yes/no). Feature f7 is whether there is a bus stop within 1 km of the intersection (yes/no). Feature f8 is the district, e.g. Haidian District or Changping District in Beijing: differences in living, educational, and commercial resources make population density uneven across districts, and since the population density of Haidian is higher than that of Changping, traffic congestion in Haidian is generally more severe.
c) The average traffic flow per lane N/30 min at the intersection is counted for each half hour. Five thresholds N1 < N2 < N3 < N4 < N5 are set: when N < N1, the congestion degree of the traffic intersection is S1 (free-flowing); when N1 < N < N2, S2 (relatively smooth); when N2 < N < N3, S3 (normal); when N3 < N < N4, S4 (relatively congested); when N4 < N < N5, S5 (congested); when N > N5, S6 (severely congested). The data set D is thus created.
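The thresholding in step c) can be sketched as follows; the threshold values N1..N5 are hypothetical, and the convention for N exactly equal to a threshold (left open in the text) here assigns the lower grade:

```python
import bisect

LEVELS = ["S1 free-flowing", "S2 relatively smooth", "S3 normal",
          "S4 relatively congested", "S5 congested", "S6 severely congested"]

def congestion_level(n_per_30min, thresholds):
    """Map the per-lane half-hour flow N onto the six grades via the
    five thresholds N1..N5; a value equal to a threshold falls into
    the lower grade (bisect_left semantics)."""
    return LEVELS[bisect.bisect_left(thresholds, n_per_30min)]

T = [100, 200, 300, 400, 500]  # hypothetical N1..N5 (veh/lane/30 min)
print(congestion_level(50, T))   # S1 free-flowing
print(congestion_level(350, T))  # S4 relatively congested
print(congestion_level(600, T))  # S6 severely congested
```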
S2 constructs a logistic regression model.
a) Feature engineering. Discrete features such as time period, weather, working day, holiday, subway, bus, and district are encoded by converting each feature into a one-hot code, representing each explanatory variable's feature by binary numbers. Continuous features such as the number of lanes are normalized; since the number of lanes is mainly concentrated on the values 2, 3, and 4, the min-max normalization formula is adopted:
x* = (x − min) / (max − min)
wherein max is the maximum number of lanes, min is the minimum number of lanes, x is the original value, and x* is the normalized value. The transformed features are concatenated into the feature vector [f1, f2, …, fn].
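The one-hot encoding and min-max normalization of step a) can be sketched as follows (the weather category list and lane range are illustrative):

```python
def one_hot(value, categories):
    """Encode a discrete feature as a one-hot binary vector."""
    return [1 if value == c else 0 for c in categories]

def min_max(x, lo, hi):
    """Min-max normalization: x* = (x - min) / (max - min)."""
    return (x - lo) / (hi - lo)

WEATHER = ["sunny", "cloudy", "light rain", "heavy rain",
           "light snow", "heavy snow", "fog"]

# hypothetical sample: cloudy weather, 3 lanes with lane counts in 1..6
fv = one_hot("cloudy", WEATHER) + [min_max(3, 1, 6)]
print(fv)  # [0, 1, 0, 0, 0, 0, 0, 0.4]
```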
b) Logistic regression is a machine learning algorithm that takes the maximum-likelihood function as the loss function and solves for the model parameters by gradient descent. In this scheme, grid search is first adopted to search jointly over regularization methods such as L1 and L2 and optimization algorithms such as SGD, RMSProp, and Adam to obtain the optimal parameter combination.
c) A 5-fold cross validation algorithm is adopted: the data set D is divided evenly into 5 parts D1, D2, D3, D4, D5; each time one part serves as the test set and the other 4 as the training set, and the prediction accuracy on the test set is obtained. After 5 rounds of cross validation, the accuracies on the 5 different test sets are averaged to obtain the mean accuracy C1 of traffic intersection congestion prediction for the logistic regression model.
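The 5-fold cross-validation skeleton can be sketched independently of the model; `accuracy_fn` is a placeholder for any train-then-score routine:

```python
def k_fold_indices(n, k=5):
    """Split indices 0..n-1 into k roughly equal contiguous folds."""
    fold = n // k
    return [list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
            for i in range(k)]

def cross_validate(n, k, accuracy_fn):
    """Mean accuracy over k folds; each round holds one fold out as
    the test set and trains on the remaining indices."""
    folds = k_fold_indices(n, k)
    all_idx = set(range(n))
    scores = []
    for test in folds:
        train = sorted(all_idx - set(test))
        scores.append(accuracy_fn(train, test))
    return sum(scores) / k

# stand-in scorer so the skeleton runs without a real model
c1 = cross_validate(100, 5, lambda train, test: 0.9)
print(c1)  # mean of the five fold scores
```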
S3 constructs a support vector machine model.
A support vector machine (SVM) uses the hinge loss function to compute the empirical risk and adds a regularization term to the solution system to optimize the structural risk; it is a classifier with sparsity and robustness. The SVM finds hyperplanes in space that separate the different classes of data while keeping the data points as far from the hyperplane as possible. For linearly inseparable data, the SVM maps the original data into a higher-dimensional space through different kernel functions, so that data that are linearly inseparable in the original low-dimensional space become linearly separable in the high-dimensional space.
Since the SVM is a classification model based on distance measurement, it can be trained using the feature vector obtained in S2. Grid search is first used to find the optimal parameters of the support vector machine model, such as the penalty factor and the kernel type. With the optimal parameters, the same cross-validation method as in S2 yields the mean accuracy C2 of traffic intersection congestion prediction for the support vector machine model.
S4, constructing a random forest model.
a) Feature extraction. Random forest adopts the Bagging idea of ensemble learning with decision trees as base learners; since a decision tree is a probability-based model, discrete features need not be converted into one-hot form, and continuous features such as the number of lanes need not be normalized. For the weather feature, different conditions are coded 1, 2, 3 …. Feature f2 (time period) divides the day into half-hour slots, so time periods are coded 1, 2, 3 … 48. Working day, holiday, subway station, and bus stop features are binary coded: 1 if present, 0 otherwise. All quantized data are then concatenated.
b) m training sets are generated from the extracted features by random sampling; a classification and regression tree is then built for each training set, trained on randomly selected features of the data set. A classification regression tree is a binary tree in which each node is a condition on one feature, splitting the data set in two according to the response variable. The optimal condition at each node is determined by the Gini coefficient: the larger the Gini coefficient, the worse the split at that node.
c) And constructing m classification regression trees through m data sets sampled randomly, and then obtaining a traffic intersection congestion degree prediction result of the random forest model by adopting an average voting method.
d) And (3) obtaining the average accuracy C3 of the traffic intersection congestion degree prediction of 5 times of cross validation on the random forest model by adopting the same cross validation method as S2.
S5 constructs an extreme gradient boosting tree (XGBoost) model.
a) XGBoost adopts the Boosting idea of ensemble learning, using CART trees as base learners: a first tree is generated and then pruned. The loss function consists of a logarithmic loss term and a regularization term; the objective function is optimized to fit the data continually, while the regularization term penalizes overly complex models to avoid overfitting.
b) The negative gradient of the loss function at the output of the previous CART tree is calculated as an approximation of the loss in the current round and used as the training label for the next CART tree.
c) New CART trees are generated iteratively through steps a) and b), and the learner is updated additively.
d) The average accuracy rate C4 of the traffic intersection congestion degree prediction of 5 times of cross validation on the XGboost model is obtained by adopting the same cross validation method as S2.
S6 constructs a neural network model.
And (3) obtaining the average accuracy rate C5 of the traffic intersection congestion degree prediction of 5 times of cross validation on the deep neural network model by adopting the same cross validation method as S2.
S7 fuses the models.
a) The normalization coefficient C is obtained from the cross-validation accuracies C1, C2, C3, C4, and C5 of the five models:
C = Σ_{t=1}^{5} C_t
b) The weight α_t of each model is obtained according to its accuracy:
α_t = C_t / C
c) The models' predictions of the traffic intersection congestion degree are weighted and summed to obtain the final fused prediction:
S = Σ_{t=1}^{5} α_t g_t
wherein g_t is the prediction result of the t-th model, and S is the weighted sum of all model results.
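The fusion formulas a)–c) can be sketched end to end; the accuracies C1..C5 and per-model grade predictions g_t below are hypothetical:

```python
def fuse(accuracies, predictions):
    """Accuracy-weighted voting: C = sum(C_t), alpha_t = C_t / C,
    S = sum(alpha_t * g_t)."""
    total = sum(accuracies)                    # normalization coefficient C
    weights = [c / total for c in accuracies]  # per-model weight alpha_t
    return sum(a * g for a, g in zip(weights, predictions))

# hypothetical cross-validation accuracies C1..C5 and per-model
# congestion-grade predictions g_t (numeric grade indices 1..6)
C = [0.82, 0.85, 0.88, 0.90, 0.80]
g = [4, 4, 5, 4, 3]
S = fuse(C, g)
print(round(S, 3))  # 4.019
```

The fused value S can then be rounded to the nearest grade index to report a discrete congestion level.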
In the aspect of traffic state prediction, the innovations are as follows: the congestion degree of the corresponding traffic intersection is predicted from multiple structured features, with low equipment requirements and high speed. The congestion degree over longer horizons can also be predicted; unstable models are used as base learners for training, and ensemble learning and model fusion finally reduce the bias and variance of the model, guaranteeing the generalization ability of the model's predictions in real scenes. Tests show that the method supports predicting macroscopic traffic patterns from detection data with a prediction accuracy above 85%, and supports optimizing the cycle and release times of the traffic lights at each intersection according to real-time traffic flow information.
As shown in fig. 1, the intelligent traffic prediction and decision system for multi-source information fusion comprises a multi-source information perception system 1; a multi-source information fusion system 2; a traffic situation analysis system 3; a traffic situation prediction system 4; a traffic control decision system 5; a road traffic control system 6; a traffic control evaluation system 7; the traffic situation prediction system is also connected with the road traffic control system, and the traffic control evaluation system is respectively connected with the traffic control decision system and the road traffic control system.
The multi-source information perception system 1 comprises a static traffic information acquisition module and a dynamic traffic information acquisition module. The static traffic information mainly includes some relatively fixed information related to road traffic planning and management, which does not change much in a short period of time. The dynamic traffic information mainly refers to road traffic real-time acquisition information provided by various detection devices, and traffic information which is manually reported and observed. Dynamic traffic information acquisition is the key point of the multi-source information perception system 1, and mainly comprises traffic visual perception, microwave perception, GPS perception, cellular network perception and the like.
Wherein, traffic visual perception: the traffic visual perception detects, classifies and tracks motor vehicles at the intersection through real-time video processing of the intersection camera, and depicts the running state of vehicles participating in traffic at the intersection.
And (3) microwave sensing: the microwave sensing utilizes the radar linear frequency modulation technical principle, and realizes the acquisition of traffic information such as vehicle speed, vehicle body length, vehicle flow, lane occupancy and the like by sensing the reflected microwave signals.
GPS perception: the GPS-based traffic information sensing technology records three-dimensional position coordinates and time information of a vehicle at certain sampling intervals by equipping a vehicle with a GPS receiving device.
Cellular network awareness: the cellular network perception is to determine the position coordinate information of the mobile phone by utilizing the mutual relation between the mobile phone and the base station, and estimate the travel and the vehicle speed through path matching. The technology is complementary with GPS perception, and more accurate positioning is realized.
The dynamic traffic information acquisition module comprises a traffic visual perception module, a microwave perception module, a GPS perception module and a cellular network perception module.
The multi-source information fusion system 2 comprises a traffic flow video fusion module, a traffic detection device data fusion module, a multi-source data traffic index/index fusion module and a thunder-vision fusion module.
The traffic situation analysis system 3 comprises a traffic flow characteristic analysis module, a traffic flow situation analysis module and a traffic jam cause analysis module.
The traffic situation prediction system 4 comprises an intelligent traffic simulation analysis module, a trend prediction model analysis module and a short-time prediction model analysis module.
The traffic control decision system 5 comprises a big data cleaning, mining and analysis module, a vehicle violation behavior analysis module and an urban traffic analysis module; the big data cleaning, mining and analysis module is connected with the vehicle violation behavior analysis module and the urban traffic analysis module respectively. Intelligent decision-making has long been a research focus in the field of urban intelligent traffic: by comprehensively sensing the running state of urban roads and collecting data, big data analysis methods realize the analysis of vehicle behavior and road traffic. Vehicle behavior analysis adopts big data methods to quickly judge dangerous behaviors such as a driver not wearing a seat belt or using a mobile phone, and different behavior analysis models enable rapid analysis of vehicle behavior. Road traffic analysis adopts a big data system to clean and mine various traffic data, realizing comprehensive, deep and advanced intelligent applications of massive data, revealing the regularities of road transportation, providing a quantitative basis for rationally formulating transportation management strategies, and improving decision management capability.
The road traffic control system 6 links the intelligent traffic software system with the road traffic hardware facilities, and realizes real-time control of the road traffic by traffic management departments. In the road traffic control system 6, the internet of things technology, the vehicle-road cooperation technology and the like are mainly adopted between the software system and the hardware facilities.
The technology of the Internet of things comprises the following steps: the technology of the internet of things can effectively coordinate and control traffic facilities and share traffic resources. The intelligent traffic system is provided with a plurality of Internet of things devices including radio frequency identification devices, traffic signal lamps and the like, provides a series of important functions including traffic information management and control, traffic facility coordination control and the like, and provides high-quality and high-efficiency intelligent services for traffic management and control.
Vehicle-road cooperation technology: by means of advanced wireless communication and new-generation internet technologies, vehicles exchange dynamic real-time information with one another in all directions; on the basis of full-time-and-space dynamic traffic information collection and fusion, active safety control of vehicles and coordinated road management are carried out, achieving effective coordination among people, vehicles and roads, guaranteeing traffic safety, improving vehicle passing efficiency, and realizing a safe, efficient and environmentally friendly system.
As shown in fig. 2, the urban traffic analysis module includes a traffic jam analysis module, a traffic dispersion module, a command scheduling module, a traffic information publishing module, and a traffic organization optimization decision module; the traffic jam analysis module is respectively connected with the input end of the traffic dispersion module, the input end of the command scheduling module and the traffic organization optimization decision-making module, and the traffic information publishing module is respectively connected with the output end of the traffic dispersion module and the output end of the command scheduling module.
A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform an intelligent traffic prediction and decision method.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. The intelligent traffic prediction and decision method is characterized by comprising the following steps:
s1, intelligently sensing and monitoring traffic targets, including static traffic target attribute identification and dynamic traffic behavior identification;
static traffic target attribute identification includes:
detecting a traffic target: the target detection model adopts YOLOv3, which uses a Darknet-53-based network structure and performs object detection through shortcut connections followed by multi-scale feature fusion;
and (3) traffic target multi-attribute identification: adopting a multi-task fine-grained traffic target recognition network to perform vehicle type recognition, non-motor vehicle recognition and pedestrian recognition;
dynamic traffic behavior recognition includes: the spatial-temporal characteristics of a video sequence are automatically extracted by using a three-dimensional convolutional neural network, and a deep layer model integrating characteristic extraction and classification identification is trained in a supervision training mode, so that whether corresponding traffic behaviors exist in video segments or not is directly judged by using the deep layer model;
s2, performing feature extraction on the multi-source information data obtained by identification, and then fusing; the method comprises the following steps:
analyzing traffic flow characteristics: carrying out time-dimension traffic flow characteristic analysis and extraction for the morning peak, evening peak and off-peak periods on working days and weekends, reflecting the traffic flow characteristic change trends of different intersections, road sections and trunk lines in a city in specific time periods, and forming a trend curve for each date;
analyzing traffic flow situation: based on the internet data and the video detection data, performing regional traffic operation analysis, road section traffic operation analysis, intersection traffic operation analysis, traffic flow statistical analysis and traffic flow composition structure statistical analysis; acquiring real-time inquiry traffic jam index, road network average speed and jam mileage data information;
analysis of causes of traffic congestion: analyzing the time-space information of the traffic jam event and the causative event based on the information of the congested frequent road section, the frequent jam time period and the daily average jam time period, associating the suspected congestion causative event for the traffic jam event, and sequencing confidence degrees of the associated plurality of causative events;
s3, processing the fused output data and the map output data to form a quantized, spatio-temporal multi-dimensional traffic index and feature set, and analyzing the traffic laws of a specific space and time to obtain the features of the traffic flow in different spatio-temporal dimensions; the method comprises the following steps:
analyzing traffic flow characteristics: carrying out time-dimension traffic flow characteristic analysis and extraction for the morning peak, evening peak and off-peak periods on working days and weekends, reflecting the traffic flow characteristic change trends of different intersections, road sections and trunk lines in a city in specific time periods, and forming a trend curve for each date;
analyzing traffic flow situation: based on the internet data and the video detection data, performing regional traffic operation analysis, road section traffic operation analysis, intersection traffic operation analysis, traffic flow statistical analysis and traffic flow composition structure statistical analysis; acquiring real-time inquiry traffic jam index, road network average speed and jam mileage data information;
analysis of causes of traffic congestion: analyzing the time-space information of the traffic jam event and the causative event based on the information of the congested frequent road section, the frequent jam time period and the daily average jam time period, associating the suspected congestion causative event for the traffic jam event, and sequencing confidence degrees of the associated plurality of causative events;
s4, carrying out traffic situation simulation and short-term trend prediction on a specific traffic flow development state based on the features and real-time data of the traffic flow in different spatio-temporal dimensions; the method comprises the following steps:
intelligent traffic simulation analysis: simulating characteristic changes and situation changes of the traffic flow in a certain future time and space by a simulation method, thereby providing quantitative data for predicting the future change trend of the traffic flow;
analyzing a trend prediction model: the traffic situation is predicted through a trend prediction model based on the traffic flow statistical analysis results, internet-assessed congestion data, and intelligent traffic simulation data, and day-level predictions of traffic flow, traffic trend, and congestion trend are calculated and output;
short-term prediction model analysis: establishing a short-time traffic prediction model according with a real scene based on historical data and a neural network prediction method, and performing small-level traffic prediction;
s5, traffic dispersion and command scheduling are carried out according to the traffic situation simulation and the short-term trend prediction information; and traffic behaviors are detected and recorded according to the features and the real-time data in different spatio-temporal dimensions, and warnings are then issued.
2. The intelligent traffic prediction and decision-making method according to claim 1, further comprising:
and S6, managing and controlling road vehicles and equipment according to the traffic situation simulation and the short-term trend prediction information.
3. The intelligent traffic prediction and decision-making method according to claim 2, further comprising:
and S7, carrying out traffic flow quantitative evaluation and traffic situation evaluation on the traffic situation data processed in the steps S5 and S6, and judging the implementation effect.
4. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the intelligent traffic prediction and decision method of any of claims 1-3.
CN202210335583.8A 2022-03-31 2022-03-31 Intelligent traffic prediction and decision method and readable storage medium Active CN114639243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210335583.8A CN114639243B (en) 2022-03-31 2022-03-31 Intelligent traffic prediction and decision method and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210335583.8A CN114639243B (en) 2022-03-31 2022-03-31 Intelligent traffic prediction and decision method and readable storage medium

Publications (2)

Publication Number Publication Date
CN114639243A CN114639243A (en) 2022-06-17
CN114639243B true CN114639243B (en) 2022-09-27

Family

ID=81951428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210335583.8A Active CN114639243B (en) 2022-03-31 2022-03-31 Intelligent traffic prediction and decision method and readable storage medium

Country Status (1)

Country Link
CN (1) CN114639243B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115052131A (en) * 2022-06-22 2022-09-13 北京国信互通科技有限公司 Special operation site safety on-line monitoring system based on remote video monitoring
CN115100867B (en) * 2022-07-27 2022-11-29 武汉微晶石科技股份有限公司 Urban intelligent traffic simulation method based on digital twins
CN115499467B (en) * 2022-09-06 2023-07-18 苏州大学 Intelligent network vehicle connection test platform based on digital twinning and building method and system thereof
CN116030637B (en) * 2023-03-28 2023-07-21 南京理工大学 Traffic state prediction integration method
CN116110237B (en) * 2023-04-11 2023-06-20 成都智元汇信息技术股份有限公司 Signal lamp control method, device and medium based on gray Markov chain
CN116189439A (en) * 2023-05-05 2023-05-30 成都市青羊大数据有限责任公司 Urban intelligent management system
CN116597657A (en) * 2023-07-17 2023-08-15 四川省商投信息技术有限责任公司 Urban traffic prediction method, device and medium based on artificial intelligence
CN116935654B (en) * 2023-09-15 2023-12-01 北京安联通科技有限公司 Smart city data analysis method and system based on data distribution value
CN117173913B (en) * 2023-09-18 2024-02-09 日照朝力信息科技有限公司 Traffic control method and system based on traffic flow analysis at different time periods
CN117077042B (en) * 2023-10-17 2024-01-09 北京鑫贝诚科技有限公司 Rural level crossing safety early warning method and system
CN117116065B (en) * 2023-10-23 2024-02-02 宁波宁工交通工程设计咨询有限公司 Intelligent road traffic flow control method and system
CN117292551B (en) * 2023-11-27 2024-02-23 辽宁邮电规划设计院有限公司 Urban traffic situation adjustment system and method based on Internet of things
CN117558132B (en) * 2024-01-11 2024-03-15 北京华创智芯科技有限公司 Traffic management platform data processing method and system based on big data
CN117649632A (en) * 2024-01-29 2024-03-05 杭州感想科技有限公司 Expressway event identification method and device based on multi-source traffic data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9037519B2 (en) * 2012-10-18 2015-05-19 Enjoyor Company Limited Urban traffic state detection based on support vector machine and multilayer perceptron
CN104464321B (en) * 2014-12-17 2017-02-22 合肥革绿信息科技有限公司 Intelligent traffic guidance method based on traffic performance index development trend
CN111462485A (en) * 2020-03-31 2020-07-28 电子科技大学 Traffic intersection congestion prediction method based on machine learning
CN111680745B (en) * 2020-06-08 2021-03-16 青岛大学 Burst congestion judging method and system based on multi-source traffic big data fusion
CN113538898A (en) * 2021-06-04 2021-10-22 南京美慧软件有限公司 Multisource data-based highway congestion management and control system

Also Published As

Publication number Publication date
CN114639243A (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN114639243B (en) Intelligent traffic prediction and decision method and readable storage medium
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
Gkolias et al. Convolutional neural networks for on-street parking space detection in urban networks
Husain et al. Vehicle detection in intelligent transport system under a hazy environment: a survey
Yang et al. Image-based visibility estimation algorithm for intelligent transportation systems
Outay et al. Estimating ambient visibility in the presence of fog: a deep convolutional neural network approach
Abidin et al. A systematic review of machine-vision-based smart parking systems
Wang et al. A traffic prediction model based on multiple factors
Cheng et al. Modeling weather and illuminations in driving views based on big-video mining
Hu et al. Traffic density recognition based on image global texture feature
Ketcham et al. Recognizing the Illegal Parking Patterns of Cars on the Road in Front of the Bus Stop Using the Support Vector Machine
Wang Vehicle image detection method using deep learning in UAV video
Huang et al. A safety vehicle detection mechanism based on YOLOv5
Tituana et al. Vehicle counting using computer vision: A survey
Zhang et al. A front vehicle detection algorithm for intelligent vehicle based on improved gabor filter and SVM
Chen et al. Research on vehicle detection and tracking algorithm for intelligent driving
Sun et al. A practical weather detection method built in the surveillance system currently used to monitor the large-scale freeway in China
Chakraborty et al. MobiSamadhaan—intelligent vision-based smart city solution
Yang Novel traffic sensing using multi-camera car tracking and re-identification (MCCTRI)
Song et al. Method of Vehicle Behavior Analysis for Real-Time Video Streaming Based on Mobilenet-YOLOV4 and ERFNET
Prawinsankar et al. Traffic Congession Detection through Modified Resnet50 and Prediction of Traffic using Clustering
Mahmood et al. Enhanced detection and recognition system for vehicles and drivers using multi-scale retinex guided filter and machine learning
Moayed et al. Surveillance-based Collision-time Analysis of Road-crossing Pedestrians
Eichel et al. Diverse large-scale its dataset created from continuous learning for real-time vehicle detection
Abbas et al. Vision based intelligent traffic light management system using Faster R‐CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant