CN117133131B - Intelligent traffic control system based on ARM technology system - Google Patents

Intelligent traffic control system based on ARM technology system Download PDF

Info

Publication number
CN117133131B
CN117133131B CN202311400573.9A CN202311400573A CN117133131B CN 117133131 B CN117133131 B CN 117133131B CN 202311400573 A CN202311400573 A CN 202311400573A CN 117133131 B CN117133131 B CN 117133131B
Authority
CN
China
Prior art keywords
matrix
traffic
traffic flow
data
arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311400573.9A
Other languages
Chinese (zh)
Other versions
CN117133131A (en
Inventor
马怀清
范婧雅
杜潜
卢凌
王泽方
李强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shucheng Technology Co ltd
Shenzhen Metro Group Co ltd
Original Assignee
Shucheng Technology Co ltd
Shenzhen Metro Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shucheng Technology Co ltd, Shenzhen Metro Group Co ltd filed Critical Shucheng Technology Co ltd
Priority to CN202311400573.9A priority Critical patent/CN117133131B/en
Publication of CN117133131A publication Critical patent/CN117133131A/en
Application granted granted Critical
Publication of CN117133131B publication Critical patent/CN117133131B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0145Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/07Controlling traffic signals
    • G08G1/08Controlling traffic signals according to detected number or speed of vehicles
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968Systems involving transmission of navigation instructions to the vehicle
    • G08G1/096805Systems involving transmission of navigation instructions to the vehicle where the transmitted instructions are used to compute a route
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968Systems involving transmission of navigation instructions to the vehicle
    • G08G1/096833Systems involving transmission of navigation instructions to the vehicle where different aspects are considered when computing the route
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/38Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/44Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Analytical Chemistry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Mathematical Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an intelligent traffic control system based on an ARM technology system, which relates to the technical field of ARM, and comprises: ARM sensor array, ARM sensor fusion processing system and ARM control system; the ARM sensor array is arranged in the target area and comprises a plurality of preset ARM sensors; the ARM sensor fusion system is configured to construct a traffic flow data matrix with a set time period according to the received traffic flow data; after corrosion denoising is carried out on the traffic flow data matrix, feature extraction is carried out, and a traffic flow data feature matrix is obtained; training a deep belief network according to the traffic flow data feature matrix to obtain a traffic flow prediction model; and the ARM control system is configured to control traffic in the central area according to the traffic flow prediction matrix. The invention can intelligently regulate and control traffic flow, effectively relieve congestion, improve road utilization efficiency and promote sustainable development of urban traffic.

Description

Intelligent traffic control system based on ARM technology system
Technical Field
The invention relates to the technical field of ARM, in particular to an intelligent traffic control system based on an ARM technical system.
Background
With the continuous acceleration of global urbanization, traffic jam problems are increasingly serious, and a great challenge is formed for daily life of urban residents and sustainable development of cities. To solve this problem, researchers and city managers in various countries have turned to the use of advanced technical solutions, including intelligent traffic control systems based on the ARM technology system. However, while these systems have met with some success in improving road efficiency and safety, they still face significant technical challenges when processing complex urban traffic flow data.
At present, an intelligent traffic control system based on an ARM technology system is widely applied to the fields of traffic monitoring, vehicle detection, data acquisition, real-time traffic management and the like. These systems typically include a sensor network, a wireless communication module, a data processing unit, and a user interface. ARM architecture has been the core technology of choice for many intelligent transportation systems because of its low power consumption, high performance and cost effectiveness.
Nevertheless, existing intelligent traffic control systems based on ARM rely mostly on traditional data processing and analysis methods such as threshold analysis, time series analysis, etc. In addition, some systems use basic machine learning algorithms such as Support Vector Machines (SVMs) and decision trees to predict traffic flow and identify traffic patterns. However, these methods often lack the ability to process real-time, large-scale, and high-dimensional traffic flow data, especially in complex and dynamic urban environments.
Disclosure of Invention
The invention aims to provide an intelligent traffic control system based on an ARM technology system, wherein sensor arrays based on ARM acquire traffic flow data in real time, and accurate prediction is performed by applying a deep belief network and an extended support vector regression technology, so that traffic flow is intelligently regulated, congestion is effectively relieved, road utilization efficiency is improved, and sustainable development of urban traffic is promoted.
In order to solve the technical problems, the invention provides an intelligent traffic control system based on an ARM technical system, which comprises: ARM sensor array, ARM sensor fusion processing system and ARM control system; the ARM sensor array is arranged in a target area and comprises a plurality of preset ARM sensors, and the target area is divided into a plurality of subareas and a central area; the central area is positioned in the geometric center of all the subareas, each ARM sensor acquires traffic flow data of the subarea where the ARM sensor is positioned according to a set time interval, and the acquired traffic flow data is sent to the ARM sensor fusion system after the set time period; the ARM sensor fusion system is configured to construct a traffic flow data matrix with a set time period according to the received traffic flow data; after corrosion denoising is carried out on the traffic flow data matrix, feature extraction is carried out, and a traffic flow data feature matrix is obtained; training a deep belief network according to the traffic flow data feature matrix; in the training process, continuously carrying out expansion support vector regression processing, and obtaining a traffic flow prediction model after training is completed; predicting the traffic flow of each moment in the future by using a traffic flow prediction model to obtain a traffic flow prediction matrix; and the ARM control system is configured to control traffic in the central area according to the traffic flow prediction matrix.
Further, each sensor is preset with a weight value; the expression of each element in the traffic flow data matrix is:
wherein M (t, i) is an element in the traffic data matrix M, characterizing the original values at time t and sensor i; w (i) is the weight of ARM sensor i, and c (i, t) is the traffic flow data of ARM sensor i at time t; n is the number of ARM sensors, which is equal to the number of subareas.
Further, the following formula is used to etch and denoise the traffic flow data matrix:
wherein,representing values of the traffic flow data matrix at time t and sensor i after the corrosion operation is applied; b is a structural element for defining the shape and size of the etching operation; (x, y) is the coordinates in structural element B; k (t, i) is a corrosion kernel function; the addition is element-by-element multiplication; />Is a corrosion operation.
Further, the method for extracting the characteristics to obtain the characteristic matrix of the traffic flow data comprises the following steps:
defining a dictionary matrix D, wherein each column represents a basis vector, using a sparse coding matrix X to represent a traffic data matrix M, learning X by the following optimization problem:
wherein X is a sparse coding matrix used for representing the linear combination of the traffic flow data matrix M; phi (X) is nonlinear transformation, and the sparse coding matrix X is converted into nonlinear characteristics; Representing a traffic flow data matrix MThe reconstruction errors of the dictionary matrix D and the nonlinear feature phi (X) are measured by using the Frobenius norm; i X I 1 The L1 norm of the sparse coding matrix X is represented by lambda, which is a regularization parameter for controlling sparsity, and the lambda is a set value; by solving the optimization problem, a sparse coding matrix X and a nonlinear characteristic phi (X) are obtained, and a dictionary matrix D and the nonlinear characteristic phi (X) are used for calculating a traffic flow data characteristic matrix F:
F(t,i)=D T Φ(X(t,i));
wherein F (t, i) represents the elements of the characteristic matrix F of the traffic data at time t and sensor i, D T Representing the transpose of the dictionary matrix D.
Further, the process of training a deep belief network based on the traffic data feature matrix includes: initializing the weight and deviation of the deep belief network: w (W) i (0) ,Wherein L is the number of hidden layers of the deep belief network; i is a subscript; w (W) i (0) The weight of the ith hidden layer for the initial deep belief network; />Deviation of the ith hidden layer for the initial deep belief network; using a traffic data feature matrix F as input, and transmitting the traffic data feature matrix F to a deep belief network for forward propagation; in each hidden layer i, calculating the output of the neuron; calculating the output layer output of the deep belief network; the weight and the deviation of the deep belief network are updated to minimize the traffic data prediction error, and the weight and the deviation are updated by using a back propagation algorithm and a loss function, so that training of the deep belief network is completed, and a deep belief network model P is obtained.
Further, in each hidden layer i, the output of the neuron is calculated using the following formula:
wherein a is i Is the input of hidden layer i, h i Is the output of hidden layer i, σ is the activation function; k is training times; the output layer output of the deep belief network is calculated using the following formula:
wherein p=α L+1 And outputting for an output layer of the deep belief network.
Further, the following loss function is defined:
where alpha is a regularization parameter,representing a weight matrix W i (k+1) The square of the Frobenius norm; o (t, i) is the output of the deep belief network model o, representing the prediction of traffic data at time t and sensor location i; h is a L (t, i) is the output of the last hidden layer in the deep belief network, representing the output of the neuron at time t and sensor location i; updating weights and biases using a back propagation algorithm and a minimization of a loss function; and updating the weight and the deviation of the deep belief network to minimize the error of traffic flow data prediction, thereby completing training of the deep belief network and obtaining a deep belief network model P.
Further, in the training process, the deep belief network model P defines an objective function of the extended support vector regression as follows:
Wherein ζ (t, i) and ζ * (t, i) are relaxation variables for tolerating errors in some of the data points; c is regularThe regularization parameters are set values and are used for weighing error terms and regularization terms;the Frobenius norm of the prediction error of the vehicle flow data; sigma (sigma) t,i (ξ(t,i)+ξ * (t, i)) is the sum of the relaxation variables; introducing a kernel function to process a nonlinear relation, and mapping a traffic flow data feature matrix F to a feature space with higher dimension; restating the extended support vector regression by a kernel function as a pair of Lagrangian multiplier l and Berkman multiplier l * In order to obtain a traffic flow prediction model S.
Further, defining a kernel function as phi (F), and mapping the traffic flow data feature matrix F to a feature space with higher dimension; the kernel function is used to represent the inner product:
K(F(t,i),F(t′,i′))=Φ(F(t,i))·Φ(F(t′,i′));
f (t, i) represents the values of the traffic data feature matrix F at the time t and the sensor position i, and F (t ', i') represents the values of the traffic data feature matrix F at the time t 'and the sensor position i'; the extended support vector regression is re-expressed as a pair of Lagrangian multiplier l and Berkman multiplier l by a kernel function using the following formula * In order to obtain a traffic flow prediction model S:
where l (t, i) represents the Lagrangian multiplier at time t and sensor position i; l (L) * (t ', i') represents the berkman multiplier at time t 'and sensor position i'; solving the maximum optimization problem to obtain a traffic flow prediction model S as follows:
S(t,i)=∑ t′,i′ (l(t′,i′)-l * (t′,i′))K(F(t′,i′),F(t,i));
where S (t, i) represents one element value in the traffic flow prediction matrix, representing the predicted value of the traffic flow at time t and sensor position i.
The intelligent traffic control system based on the ARM technology system has the following beneficial effects: first, by deploying ARM-based sensor arrays in critical areas, the system is able to collect traffic data for each sub-area in real-time and efficiently. The sensors have the characteristics of low power consumption and high performance, can stably operate for a long time, and ensure the continuity and reliability of data collection. In addition, due to the high integration of ARM architecture, the sensors can be easily integrated with other systems, and a wider smart city application is realized. In the aspect of data processing, the invention adopts an advanced data fusion technology, and can accurately construct a traffic flow data matrix of each time period. Through the steps of corrosion denoising and feature extraction, the system eliminates noise and redundant information in the original data, and only retains the most valuable features for traffic flow prediction. This not only improves the efficiency of data processing, but also lays a solid foundation for subsequent machine learning steps. The present invention provides a significant breakthrough in traffic prediction through the use of Deep Belief Networks (DBNs) and Extended Support Vector Regression (ESVR). The deep belief network is used as a powerful deep learning model and can process complex nonlinear relations and learn high-level abstract features from data. Whereas ESVR enhances the generalization ability of the model and the fitting ability to nonlinear data by introducing relaxation variables and kernel functions. By combining the two methods, the system can accurately predict the traffic flow at each moment in the future, and greatly improves the predictability and initiative of traffic management. Based on the prediction result, the ARM control system can implement an effective traffic control strategy. For example, it may rationally adjust the timing of traffic lights or rationally program traffic guidance systems to guide vehicles to preferentially select clear road segments. The system can not only reduce common traffic problems such as congestion and exhaust emission, but also quickly clear traffic lines in emergency, thereby providing a quick passage for rescue vehicles.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a method flow diagram of a visual asset management method according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, an intelligent traffic control system based on ARM technology architecture, the system comprising: ARM sensor array, ARM sensor fusion processing system and ARM control system; the ARM sensor array is arranged in a target area and comprises a plurality of preset ARM sensors, and the target area is divided into a plurality of subareas and a central area; the central area is positioned in the geometric center of all the subareas, each ARM sensor acquires traffic flow data of the subarea where the ARM sensor is positioned according to a set time interval, and the acquired traffic flow data is sent to the ARM sensor fusion system after the set time period; the ARM sensor fusion system is configured to construct a traffic flow data matrix with a set time period according to the received traffic flow data; after corrosion denoising is carried out on the traffic flow data matrix, feature extraction is carried out, and a traffic flow data feature matrix is obtained; training a deep belief network according to the traffic flow data feature matrix; in the training process, continuously carrying out expansion support vector regression processing, and obtaining a traffic flow prediction model after training is completed; predicting the traffic flow of each moment in the future by using a traffic flow prediction model to obtain a traffic flow prediction matrix; and the ARM control system is configured to control traffic in the central area according to the traffic flow prediction matrix.
Specifically, the sensor periodically (at set time intervals) transmits the collected data to the ARM sensor fusion processing system. Such periodic transmission can ensure real-time performance of data while reducing the burden of network transmission. Since the individual sensors operate independently and cover different sub-areas, the system enables distributed monitoring of the entire target area. This approach not only improves the coverage of the monitoring, but also improves the fault tolerance of the system, since other sensors can continue to operate even if an individual sensor fails. The ARM architecture is known for its low power consumption, which is very important for sensor networks that require long runs. Low power consumption means that maintenance requirements (e.g., frequency of battery replacement) can be reduced and the overall operating cost of the system reduced. The ARM processor provides enough computing power to process the sensing data, run necessary algorithms and perform effective data transmission, thereby meeting the requirement of real-time monitoring. The ARM architecture has excellent expandability, so that the system can easily add more sensors according to the needs. This flexibility allows the system to accommodate traffic monitoring requirements of varying size and complexity.
The collected data is organized into a matrix, each row representing a particular point in time, and each column representing traffic flow information for a particular sub-area. Such a data structure facilitates a better understanding of traffic patterns at different times and locations. Corrosion is an image processing technique that is commonly used to reduce noise in images. Here, it is used to reduce outliers and noise in the traffic data, such as data skew caused by an emergency or sensor error. This is accomplished by comparing each data point to its neighbors to identify and reject outliers that do not conform to the expected pattern. The denoised data matrix is further analyzed to extract key features, which may include peaks, valleys, trends in traffic flow, etc. These features will be used as inputs to a subsequent machine learning model to help the system predict future traffic flow more accurately.
Deep Belief Networks (DBNs) are a deep learning model that is well suited to identifying and learning complex patterns in data. In this system, the DBN uses the extracted features to learn traffic patterns at different times and locations. Because the DBN can handle multiple levels of data characterization, it is able to capture complex and nonlinear relationships that may exist in traffic data. Once the DBN model is trained and built, extended Support Vector Regression (ESVR) is used to further optimize the performance of the model. Support vector regression is a powerful machine learning algorithm that solves the regression problem, while ESVR is a modified version of it to more effectively deal with the non-linearity problem. Here, ESVR is used to fine tune the output of the DBN to more accurately predict future traffic flow.
When the ARM control system receives the traffic flow prediction matrix obtained through deep belief network analysis and extended support vector regression processing, the ARM control system starts a series of control strategies to manage traffic flow of a target area, particularly a central area. The following is a specific step of traffic control of the central area according to the traffic flow prediction matrix: the system first parses a traffic prediction matrix that includes the predicted traffic for each key point or road segment in each time period in the future. These data points provide a snapshot of the upcoming traffic pattern, allowing the system to anticipate possible congestion points and times. The system sets specific traffic flow thresholds that, once exceeded by the predicted traffic flow, indicate a potential traffic congestion risk. These thresholds are typically based on historical data, road capacity, and current traffic management objectives. Once the system determines the likely high traffic region and time, it uses preset algorithms or rules to formulate a response strategy. These strategies may include changing the green time of traffic lights, implementing lane reversal, setting temporary traffic signs, or limiting the ingress traffic to certain areas. The system can also reallocate resources and priorities in real-time situations, such as providing rapid access to emergency vehicles or adjusting traffic patterns during large activities, in the event of an emergency or special event.
Another example is: the specific process of traffic control of the central area according to the traffic flow prediction matrix is as follows: and analyzing the traffic flow prediction matrix to identify the expected traffic flow of each period. This matrix provides detailed estimates of future traffic flow for the central region, including expected peak periods, low flow periods, etc. Setting a threshold for traffic flow beyond which congestion in the central area may result. For example, more than 10,000 vehicles per hour may lead to traffic congestion. If the prediction matrix shows that the traffic flow for a certain period exceeds a threshold, the ARM control system may take a specific traffic control strategy. The traffic light time of the traffic signal lamp is prolonged or shortened, single and double number traffic restrictions are implemented, certain streets are temporarily closed, temporary bus lines are additionally arranged, and the like. According to the prediction matrix, the traffic flow of the main traffic thoroughfare is adjusted so that traffic is transferred from a busy road to a relatively free road. If a major road is predicted to be severely jammed, public standby detour routes can be informed in advance, and real-time guidance is performed by means of traffic signs, electronic screens and the like. The ARM control system continuously collects real-time traffic data and compares the traffic data with the prediction matrix. If the actual traffic flow has a larger deviation from the forecast, the system can adjust the traffic strategy in real time. Predictive information and traffic advice are published to the public using various communication means (e.g., social media, traffic broadcast, cell phone APP, etc.), encouraging citizens to select public transportation, riding or walking during peak hours. If the prediction matrix indicates that abnormally high traffic flow may occur (e.g., due to a particular activity or incident), the ARM control system may initiate an emergency response mechanism, such as mobilizing police forces, setting temporary traffic flags, and the like.
Example 2: on the basis of the previous embodiment, each sensor is preset with a weight value; the expression of each element in the traffic flow data matrix is:
wherein M (t, i) is an element in the traffic data matrix M, characterizing the original values at time t and sensor i; w (i) is the weight of ARM sensor i, and C (i, t) is the traffic flow data of ARM sensor i at time t; n is the number of ARM sensors, which is equal to the number of subareas.
In particular, in practical applications, not all data captured by the sensors are of equal importance. Some areas may be more busy or have a greater impact on traffic flow. By assigning different weights to each sensor, the system is able to distinguish the importance of these areas, giving them greater consideration in analysis and prediction. The weighting system provides a mechanism by which the impact of individual sensors can be adjusted based on real-time conditions or historical data. For example, during peak hours or special events (such as sporting events or concerts), the weight of a particular region may be temporarily increased to reflect its importance in the overall traffic flow.
Example 3: on the basis of the above embodiment, the following formula is used to perform corrosion denoising on the traffic flow data matrix:
Wherein,representing values of the traffic flow data matrix at time t and sensor i after the corrosion operation is applied; b is a structural element for defining the shape and size of the etching operation; (x, y) is the coordinates in structural element B; k (t, i) is a corrosion kernel function; the addition is element-by-element multiplication; />Is a corrosion operation.
Specifically, M (t, i): this is the original traffic data matrix, which contains traffic data at time t and location i (monitored by sensor i).This is the traffic flow data matrix after the corrosion operation is applied. Corrosion operations can reduce abnormally high flow readings (possibly due to noise orFalse positive causes) to smooth the data and improve its quality. B: this is a so-called structural element that defines the scope and shape of the etching operation. In image processing, the structural elements may be of any shape, such as circular, square or cross-shaped. Here, the shape and size of B determines the number and distribution of adjacent data points affected by the corrosion operation. (x, y): these are coordinates within the structural element B, which represent the relative position of the influence of V. K (t, i): this is a corrosion kernel function that defines how the corrosion operation is applied to each data point. The kernel function may be designed according to the needs of a particular application, e.g. it may weight the contributions of neighboring points, or apply a particular mathematical transformation. The following is true: this is an element-wise multiplication operation. It means that every point within the structural element B is multiplied by the value of the corresponding position of the kernel function K. The nature of the corrosion can thus be customized to more closely suit a particular data characteristic or requirement. / >This is the etching operation itself. Corrosion generally involves calculating a minimum value within the coverage area of a structural element and applying this minimum value to the corresponding location of the output image or matrix. In this scenario, corrosion is used to identify and reduce high peaks that may be the result of noise or other atypical traffic events.
By identifying and reducing abnormally high flow readings, the corroding operation helps to eliminate short term fluctuations in the data (which may be caused by noise or occasional events), thereby making the data smoother and easier to analyze. Corrosion operations make traffic predictions more accurate and reliable by reducing the effects of noise and outliers, especially in complex or crowded traffic environments. Smooth and noise-free data is critical for successful application of machine learning and deep learning algorithms. By preprocessing the data, the corrosion operation provides a more stable basis for more advanced analysis, such as feature extraction and flow prediction.
Example 4: on the basis of the above embodiment, the method for extracting features to obtain the traffic flow data feature matrix includes:
defining a dictionary matrix D, wherein each column represents a basis vector, using a sparse coding matrix X to represent a traffic data matrix M, learning X by the following optimization problem:
Wherein X is a sparse coding matrix used for representing the linear combination of the traffic flow data matrix M; phi (X) is nonlinear transformation, and the sparse coding matrix X is converted into nonlinear characteristics;representing the reconstruction errors of the traffic data matrix M, the dictionary matrix D and the nonlinear feature phi (X), and measuring by using the Frobenius norm; i X I 1 The L1 norm of the sparse coding matrix X is represented by lambda, which is a regularization parameter for controlling sparsity, and the lambda is a set value; by solving the optimization problem, a sparse coding matrix X and a nonlinear characteristic phi (X) are obtained, and a dictionary matrix D and the nonlinear characteristic phi (X) are used for calculating a traffic flow data characteristic matrix F:
F(t,i)=D T Φ(X(t,i));
wherein F (t, i) represents the elements of the characteristic matrix F of the traffic data at time t and sensor i, D T Representing the transpose of the dictionary matrix D.
Specifically, D: this is a so-called dictionary matrix, which contains a set of basis vectors. In the context of sparse coding, a dictionary may be considered a set of atomic elements that may be linearly combined to reconstruct input data. X: this is a sparse coding matrix that represents a linear combination of the traffic data matrix M. Each element x i,j Representing the basis vector D j And input feature M i Is a degree of association of (a) with each other. Φ (X): this represents a nonlinear transformation that converts the sparse coding matrix X into a set of nonlinear features. This means that the system considers not only linear relationships but also more complex nonlinear interactions that may affect traffic flow data. This represents the reconstruction error, i.e. the difference between the original traffic data matrix M and the data reconstructed by the dictionary matrix D and the non-linear features Φ (X). The Frobenius norm is a norm used in matrix analysis to quantify the square root of the sum of squares of matrix elements, providing a measure of reconstruction quality. I X I 1 : this is L 1 A norm (also known as manhattan norm) that measures the sparsity of the sparse coding matrix X, i.e., the number of non-zero elements in X. L (L) 1 Norms tend to produce sparse solutions, which are beneficial in feature selection and dimension reduction. Lambda: this is a regularization parameter used to control the degree of sparsity. By adjusting λ, the system can find a balance between exact reconstruction and sparsity. F (t, i) =d T Φ (X (t, i)): this represents how to calculate the traffic data feature matrix F. Once the sparse coding matrix X and its nonlinear features Φ (X) are obtained, they can be multiplied by the transpose of the dictionary matrix D to obtain the final feature matrix F, which contains the important description of the original traffic data.
By sparse coding, the system can represent complex traffic data with fewer basis vectors or features, greatly reducing the necessary memory space and computational complexity. Sparse coding helps to further remove noise and extract the most relevant features, which is critical for efficient traffic management and predictive analysis. By introducing a nonlinear transformation Φ (X), this approach is able to capture complex nonlinear dynamics that may affect traffic flow, which is not possible with conventional linear models. By adjusting the regularization parameter lambda and the dictionary matrix D, the system can flexibly adapt to different traffic conditions and environments, so that more accurate traffic flow prediction is realized.
The objective function represents an optimization problem aimed at finding a sparse coding matrix X that is capable of reconstructing the input data matrix M with minimal active (non-zero) components. It is composed of two parts: reconstruction errors and sparsity penalties. Reconstruction errorsThis part measures the dictionary D and the non-dictionaryThe difference between the reconstructed data of Φ (X) and the original traffic data matrix M is linearly transformed. The Frobenius norm gives the sum of squares of this difference and is a commonly used matrix norm. Sparsity penalty (|) X| 1 ):L 1 The norms promote zero for many elements of X, increasing the sparsity of X. λ is a regularization parameter that determines the importance of sparsity to the optimization problem.
By encouraging sparsity, the system can represent the data with fewer basis vectors, effectively compressing the data. Sparse coding helps identify and preserve important, representative data features while eliminating unimportant, redundant, or noise-related components. Sparse solutions are easier to interpret because only a few important features are used for data reconstruction.
Φ represents a nonlinear function that maps the elements in the sparse coding matrix X to a new space. This helps capture complex nonlinear relationships that may affect traffic flow. D (D) T Is a transpose of the dictionary matrix D for multiplication with nonlinear features to generate a feature matrix F. F (t, i) =d T Φ (X (t, i)) describes how the feature matrix F is generated from the sparse code X. F contains key features of the traffic data at each time t and sensor i. Through nonlinear transformation and dictionary application, the raw traffic data is converted into a form more suitable for subsequent analysis and prediction tasks.
Example 5: based on the above embodiment, the process of training a deep belief network according to the traffic data feature matrix includes: initializing the weight and deviation of the deep belief network: w (W) i (0) ,Wherein L is the number of hidden layers of the deep belief network; i is a subscript; w (W) i (0) The weight of the ith hidden layer for the initial deep belief network;deviation of the ith hidden layer for the initial deep belief network; use vehicleThe flow data characteristic matrix F is used as input and is transmitted to a deep belief network for forward propagation; in each hidden layer i, calculating the output of the neuron; calculating the output layer output of the deep belief network; the weight and the deviation of the deep belief network are updated to minimize the traffic data prediction error, and the weight and the deviation are updated by using a back propagation algorithm and a loss function, so that training of the deep belief network is completed, and a deep belief network model P is obtained.
In particular, parameters of the network need to be initialized before training the deep belief network. This includes the weight W of each hidden layer i (0) And deviation ofThe weights and deviations are parameters that are adjusted in neural network training to optimize the performance of the network. Typically, these parameters are initially given a small random value. At the beginning of training, a feature matrix F (containing key features of the traffic data) is input into the network. The network propagates forward through the hidden layers, with the neurons of each hidden layer calculating their outputs based on their inputs, weights and activation functions. This output is then passed on to the next hidden layer and so on until the output layer is reached. The output layer of the network gives the prediction result based on the current weight and the input characteristics. For a traffic prediction task, this output may represent an expected number of vehicles or traffic class. Once the network produces an output, it is used to calculate a loss function, typically the difference between the predicted and actual values (e.g., the mean square error). This error is then transmitted back to the network using a back-propagation algorithm to adjust the weights and bias. This process aims at minimizing prediction errors by gradually adjusting network parameters to better adapt the data. By iterating the above process multiple times (i.e., multiple training periods), the deep belief network gradually learns and refines the key features and patterns in the data. Eventually, the network receives the set of weights and biases that provide the best predictive performance on the training data. This final trained network is the model P.
Example 6: on the basis of the above embodiment, in each hidden layer i, the output of the neuron is calculated using the following formula:
wherein a is i Is the input of hidden layer i, h i Is the output of hidden layer i, σ is the activation function; k is training times; the output layer output of the deep belief network is calculated using the following formula:
wherein p=a L+1 And outputting for an output layer of the deep belief network.
In particular, a Deep Belief Network (DBN) analyzes and interprets traffic data feature matrices through multiple processing layers. Each hidden layer uses the weighted inputs and the activation function to calculate the outputs of its neurons, which are then the inputs to the next layer. Finally, the output layer of the network directly gives the traffic flow predicted value. The whole system is used for learning and identifying meaningful modes or characteristics from original traffic flow data, and then using the information to effectively predict future traffic flow conditions, thereby providing accurate data support for traffic management and planning.
First, the input a of layer i is hidden i By combining the output of the previous layer (the feature matrix F for the first hidden layer) with the weight matrix W of that layer i (k) Performing dot multiplication and then adding a deviation termCalculated by the method. Where k represents the number of iterations of the training, as the weights and bias are updated during the training process. / >This operation is a standard operation in neural networks, which provides a linearly combined input for each neuronAnd (5) inputting a value. Calculating to obtain a i The output h of the hidden layer i is then calculated by an activation function sigma i . . This activation function is typically nonlinear, allowing the network to capture and model nonlinear relationships in the input data. Sigma (a) i ) May be a variety of different functions such as Sigmoid, reLU (linear rectifying unit) or hyperbolic tangent functions, etc. Which activation function is selected depends on the specific application scenario and training behavior of the model.
For the output layer, the process is similar to the hidden layer. Output h of last hidden layer L L Input a used to calculate output layer L+1 By a weight matrix with the output layerDot multiplication is performed and then the deviation term +.>Here, a L+1 Is the linear activation value of the output layer neurons. In some network architectures, the output layer may have an activation function, particularly in classification problems, typically a softmax function, for converting the linear output into a probability distribution. However, in this embodiment, the activation value α of the output layer L+1 Directly regarded as the final output p of the network, which means that the task of the network may be regression (predicting continuous values) rather than classification. In this case, p=a L+1 Can be considered as network predicted traffic.
Example 7: on the basis of the above embodiment, the following loss functions are defined:
where alpha is a regularization parameter,representing a weight matrix W i (k+1) The square of the Frobenius norm; p (t, i) is a deep belief networkThe output of the model P represents the prediction of the traffic data at time t and sensor position i; h is a L (t, i) is the output of the last hidden layer in the deep belief network, representing the output of the neuron at time t and sensor location i; updating weights and biases using a back propagation algorithm and a minimization of a loss function; and updating the weight and the deviation of the deep belief network to minimize the error of traffic flow data prediction, thereby completing training of the deep belief network and obtaining a deep belief network model P.
Specifically, the whole formula represents an optimization problem, i.e. finding the optimal weight matrix W i (k+1) And deviation ofSo that the function reaches a minimum. />Calculated are the predicted traffic flow P (t, i) at time t and sensor location i and the last hidden layer output h of the deep belief network L Sum of squares of differences between (t, i). The sum of squares of this difference (often referred to as the mean square error) is a common indicator of the predicted performance of the network. Here- >The coefficients are to simplify the calculation when deriving, since this constant coefficient does not affect the final optimization direction when gradient descent is performed. />Is a regularization term, where α is a regularization parameter that controls the strength of the regularization. />Is a weight matrix W i (k+1) Is penalized by the square of the Frobenius norm, which penalizes large weight values to prevent overfitting. Regularization is a technique commonly used in machine learning to prevent model overfitting on a training set, thereby improving model fitting on a test set or notAnd (3) performance on the data. By minimizing this loss function, the system adjusts the weights and bias so that the prediction error is as small as possible while preventing the weights from becoming too large (overfitting). This optimization is typically done by gradient descent or variants thereof, i.e. calculating the gradient of the loss function with respect to the model parameters, and then updating the parameters in the negative direction of this gradient.
Composition of the loss function: the loss function consists of two parts: part is the sum of squares of the difference between the predicted value and the actual output, reflecting the accuracy of the network prediction; the other part is the sum of squares of the Frobenius norms of all weight matrices multiplied by a regularization parameter a, which is to prevent the problem of model overfitting, i.e. complexity overgrowth, which results in good but bad performance on training data. The Frobenius norm is one of the matrix norms used to measure the size of the matrix elements. Here it is used to regularize the term, limiting the size of the weights, thus preventing overfitting. This approach is a common regularization technique, sometimes also referred to as weight decay or L2 regularization. This is a commonly used neural network learning algorithm that minimizes the loss function by calculating the gradient of the loss function to the network parameters to adjust the parameters (i.e., weights and biases) step by step. Briefly, the algorithm first calculates the difference between the current network output and the expected output, and then back propagates this "error" back to the network, adjusting weights and biases layer by layer. In each training iteration, the performance of the current model is evaluated using the loss function. The weights and biases are then updated according to the gradient of the loss function using a back propagation algorithm. Through multiple iterations, the model will gradually fit the training data, while the regularization term also helps the model maintain proper complexity.
Example 8: on the basis of the above embodiment, the deep belief network model P defines an objective function of extended support vector regression as:
wherein ζ (t, i) and ζ * (t, i) are relaxation variables for tolerating errors in some of the data points; c is regularization parameter, is set value, and is used for weighing error term and regularization term;the Frobenius norm of the prediction error of the vehicle flow data; sigma (sigma) t,i (ξ(t,i)+ξ * (t, i)) is the sum of the relaxation variables; introducing a kernel function to process a nonlinear relation, and mapping a traffic flow data feature matrix F to a feature space with higher dimension; restating the extended support vector regression by a kernel function as a pair of Lagrangian multiplier l and Berkman multiplier l * In order to obtain a traffic flow prediction model S.
Specifically, this function is an optimization problem, with the aim of finding the optimal model P and the Song's variables ζ, ζ * So that the function reaches a minimum.Is the error between the predicted traffic data of the model P and the real traffic data M, measured using the Frobenius norm. This encourages the model P to accurately predict vehicle flow. ζ (t, i) and ζ * (t, i) is a relaxation variable that allows the model to have some deviation between the predicted value and the true value at some data points. This is to make the model somewhat fault tolerant and to prevent overfitting, especially if the data is noisy or outliers are large. C is a regularization parameter used to trade-off the importance of the prediction error and the relaxation variable. A larger C value means less tolerance to prediction errors, while a smaller C value allows more prediction errors. To address the non-linear relationship, embodiments mention the use of a kernel function to map the traffic data feature matrix F to a higher dimensional feature space. The kernel function may be of various types, such as a polynomial kernel, a radial basis function kernel, etc. The choice of kernel function depends on the nature of the data and the requirements of the model. By using a kernel function, the optimization problem can be restated as a Lagrangian multiplier l and a Berkman multiplier l * Is the greatest optimization problem of (a). This expression is a standard form of support vector machine (including SVR) optimization process, and the solution of the original problem is obtained by solving the dual problem. In general, the method in this embodiment combines the principles of deep learning and support vector regression to create a robust and accurate traffic flow prediction model. By using extended support vector regression, the model is better able to handle noise and non-linear relationships in the actual data, thus providing reliable predictions under a variety of conditions.
Representing the difference between the predicted output P (from the deep belief network) and the real traffic data M. This is a quadratic form intended to ensure that P is as close as possible to M. The Frobenius norm is used here to calculate the differences between the matrices (or in this scenario the data sets). ζ (t, i) and ζ * (t, i) is a relaxation variable that allows the predicted value of the data point to deviate to some extent from the actual value. This is because the actual data may contain noise, or the model may not map all data points perfectly. These variables provide a certain "elasticity" to the model, allowing its error to exist rather than requiring exactly perfect matching of every point. The parameter C is used to balance the trade-off between prediction error and model complexity. If C is large, the optimization algorithm will be more prone to reduce prediction errors, even though this may lead to model overfitting (high complexity). Conversely, a smaller C would tend to be a simple model, even though this may increase the prediction error.
In the optimization problem of support vector machines (including support vector regression), some constraints are typically encountered. To address these constraints, the original problem is converted into its dual form, which typically involves a lagrangian multiplier. These multipliers are coefficients that are added to each constraint in the optimization problem to form a new unconstrained problem (lagrangian function), the solution of which is the same as the solution of the original constraint problem. Lagrangian multiplier l and Berkman multiplier l * Is a parameter that occurs in this transition. They representThe constraints of the original problem are incorporated into the weights of the objective function. In the context of support vector regression, these multipliers are associated with support vectors (data points) and determine their importance in determining the decision function (in this case the prediction function).
In support vector machines and support vector regression, the use of the berkman multiplier together with the lagrangian multiplier together constitutes the so-called Karush-Kuhn-turner (KKT) condition, which is a requirement for many optimization solutions. The berkman multiplier is introduced in the dual problem, especially in the optimization problem where there are inequality constraints.
The Berckmann multiplier and the relaxation variables (in this case ζ and ζ) * ) Multiplying forms a penalty term that increases with the degree to which the data points deviate from the "street" boundary. The larger the value of these multipliers, the more critical the support vector that represents the point, and the greater the impact on the model. The lagrangian multiplier and the berkman multiplier together form a KKT condition that helps determine the solution to the optimization problem. In particular, they help ensure that the solution, while meeting a certain tolerance level, also minimizes the prediction error as much as possible. The berkman multiplier helps balance allowing a certain amount of prediction error (through relaxation variables) with keeping the model complexity low (preventing overfitting) when selecting model parameters (e.g., weights). In more complex models, such as the use of kernel methods, the Berckmann multiplier allows the model to build linear decision boundaries in higher-dimensional feature spaces where the original feature space may be nonlinear.
Example 9: on the basis of the above embodiment, defining a kernel function as phi (F), and mapping the traffic data feature matrix F to a feature space with higher dimension; the kernel function is used to represent the inner product:
K(F(t,i),F(t′,i′))=Φ(F(t,i))·Φ(F(t′,i′));
F (t, i) represents the values of the traffic data feature matrix F at the time t and the sensor position i, and F (t ', i') represents the values of the traffic data feature matrix F at the time y 'and the sensor position i'; by kernel function using the following formulaNumber, reformulate extended support vector regression on Lagrangian multiplier l and Bercheman multiplier l * In order to obtain a traffic flow prediction model S:
where l (t, i) represents the Lagrangian multiplier at time t and sensor position i; l (L) * (t ', i') represents the berkman multiplier at time t 'and sensor position i'; solving the maximum optimization problem to obtain a traffic flow prediction model S as follows:
S(t,i)=∑ t′,i′ (l(y′,i′)-l * (y′,i′))K(F(y′,i′),F(t,i));
where S (t, i) represents one element value in the traffic flow prediction matrix, representing the predicted value of the traffic flow at time t and sensor position i.
In particular, the kernel function K (F (t, i), F (t ', i')) allows the dot product of two vectors to be calculated in a high-dimensional feature space without explicitly representing the vectors in the empty-dimensional space. This is done by using vectors in the original feature space, avoiding "dimension disasters". In this scenario, the kernel functions map the traffic data from the original input space to a higher dimensional space, so that nonlinear data relationships can be captured by constructing a linear model in this new space. The maximization problem in the formula is related to the Lagrangian multiplier l and the Berckmann multiplier l * A kind of electronic device. This maximization problem seeks to find the values of the multipliers that maximize the boundary or interval represented by the multipliers and take into account the inner product of the data points represented by the kernel function. Maximizing this function helps to find the optimal hyperplane, separating or predicting the data points of different categories. Once the optimal lagrangian and berkman multipliers are found, they can be used to construct the predictive model S. The model is a weighted sum in which the weights are Lagrangian and Bercheman multipliers and the value of each term is calculated by a kernel functionInner product of related data points. This weighted sum forms a predictive model for predicting the traffic flow at a given moment and location. For a given time t and position i, the traffic flow prediction S (t, i) is calculated by applying the model S over all training data points, including the calculation of the optimal multiplier and kernel associated with each point.
The system processes data in a high-dimensional feature space using a kernel approach, which can capture complex nonlinear relationships that may be insoluble in the original space. The system can then construct an efficient traffic prediction model by solving an optimization problem for the lagrangian and berkman multipliers.
The kernel function K is essentially a function that accepts two vectors in the original data space-in this case, two elements in the traffic data feature matrix F-and returns the dot product of these vectors in the new high-dimensional feature space. This is calculated in the original space, avoiding direct operation in the high-dimensional space. Φ in the formula represents a function that maps the original data space to the high-dimensional feature space. The function of the kernel function K is that it is able to calculate the dot product of Φ (F (t, i)) and Φ (F (t ', i')) without the need to explicitly calculate the map Φ or the vector in the high-dimensional space. In this way, it is possible to "work" in this high-dimensional space indirectly, simply by calculation in the original space. The choice of kernel function K is very critical because it determines the construction of the feature space and how the original data is mapped. There are many different types of kernel functions, such as linear kernels, polynomial kernels, gaussian Radial Basis Function (RBF) kernels, etc., each type being applicable to a different data structure and relationship. In this particular embodiment, a kernel function is used to calculate the similarity of data points at different times and locations in the traffic prediction model. This information is used to construct a predictive model that can evaluate traffic flow conditions at a particular time and location in the future. By using core skills, the model is able to capture nonlinear dynamics of the data, which may not be obvious or difficult to model in the original data space.
This formula is an optimization problem in the Support Vector Regression (SVR) framework using kernel methods, especially when dealing with traffic prediction. Here, the goal is to find a function that has as little error as possible around the training data points while maintaining the smoothness or regularities of the function. This is done by finding the optimal Lagrangian and Berckmann multipliers l * To achieve this, these multipliers are key parameters in building support vector regression models. This formula defines an optimization objective intended to maximize a system involving Lagrangian multiplier l and Berckmann multiplier l * Is a function of (2). In support vector machines and support vector regression, these multipliers are used to define decision boundaries (in SVM) or prediction functions (in SVR). First term sigma t,i l (t, i) is the target to maximize, which promotes the smoothness of the function. The second term relates to the kernel function K and the lagrangian multiplier, which reflects a tradeoff of model complexity and error between training data. The minus sign indicates that these two goals are competing with each other: it is desirable to find a function that performs well on training data while remaining smooth (or regularized). K (F (t, i), F (t ', i')) in the formula represents a kernel function that measures similarity of two points in the new feature space. By using this kernel function, the model is able to implement nonlinear regression in higher dimensional space, which generally better captures patterns of complex data.
S (t, i) is a predictive function that attempts to estimate the traffic flow at a particular time t and a particular location i. This is achieved by summing the contributions of all training data points, each of which is determined by their own Lagrangian multiplier, berckmann multiplier, and kernel function values. (l (t ', i') -l * (t ', i')) represents the difference between the Lagrangian multiplier and the Berckmann multiplier for a particular data point. In support vector regression, the multipliers determine eachThe degree of influence of the data point on the final prediction function. The positive multiplier indicates that the data point is outside the error tolerance range and has a positive effect on the model; while a negative multiplier indicates that the data point has a negative impact on the model. K (F (t ', i'), F (t, i)) is a kernel function that calculates the inner product of two data points in a new (possibly high-dimensional) feature space. The kernel function allows the relationship of the data points to be considered in a more complex space so that a nonlinear pattern of data can be captured without having to directly calculate in this high-dimensional space. Sum of equations t′,i′ All training data points are covered. The contribution of each point is the product of their Lagrangian and Berckmann multipliers (representing the importance of the point to the model) and the kernel value (representing the similarity between the point and the predicted point). This combination ensures that the model not only considers the information for all data points, but also weights the information according to the location and importance of each point.
The present invention has been described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (4)

1. Intelligent traffic control system based on ARM technical system, characterized by that, the said system includes: ARM sensor array, ARM sensor fusion processing system and ARM control system; the ARM sensor array is arranged in a target area and comprises a plurality of preset ARM sensors, and the target area is divided into a plurality of subareas and a central area; the central area is positioned in the geometric center of all the subareas, each ARM sensor acquires traffic flow data of the subarea where the ARM sensor is positioned according to a set time interval, and the acquired traffic flow data is sent to the ARM sensor fusion system after the set time period; ARM sensor fusion system, configuration The system comprises a vehicle flow data matrix for constructing a set time period according to the received vehicle flow data; after corrosion denoising is carried out on the traffic flow data matrix, feature extraction is carried out, and a traffic flow data feature matrix is obtained; training a deep belief network according to the traffic flow data feature matrix; in the training process, continuously carrying out expansion support vector regression processing, and obtaining a traffic flow prediction model after training is completed; predicting the traffic flow of each moment in the future by using a traffic flow prediction model to obtain a traffic flow prediction matrix; the ARM control system is configured to control traffic in the central area according to the traffic flow prediction matrix; the process of training a deep belief network based on the traffic data feature matrix includes: initializing the weight and deviation of the deep belief network:wherein L is the number of hidden layers of the deep belief network; i is a subscript; w (W) i (0) The weight of the ith hidden layer for the initial deep belief network; />Deviation of the ith hidden layer for the initial deep belief network; using a traffic data feature matrix F as input, and transmitting the traffic data feature matrix F to a deep belief network for forward propagation; in each hidden layer i, calculating the output of the neuron; calculating the output layer output of the deep belief network; updating the weight and the deviation of the deep belief network to minimize the prediction error of traffic flow data, and updating the weight and the deviation by using a back propagation algorithm and a loss function so as to complete training of the deep belief network and obtain a deep belief network model P; in each hidden layer i, the output of the neuron is calculated using the following formula:
Wherein a is i Is the input of hidden layer i, h i Is the output of hidden layer i, σ is the activation functionThe method comprises the steps of carrying out a first treatment on the surface of the k is training times; the output layer output of the deep belief network is calculated using the following formula:
wherein p=a L+1 Outputting for an output layer of the deep belief network;
the following loss function is defined:
where alpha is a regularization parameter,representing a weight matrix W i (k+1) The square of the Frobenius norm; p (t, i) is the output of the deep belief network model P, representing the prediction of traffic data at time t and sensor location i; h is a L (t, i) is the output of the last hidden layer in the deep belief network, representing the output of the neuron at time t and sensor location i; updating weights and biases using a back propagation algorithm and a minimization of a loss function; updating the weight and the deviation of the deep belief network to minimize the error of traffic flow data prediction, thereby completing training of the deep belief network and obtaining a deep belief network model P;
in the training process, the depth belief network model P defines an objective function of the extended support vector regression as follows when the extended support vector regression process is continuously carried out:
wherein ζ (t, i) and ζ * (t, i) are relaxation variables for tolerating errors in some of the data points; c is regularization parameter, is set value, and is used for weighing error term and regularization term; The Frobenius norm of the prediction error for the traffic data; sigma (sigma) t,i (ξ(t,i)+ξ * (t, i)) is the sum of the relaxation variables; introducing a kernel function to process a nonlinear relation, and mapping a traffic flow data feature matrix F to a feature space with higher dimension; restating the extended support vector regression by a kernel function as a pair of Lagrangian multiplier l and Berkman multiplier l * In order to obtain a traffic flow prediction model S;
defining a kernel function as phi (F), and mapping the traffic flow data feature matrix F to a feature space with higher dimension; the kernel function is used to represent the inner product:
K(F(t,i),F(t′,i′))=Φ(F(t,i))·Φ(F(t′,i′));
f (t, i) represents the values of the traffic data feature matrix F at the time t and the sensor position i, and F (t ', i') represents the values of the traffic data feature matrix F at the time t 'and the sensor position i'; the extended support vector regression is re-expressed as a pair of Lagrangian multiplier l and Berkman multiplier l by a kernel function using the following formula * In order to obtain a traffic flow prediction model S:
where l (t, i) represents the Lagrangian multiplier at time t and sensor position i; l (L) * (t ', i') represents the berkman multiplier at time t 'and sensor position i'; solving the maximum optimization problem to obtain a traffic flow prediction model S as follows:
S(t,i)=∑ t′,i′ (l(t′,i′)-l * (t′,i′))K(F(t′,i′),F(t,i));
Where S (t, i) represents one element value in the traffic flow prediction matrix, representing the predicted value of the traffic flow at time t and sensor position i.
2. The intelligent traffic control system based on ARM technology architecture as claimed in claim 1, wherein each sensor is pre-provided with a weight value; the expression of each element in the traffic flow data matrix is:
wherein M (t, i) is an element in the traffic data matrix M, characterizing the original values at time t and sensor i; w (i) is the weight of ARM sensor i, and C (i, t) is the traffic flow data of ARM sensor i at time t; n is the number of ARM sensors, which is equal to the number of subareas.
3. The intelligent traffic control system based on the ARM technology system of claim 2, wherein the corrosion denoising of the traffic data matrix is performed using the following formula:
wherein,representing values of the traffic flow data matrix at time t and sensor i after the corrosion operation is applied; b is a structural element for defining the shape and size of the etching operation; (x, y) is the coordinates in structural element B; k (t, i) is a corrosion kernel function; the addition is element-by-element multiplication; />Is a corrosion operation.
4. The intelligent traffic control system based on ARM technology system as claimed in claim 3, wherein the method for extracting features to obtain the feature matrix of the traffic data comprises:
Defining a dictionary matrix D, wherein each column represents a basis vector, using a sparse coding matrix X to represent a traffic data matrix M, learning X by the following optimization problem:
wherein X is a sparse coding matrix used for representing the linear combination of the traffic flow data matrix M; phi (X) is nonlinear transformation, and the sparse coding matrix X is converted into nonlinear characteristics;representing the reconstruction errors of the traffic data matrix M, the dictionary matrix D and the nonlinear feature phi (X), and measuring by using the Frobenius norm; i X I 1 The L1 norm of the sparse coding matrix X is represented by lambda, which is a regularization parameter for controlling sparsity, and the lambda is a set value; by solving the optimization problem, a sparse coding matrix X and a nonlinear characteristic phi (X) are obtained, and a dictionary matrix D and the nonlinear characteristic phi (X) are used for calculating a traffic flow data characteristic matrix F:
F(t,i)=D T Φ(X(t,i));
wherein F (t, i) represents the elements of the characteristic matrix F of the traffic data at time t and sensor i, D T Representing the transpose of the dictionary matrix D.
CN202311400573.9A 2023-10-26 2023-10-26 Intelligent traffic control system based on ARM technology system Active CN117133131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311400573.9A CN117133131B (en) 2023-10-26 2023-10-26 Intelligent traffic control system based on ARM technology system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311400573.9A CN117133131B (en) 2023-10-26 2023-10-26 Intelligent traffic control system based on ARM technology system

Publications (2)

Publication Number Publication Date
CN117133131A CN117133131A (en) 2023-11-28
CN117133131B true CN117133131B (en) 2024-02-20

Family

ID=88863182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311400573.9A Active CN117133131B (en) 2023-10-26 2023-10-26 Intelligent traffic control system based on ARM technology system

Country Status (1)

Country Link
CN (1) CN117133131B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101222742A (en) * 2007-11-22 2008-07-16 中国移动通信集团山东有限公司 Alarm self-positioning and self-processing method and system for mobile communication network guard system
CN106548645A (en) * 2016-11-03 2017-03-29 济南博图信息技术有限公司 Vehicle route optimization method and system based on deep learning
CN109118763A (en) * 2018-08-28 2019-01-01 南京大学 Vehicle flowrate prediction technique based on corrosion denoising deepness belief network
CN110309886A (en) * 2019-07-08 2019-10-08 安徽农业大学 The real-time method for detecting abnormality of wireless sensor high dimensional data based on deep learning
CN111127888A (en) * 2019-12-23 2020-05-08 广东工业大学 Urban traffic flow prediction method based on multi-source data fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11023749B2 (en) * 2019-07-05 2021-06-01 Zoox, Inc. Prediction on top-down scenes based on action data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101222742A (en) * 2007-11-22 2008-07-16 中国移动通信集团山东有限公司 Alarm self-positioning and self-processing method and system for mobile communication network guard system
CN106548645A (en) * 2016-11-03 2017-03-29 济南博图信息技术有限公司 Vehicle route optimization method and system based on deep learning
CN109118763A (en) * 2018-08-28 2019-01-01 南京大学 Vehicle flowrate prediction technique based on corrosion denoising deepness belief network
CN110309886A (en) * 2019-07-08 2019-10-08 安徽农业大学 The real-time method for detecting abnormality of wireless sensor high dimensional data based on deep learning
CN111127888A (en) * 2019-12-23 2020-05-08 广东工业大学 Urban traffic flow prediction method based on multi-source data fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Estimating traffic volumes in an urban network based on taxi GPS and limited LPR data using machine learning techniques;Jiping Xing 等;《2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)》;第20-24页 *
The Application of Edge Computing Technology in the Collaborative Optimization of Intelligent Transportation System Based on Information Physical Fusion;GONGXING YAN 等;《IEEE Access》;第153264-153272页 *
深圳市轨道交通客流预测的新探索;杨良;闫铭;;都市快轨交通(第05期);全文 *

Also Published As

Publication number Publication date
CN117133131A (en) 2023-11-28

Similar Documents

Publication Publication Date Title
Chen et al. PCNN: Deep convolutional networks for short-term traffic congestion prediction
Liu et al. Forecast methods for time series data: a survey
Reddy et al. A deep neural networks based model for uninterrupted marine environment monitoring
Wang et al. Deep learning for real-time crime forecasting and its ternarization
Badrzadeh et al. Hourly runoff forecasting for flood risk management: Application of various computational intelligence models
Du et al. Deep learning with long short-term memory neural networks combining wavelet transform and principal component analysis for daily urban water demand forecasting
Liu et al. A hybrid WA–CPSO-LSSVR model for dissolved oxygen content prediction in crab culture
CN107529651B (en) Urban traffic passenger flow prediction method and equipment based on deep learning
US11803744B2 (en) Neural network learning apparatus for deep learning and method thereof
Giles et al. Noisy time series prediction using recurrent neural networks and grammatical inference
Zhang et al. A graph-based temporal attention framework for multi-sensor traffic flow forecasting
Sugiartawan et al. Prediction by a hybrid of wavelet transform and long-short-term-memory neural network
Chen et al. A novel reinforced dynamic graph convolutional network model with data imputation for network-wide traffic flow prediction
Jalali et al. An advanced short-term wind power forecasting framework based on the optimized deep neural network models
Li et al. Decomposition integration and error correction method for photovoltaic power forecasting
Zou et al. FDN-learning: Urban PM2. 5-concentration spatial correlation prediction model based on fusion deep neural network
Massaoudi et al. Performance evaluation of deep recurrent neural networks architectures: Application to PV power forecasting
He et al. A cooperative ensemble method for multistep wind speed probabilistic forecasting
Tang et al. Missing traffic data imputation considering approximate intervals: A hybrid structure integrating adaptive network-based inference and fuzzy rough set
CN115936069A (en) Traffic flow prediction method based on space-time attention network
Sahoo et al. Multi-step Ahead Urban Water Demand Forecasting Using Deep Learning Models
Mo et al. Annual dilated convolutional LSTM network for time charter rate forecasting
Madhavi et al. Multivariate deep causal network for time series forecasting in interdependent networks
Surakhi et al. On the ensemble of recurrent neural network for air pollution forecasting: Issues and challenges
CN117133131B (en) Intelligent traffic control system based on ARM technology system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant