CN116665483A - Novel method for predicting residual parking space - Google Patents

Novel method for predicting residual parking space

Info

Publication number
CN116665483A
CN116665483A (application CN202310664772.4A)
Authority
CN
China
Prior art keywords
data
parking
gru
decomposition
ceemdan
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310664772.4A
Other languages
Chinese (zh)
Inventor
孟炜
张凯
葸国隆
乌惜辰
马昌喜
黄晓婷
金科臣
来永寿
贺全红
张志强
侯树杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gansu Longyuan Information Technology Co ltd
Original Assignee
Gansu Longyuan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gansu Longyuan Information Technology Co ltd filed Critical Gansu Longyuan Information Technology Co ltd
Priority to CN202310664772.4A priority Critical patent/CN116665483A/en
Publication of CN116665483A publication Critical patent/CN116665483A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/10 - Pre-processing; Data cleansing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 - Feature extraction based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 - Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 - Preprocessing
    • G06F2218/04 - Denoising
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 - Feature extraction
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00 - Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a novel method for predicting a residual parking space, belonging to the technical field of intelligent parking. The method comprises the following steps: decomposing the parking original data into a plurality of IMFs with different frequencies by using a CEEMDAN decomposition algorithm, decomposing fluctuation features and trends of different scales from the original data step by step to eliminate noise; performing PCA dimension reduction on the decomposed data, extracting abstract high-level features, and eliminating the correlation and redundancy of the different time series obtained by CEEMDAN decomposition; and performing deep learning prediction by adopting a GRU neural network. The invention integrates the ideas of the three disciplines of signal processing, statistics and deep learning: based on the preliminary CEEMDAN decomposition of the parking original data, principal component analysis is used for noise reduction and feature reconstruction of the decomposition sequences, key principal components are selected according to the cumulative variance contribution rate, the feature dimension is reduced, and redundant fluctuation information is removed. Finally, the dimension-reduced features and the original data are input into a GRU network for nonlinear modeling, and the prediction result is output by utilizing the long-term memory of the GRU.

Description

Novel method for predicting residual parking space
Technical Field
The invention relates to the technical field of intelligent parking, in particular to a novel method for predicting a residual parking space.
Background
Parking has long been an important issue in urban areas around the world, and research related to parking space guidance and search has received increasing attention. The imbalance between the rapidly growing number of vehicles and limited parking spaces, together with increasingly scarce urban land resources, is exacerbating the parking dilemma. Parking queuing and cruising aggravate urban road congestion during peak hours, and vehicles cruising at low speed operate in a state of high emissions, high energy consumption and high noise, polluting the environment and threatening human health.
At present, the more important means of addressing the difficulty of parking is to regulate and control parking spaces reasonably through scientific methods: by analyzing and predicting static traffic data, drivers can grasp the dynamic state of parking spaces in real time, thereby improving the utilization rate of parking spaces. Reasonable parking guidance relies on accurate prediction of parking spaces; on the premise of fully investigating the relevant features of parking and analyzing the factors affecting the number of parking spaces, an accurate prediction of the short-term number of remaining parking spaces is particularly important. How to predict the number of remaining parking spaces accurately and efficiently has become one of the most popular research subjects in traffic information management and parking guidance systems.
In recent years, parking space prediction has become an important research topic and has made significant progress. It is a typical time series prediction problem. Unlike classification and regression problems, time series prediction adds the complexity of order and time dependence between observations, which makes it more complex than conventional prediction problems. Linear prediction models based on statistics were widely used in early parking prediction research. However, statistical methods cannot capture the dynamics of parking data because of the randomness and complex nonlinearity of the parking sequence. Therefore, how to obtain satisfactory prediction performance remains a problem to be solved.
With the development of computer technology, machine learning has emerged as a class of artificial intelligence algorithms that can solve more complex problems and mine deeper hidden information from historical parking data. Neural networks learn the complex relationships in the data and adjust their overall structure to improve the prediction accuracy of the model, but such models suffer from problems such as overfitting, gradient explosion and local extrema.
Accurate prediction of the remaining parking spaces plays a vital role in optimizing the utilization of parking resources and improving traffic conditions. However, most previous research builds models either on parking history data alone or on numerous factors influencing parking prediction, which increases the complexity of the data and the time spent running the model; the resulting models fit extreme points poorly and cannot meet the requirements of practical application.
Disclosure of Invention
The object of the present invention is to provide a new method for predicting the number of remaining parking spaces based on a hybrid prediction model combining complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and a gated recurrent unit (GRU) network. CEEMDAN is used as a sequence-smoothing decomposition module that gradually decomposes time-series fluctuations or trends of different scales to generate a series of intrinsic mode functions (IMFs) with different characteristic scales; then, while retaining most of the information of the original data, principal component analysis (PCA) reduces the dimension of the decomposed IMF sequences, eliminates redundant information, and improves the prediction response speed; finally, the high-level abstract features are input into the GRU network, and the construction, testing and prediction of the network are completed based on the deep learning framework Keras. The validity of the proposed model is verified using a real parking data set acquired from a parking lot, thereby solving the problems set forth in the background art described above.
The technical scheme adopted by the invention is as follows:
a new method of predicting remaining parking spaces, comprising the steps of:
step one: data preprocessing, namely, summarizing the obtained parking big data to form an original parking time sequence, and detecting whether abnormal data exist or not; decomposing the parking original data into a plurality of IMFs with different frequencies by using a CEEMDAN decomposition algorithm, and decomposing fluctuation features and trends with different scales from the original data step by step to eliminate noise;
step two: performing PCA dimension reduction processing on the decomposed data by adopting a PCA algorithm, extracting abstract high-level features, improving the calculation efficiency, and eliminating the correlation and redundancy of different time sequences after CEEMDAN decomposition;
step three: deep learning prediction, normalizing original parking data and reduced-dimension data, converting multidimensional data into an input format of a GRU neural network, and dividing a training set and a testing set; initializing parameters of the GRU neural network, and continuously training a model to achieve ideal precision; and outputting a parking space prediction sequence by the GRU output layer, calculating errors and evaluating the model effect.
The CEEMDAN decomposition algorithm in the step one comprises the following specific steps:
1) For a given original sequence x(t), adaptive Gaussian white noise n_j(t) is added to obtain the signal x_j(t), where x_j(t) is expressed as:
x_j(t) = x(t) + p_i·n_j(t), (i = 1,2,…,M; j = 1,2,…,N) (1)
where i denotes the index of the IMF in step one, j denotes the index of the trial, and p_i denotes the standard deviation of the added noise, set by the user;
2) The signal x_j(t) is decomposed using EMD to obtain the IMF component IMF_1^j(t), i.e. the first IMF component of x_j(t) after EMD decomposition;
3) Steps 1) to 2) are repeated N times, adding a different Gaussian white noise n_j(t) each time;
4) The mean value of the IMF components obtained from the N decompositions is taken as the first IMF, expressed as follows:
IMF_1(t) = (1/N)·Σ_{j=1}^{N} IMF_1^j(t) (2)
The residual signal is expressed as:
r_1(t) = x(t) - IMF_1(t) (3)
Then e_1(·) is defined as the operator that extracts the first IMF component derived from EMD. The sequence r_1(t) + p_2·e_1(n_j(t)) is decomposed by EMD, and the second IMF component is expressed as follows:
IMF_2(t) = (1/N)·Σ_{j=1}^{N} e_1(r_1(t) + p_2·e_1(n_j(t))) (4)
The residual signal is expressed as:
r_2(t) = r_1(t) - IMF_2(t) (5)
Similar to the above steps, the k-th residual signal is expressed as:
r_k(t) = r_{k-1}(t) - IMF_k(t) (6)
The (k+1)-th IMF component is expressed as:
IMF_{k+1}(t) = (1/N)·Σ_{j=1}^{N} e_1(r_k(t) + p_{k+1}·e_k(n_j(t))) (7)
Finally, the above steps are repeated until the residual signal r_k(t) becomes a constant function or a monotonic function. Assuming a total of M IMF components, the original sequence can be expressed as follows:
x(t) = Σ_{i=1}^{M} IMF_i(t) + r(t) (8)
where r(t) represents the final residual signal that has become a constant function or a monotonic function.
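For illustration only, the following Python sketch shows how this decomposition step could be carried out with the open-source PyEMD package; the package choice, parameter values and the synthetic demo series are assumptions and not part of the claimed method.

```python
# Illustrative sketch: CEEMDAN decomposition of a parking time series,
# assuming the third-party PyEMD package (pip install EMD-signal).
import numpy as np
from PyEMD import CEEMDAN

def decompose_parking_series(series: np.ndarray, trials: int = 100) -> np.ndarray:
    """Decompose a 1-D parking series into IMFs (last row is the residual)."""
    ceemdan = CEEMDAN(trials=trials)   # number of noise realizations N
    imfs = ceemdan(series)             # shape: (n_imfs, len(series))
    return imfs

if __name__ == "__main__":
    t = np.arange(4320)
    # synthetic stand-in for the remaining-space series described in the patent
    demo = 150 + 50 * np.sin(2 * np.pi * t / 144) + np.random.normal(0, 5, t.size)
    imfs = decompose_parking_series(demo)
    print("number of IMFs (including residual):", imfs.shape[0])
```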
The PCA algorithm in the second step comprises the following specific steps:
1) When the original data set consists of n groups of data, each group having m parameters, the matrix form is as follows:
X = (x_ij)_{n×m}, i = 1,2,…,n; j = 1,2,…,m (9)
2) The matrix X is normalized to obtain the matrix Z:
z_ij = (x_ij - x̄_j) / s_j (10)
where x̄_j and s_j are the mean and standard deviation of the j-th column of X, respectively;
3) The covariance matrix of Z is calculated:
C = (1/(n-1))·Z^T·Z (11)
4) According to |λ_i·E - C| = 0, the eigenvalues λ_i (λ_1 > λ_2 > … > λ_m) of the matrix C and the corresponding eigenvectors u_i are calculated:
u_i = [u_i1 u_i2 … u_im]^T, i = 1,2,…,m (12)
5) The principal components F_i are calculated:
F_i = u_1i·X_1 + u_2i·X_2 + … + u_mi·X_m, i = 1,2,…,m (13)
6) The contribution rate of the i-th principal component is k_i, and the cumulative contribution rate of the first i principal components is p_i:
k_i = λ_i / Σ_{j=1}^{m} λ_j (14)
p_i = Σ_{j=1}^{i} λ_j / Σ_{j=1}^{m} λ_j (15)
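As an illustrative sketch only, the dimension reduction described above can be expressed with scikit-learn; the use of scikit-learn, the IMF matrix layout and the 0.95 cumulative-contribution threshold are assumptions rather than disclosed implementation details.

```python
# Illustrative sketch: PCA on the CEEMDAN-decomposed IMFs, keeping the
# smallest number of components whose cumulative contribution reaches a threshold.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reduce_imfs(imfs: np.ndarray, threshold: float = 0.95):
    """imfs: (n_imfs, n_samples) -> retained principal components (n_samples, k)."""
    X = imfs.T                                    # samples as rows, IMFs as columns
    Z = StandardScaler().fit_transform(X)         # standardization, cf. eq. (10)
    pca = PCA().fit(Z)
    cumulative = np.cumsum(pca.explained_variance_ratio_)   # cf. eq. (15)
    k = int(np.searchsorted(cumulative, threshold) + 1)      # smallest k reaching threshold
    features = pca.transform(Z)[:, :k]
    return features, pca.explained_variance_ratio_, cumulative, k
```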
The specific expression of the GRU neural network in the third step is as follows:
z_t = σ(W_z·[h_{t-1}, x_t]) (16)
r_t = σ(W_r·[h_{t-1}, x_t]) (17)
h'_t = tanh(W_h·[r_t ⊙ h_{t-1}, x_t]) (18)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h'_t (19)
where x_t is the input value, z_t is the update gate state, r_t is the reset gate state, h'_t is the current candidate state, W denotes the weight matrices, h_t is the current hidden state, ⊙ denotes the Hadamard product, and σ denotes the sigmoid function.
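A minimal NumPy sketch of one GRU step implementing equations (16)-(19) is given below; the weight shapes and the bias-free form follow the simplified notation above and are assumptions, not the exact network used in the experiments.

```python
# One GRU time step per equations (16)-(19).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x_t, W_z, W_r, W_h):
    """h_prev: (H,), x_t: (D,), W_*: (H, H + D) acting on [h_prev, x_t]."""
    concat = np.concatenate([h_prev, x_t])
    z_t = sigmoid(W_z @ concat)                                   # update gate, eq. (16)
    r_t = sigmoid(W_r @ concat)                                   # reset gate, eq. (17)
    h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))   # candidate state, eq. (18)
    h_t = (1.0 - z_t) * h_prev + z_t * h_cand                     # new hidden state, eq. (19)
    return h_t
```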
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
1. the invention starts from the parking sequence itself and uses the signal decomposition algorithm CEEMDAN to reduce the random fluctuation of the original sequence, generating a series of more stable IMFs. This directly and fully extracts the implicit fluctuation and overall change characteristics of the time series without depending on the selection of external features, and reflects the intrinsic characteristics of the data.
2. The invention integrates the ideas of three subjects of signal processing, statistics and deep learning, performs noise reduction and feature reconstruction on a decomposition sequence by using principal component analysis based on the preliminary decomposition of CEEMDAN on parking original data, selects key principal components according to the accumulated variance contribution rate, reduces feature dimension, and removes redundant fluctuation information. And finally, inputting the feature subjected to dimension reduction and the original data into a GRU network for nonlinear modeling, and outputting a prediction result by utilizing the long-term memory of the GRU.
In summary, the present invention predicts the number of remaining parking spaces based on a hybrid prediction model combining complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and a gated recurrent unit (GRU) network. In the model, CEEMDAN is used as a sequence-smoothing decomposition module that decomposes time-series fluctuations or trends of different scales step by step, generating a series of intrinsic mode functions (IMFs) with different characteristic scales. Then, while retaining most of the information of the original data, principal component analysis (PCA) reduces the dimension of the decomposed IMF sequences, eliminates redundant information, and improves the prediction response speed. The high-level abstract features are then input into the GRU network, and the construction, testing and prediction of the network are completed based on the deep learning framework Keras. The validity of the proposed model is verified using the real parking data set acquired from a parking lot. Experimental results show that the proposed model is superior to the reference models in terms of prediction accuracy.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a frame diagram of the present invention;
FIG. 3 is a diagram of the internal cell architecture of the GRU neural network of the invention;
FIG. 4 is a diagram of the original data of a parking lot according to an experimental example of the present invention;
FIG. 5 is a diagram showing the decomposition results of CEEMDAN according to the present invention;
FIG. 6 is a graph showing the variance contribution rate of the principal component and the cumulative variance contribution rate in the PCA dimension reduction process according to the present invention;
FIG. 7 is a graph showing the predicted results of comparative model I in the experimental results and analysis of the present invention;
FIG. 8 is a graph of the predicted results of the comparison of model II and model IV in the experimental results and analysis of the present invention;
FIG. 9 is a graph showing the results of the comparison model III in the experimental results and analysis of the present invention;
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1-9, the present embodiment provides a new method for predicting a remaining parking space, which includes the following steps:
step one: data preprocessing, namely, summarizing the obtained parking big data to form an original parking time sequence, and detecting whether abnormal data exist or not; decomposing the parking original data into a plurality of IMFs with different frequencies by using a CEEMDAN decomposition algorithm, and decomposing fluctuation features and trends with different scales from the original data step by step to eliminate noise;
step two: performing PCA dimension reduction processing on the decomposed data by adopting a PCA algorithm, extracting abstract high-level features, improving the calculation efficiency, and eliminating the correlation and redundancy of different time sequences after CEEMDAN decomposition;
step three: deep learning prediction, normalizing original parking data and reduced-dimension data, converting multidimensional data into an input format of a GRU neural network, and dividing a training set and a testing set; initializing parameters of the GRU neural network, and continuously training a model to achieve ideal precision; and outputting a parking space prediction sequence by the GRU output layer, calculating errors and evaluating the model effect.
The CEEMDAN decomposition algorithm in the step one comprises the following specific steps:
1) For a given original sequence x(t), adaptive Gaussian white noise n_j(t) is added to obtain the signal x_j(t), where x_j(t) is expressed as:
x_j(t) = x(t) + p_i·n_j(t), (i = 1,2,…,M; j = 1,2,…,N) (1)
where i denotes the index of the IMF in step one, j denotes the index of the trial, and p_i denotes the standard deviation of the added noise, set by the user;
2) The signal x_j(t) is decomposed using EMD to obtain the IMF component IMF_1^j(t), i.e. the first IMF component of x_j(t) after EMD decomposition;
3) Steps 1) to 2) are repeated N times, adding a different Gaussian white noise n_j(t) each time;
4) The mean value of the IMF components obtained from the N decompositions is taken as the first IMF, expressed as follows:
IMF_1(t) = (1/N)·Σ_{j=1}^{N} IMF_1^j(t) (2)
The residual signal is expressed as:
r_1(t) = x(t) - IMF_1(t) (3)
Then e_1(·) is defined as the operator that extracts the first IMF component derived from EMD. The sequence r_1(t) + p_2·e_1(n_j(t)) is decomposed by EMD, and the second IMF component is expressed as follows:
IMF_2(t) = (1/N)·Σ_{j=1}^{N} e_1(r_1(t) + p_2·e_1(n_j(t))) (4)
The residual signal is expressed as:
r_2(t) = r_1(t) - IMF_2(t) (5)
Similar to the above steps, the k-th residual signal is expressed as:
r_k(t) = r_{k-1}(t) - IMF_k(t) (6)
The (k+1)-th IMF component is expressed as:
IMF_{k+1}(t) = (1/N)·Σ_{j=1}^{N} e_1(r_k(t) + p_{k+1}·e_k(n_j(t))) (7)
Finally, the above steps are repeated until the residual signal r_k(t) becomes a constant function or a monotonic function. Assuming a total of M IMF components, the original sequence can be expressed as follows:
x(t) = Σ_{i=1}^{M} IMF_i(t) + r(t) (8)
where r(t) represents the final residual signal that has become a constant function or a monotonic function.
The PCA algorithm in the second step comprises the following specific steps:
1) When the original data set consists of n groups of data, each group having m parameters, the matrix form is as follows:
X = (x_ij)_{n×m}, i = 1,2,…,n; j = 1,2,…,m (9)
2) The matrix X is normalized to obtain the matrix Z:
z_ij = (x_ij - x̄_j) / s_j (10)
where x̄_j and s_j are the mean and standard deviation of the j-th column of X, respectively;
3) The covariance matrix of Z is calculated:
C = (1/(n-1))·Z^T·Z (11)
4) According to |λ_i·E - C| = 0, the eigenvalues λ_i (λ_1 > λ_2 > … > λ_m) of the matrix C and the corresponding eigenvectors u_i are calculated:
u_i = [u_i1 u_i2 … u_im]^T, i = 1,2,…,m (12)
5) The principal components F_i are calculated:
F_i = u_1i·X_1 + u_2i·X_2 + … + u_mi·X_m, i = 1,2,…,m (13)
6) The contribution rate of the i-th principal component is k_i, and the cumulative contribution rate of the first i principal components is p_i:
k_i = λ_i / Σ_{j=1}^{m} λ_j (14)
p_i = Σ_{j=1}^{i} λ_j / Σ_{j=1}^{m} λ_j (15)
In the third step, the internal cell structure of the GRU neural network is as shown in fig. 3, and the specific expression of the GRU neural network is as follows:
z_t = σ(W_z·[h_{t-1}, x_t]) (16)
r_t = σ(W_r·[h_{t-1}, x_t]) (17)
h'_t = tanh(W_h·[r_t ⊙ h_{t-1}, x_t]) (18)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h'_t (19)
In FIG. 3 and equations (16)-(19), x_t is the input value, z_t is the update gate state, r_t is the reset gate state, h'_t is the current candidate state, W denotes the weight matrices, h_t is the current hidden state, ⊙ denotes the Hadamard product, σ denotes the sigmoid function, and y_t is the predicted value.
Experimental example analysis
1. Experimental data
The real parking data set collected for this study spans from November 1 to November 30, 2020. The garage has 336 effective parking spaces, and the data collection interval is 10 minutes; each record includes the collection time, the numbers of entering and exiting vehicles, the number of parked vehicles, the occupancy rate and other fields. Too small a sampling interval would amplify errors, while too large an interval would fail to reflect the time-varying characteristics of the number of parking spaces, so this experimental example chooses a relatively large sampling interval (10 minutes) to reduce the fluctuation of the data while still describing the overall variation over the whole period. Whether the original data contain abnormal values is judged by an outlier detection function, and 4320 valid parking records are finally obtained. Table 1 shows partial sample data for November 22. A visualization of part of the data is shown in fig. 4; the three plots in fig. 4 show the trend of the parking garage data over one day, one week and the whole collection period, respectively. It can be clearly seen from the figure that the parking data are a nonlinear and non-stationary signal, influenced by various factors and exhibiting certain randomness and periodicity; the number of parked vehicles increases every day from about 5 a.m., reaches a peak around noon, and then decreases.
Table 1 Partial sample data of the parking garage

Time | Vehicles entering | Vehicles leaving | Parked vehicles | Occupancy rate
8:00 | 5 | 0 | 189 | 56.25%
8:10 | 2 | 0 | 191 | 56.85%
8:20 | 3 | 1 | 193 | 57.44%
8:30 | 1 | 0 | 194 | 57.74%
8:40 | 5 | 0 | 199 | 59.23%
8:50 | 6 | 2 | 203 | 60.42%
9:00 | 8 | 0 | 211 | 62.80%
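For illustration only, the preprocessing described above could be assembled as in the following sketch; the file path, column names and interpolation choice are hypothetical and not part of the disclosed method.

```python
# Hypothetical preprocessing sketch: resample raw records to the 10-minute
# interval used in the experiment and mask out-of-range values against the
# 336-space capacity before interpolating.
import pandas as pd

CAPACITY = 336

def load_remaining_spaces(csv_path: str) -> pd.Series:
    df = pd.read_csv(csv_path, parse_dates=["time"]).set_index("time")
    occupied = df["parked_vehicles"].resample("10min").last().ffill()
    occupied = occupied.mask((occupied < 0) | (occupied > CAPACITY))  # simple outlier check
    occupied = occupied.interpolate()
    return CAPACITY - occupied    # number of remaining parking spaces
```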
2. Decomposition results and analysis of CEEMDAN
The fluctuation of parking data is affected by the superposition of different factors whose components are difficult to separate, and these factors can also cause sudden, irregular disturbances in the parking data that resemble noise in a signal. Decomposition is an important step in signal processing to eliminate the effect of noise. Drawing on the advantages of signal processing, the invention adopts the advanced CEEMDAN decomposition technique to decompose the original data into subsequences with different characteristics; these subsequences reduce the nonlinearity of the original sequence and extract local features, which helps to grasp the overall trend and implicit information of the change in parking spaces. The results after decomposition are shown in FIG. 5. The uppermost curve is the original sequence, IMF1-IMF9 are the 9 IMF components obtained by decomposition, arranged from high frequency to low frequency, and the last is the residual sequence, which is not considered in the modeling because of its small magnitude. FIG. 5 clearly shows that the amplitude and fluctuation of the 9 components decrease in turn, which indicates that the random fluctuation of the original sequence is significantly reduced by CEEMDAN decomposition compared with the original data. The resulting more stable IMFs help to provide a relatively stationary time series for the subsequent prediction model.
3. Results and analysis of PCA
PCA converts high-dimensional correlated input vectors into low-dimensional uncorrelated principal components. More importantly, extracting the first few principal components almost guarantees the validity and integrity of the original data. This reduces the input dimension of the deep learning model and shortens training time, thereby improving the running efficiency of the model and saving running cost. Table 2 shows the variance contribution rate and the cumulative contribution rate of each component, and their variation is shown in fig. 6. The cumulative contribution rate of the first four principal components of the CEEMDAN-decomposed IMF sequences after PCA exceeds 90%, and the screened principal components are strongly representative of the original feature sequences and synthesize their information well. To obtain better model prediction performance, the values of the first four principal components (cumulative contribution up to 96%) were calculated and used as training sample data input to the GRU instead of the original variables.
Table 2 Variance contribution rates of the principal components

Component | Variance contribution rate/% | Cumulative variance contribution rate/%
1 | 44.86821412 | 44.868214
2 | 38.39167529 | 83.259889
3 | 8.106849802 | 91.366739
4 | 4.851063754 | 96.217803
5 | 2.46951539 | 98.687318
6 | 1.107839843 | 99.795158
7 | 0.131505977 | 99.926664
8 | 0.041081431 | 99.967746
9 | 0.032254393 | 100
4. Predictive model parameter setting
Before GRU neural network prediction, the 4320 sets of data are normalized; this experimental example uses the min-max normalization method to linearly transform the original data into values in [0,1]. The conversion formula is shown in formula (20). Normalizing the data not only makes full use of the fitting capacity of the neural network but also accelerates the convergence of the model.
x* = (x - min) / (max - min) (20)
where x is the original value, max is the maximum value of the sample data, min is the minimum value of the sample data, and x* is the normalized value.
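As a brief sketch of equation (20), the scaling can be done as follows; using scikit-learn's MinMaxScaler is an assumption (any equivalent implementation works), and the placeholder array stands in for the original series plus the PCA features.

```python
# Min-max normalization per eq. (20), applied column-wise.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
data = np.random.rand(4320, 5)            # placeholder for [original series, PCA features]
scaled = scaler.fit_transform(data)       # x* = (x - min) / (max - min) per column
# scaler.inverse_transform(...) restores predictions to the original scale
```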
Then 90% of the data set was used as training data and 10% as test data, and the data set was converted into an input format suitable for the GRU network. The adaptive moment estimation (Adam) algorithm was selected in this experimental example to optimize and adjust the parameters of the prediction model. The loss function is defined by the mean square error (MSE). Since the ReLU function has no gradient vanishing problem when the input is positive, the activation function of the GRU units is set to the ReLU function in the experiment. In order to achieve the best training effect of the recurrent neural network and reduce the training set error to its most stable value, the model is trained for 50 iterations in total. Since the training set contains a large amount of data, the batch size is set to 60, i.e. the model updates its parameters after processing 60 groups of data. An early stopping strategy is added before training to prevent the model from overfitting. In the network training and prediction process, since the weights and thresholds are generated from random numbers, the network convergence rate differs between runs. The model parameter values obtained after repeated comparative tests on several parameters are shown in Table 3.
Table 3 Parameter values of the GRU network

Parameter | Value
Training set size | 3888
Test set size | 432
Input layer time steps | 6
Input layer dimension | (6, 5)
Number of hidden layers | 2
Fully connected layer units | 64
GRU layer units | 32
Number of iterations | 50
Batch size | 60
Output layer dimension | 1
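A minimal Keras sketch consistent with the Table 3 settings is given below (input of 6 time steps x 5 features, a 32-unit GRU layer, a 64-unit fully connected layer, Adam, MSE, 50 epochs, batch size 60, early stopping). The exact layer arrangement and windowing helper are assumptions, as the patent does not disclose the full implementation.

```python
# Illustrative sketch: sliding-window input construction and GRU model per Table 3.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(features: np.ndarray, target: np.ndarray, steps: int = 6):
    """features: (n_samples, 5) -> X: (n, steps, 5), y: (n,)."""
    X, y = [], []
    for i in range(len(features) - steps):
        X.append(features[i:i + steps])
        y.append(target[i + steps])
    return np.asarray(X), np.asarray(y)

def build_gru_model(time_steps: int = 6, n_features: int = 5) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(time_steps, n_features)),
        layers.GRU(32, activation="relu"),     # GRU layer units = 32
        layers.Dense(64, activation="relu"),   # fully connected layer units = 64
        layers.Dense(1),                       # output layer dimension = 1
    ])
    model.compile(optimizer="adam", loss="mse")  # Adam optimizer, MSE loss
    return model

# Training call consistent with Table 3 (50 epochs, batch size 60) plus early stopping:
# model.fit(X_train, y_train, epochs=50, batch_size=60, validation_split=0.1,
#           callbacks=[keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)])
```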
5. Error index
In terms of model evaluation, to assess the data processing more accurately, three commonly used error indicators were chosen as the basis for evaluation: mean absolute error (MAE), mean square error (MSE) and root mean square error (RMSE). The best prediction model is determined by comparing the MAE, MSE and RMSE of each model; the model with the lowest value of each indicator is the best prediction model. The evaluation indices are calculated as follows:
MAE = (1/n)·Σ_{i=1}^{n} |y_i - ŷ_i| (21)
MSE = (1/n)·Σ_{i=1}^{n} (y_i - ŷ_i)² (22)
RMSE = √((1/n)·Σ_{i=1}^{n} (y_i - ŷ_i)²) (23)
where n is the number of samples, y_i is the actual number of parked vehicles of the i-th sample, and ŷ_i is the predicted number of parked vehicles of the i-th sample.
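The three indices can be computed directly as in the following sketch; a plain NumPy version is shown here, though scikit-learn's metrics module would work equivalently.

```python
# MAE, MSE and RMSE for a vector of true and predicted remaining-space counts.
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_true - y_pred
    mae = np.mean(np.abs(err))    # Mean Absolute Error
    mse = np.mean(err ** 2)       # Mean Square Error
    rmse = np.sqrt(mse)           # Root Mean Square Error
    return {"MAE": mae, "MSE": mse, "RMSE": rmse}
```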
6. Experimental results and analysis
In order to verify the predictive power of the proposed model in the prediction of parking spaces, the experimental example uses a plurality of comparison models for comparison under valid parking time series data. In comparison model I, LSTM, BI_LSTM and GRU are compared to analyze the effectiveness of the different single models and select the best single prediction model among all single models used. In comparison model II, two data decomposition algorithms, including EMD and CEEMDAN models, were introduced in combination with the GRU based on PCA dimension reduction, demonstrating the significant effectiveness of data decomposition techniques in time series prediction. Then, in comparative model III, the CEEMDAN algorithm was demonstrated to have significant impact on the prediction results of other models using CEEMDAN-PCA-LSTM, CEEMDAN-PCA-BI_LSTM, and CEEMDAN-PCA-GRU models. In the comparison model IV, after CEEMDAN decomposition, subsequences with different characteristics are sequentially input into the GRU model to generate predicted values of a single sequence, all the predicted values are finally comprehensively overlapped to form a final predicted result, and the advantage of PCA dimension reduction processing is verified by comparison with CEEMDAN-PCA-GRU. Finally, the developed model is compared with all the comparison models, and the proposed model is proved to be superior to all the comparison models in predicting the number of parking places.
To ensure the validity of the experiments, all experiments were performed with the same experimental setup and used the same training and test sets. In addition, in order to reduce the effect of model randomness on the prediction results, 10 repeated runs were performed for all models. The predictions of comparison models I, II, III and IV are visualized in FIGS. 7, 8 and 9. To verify the advantage of the proposed model in parking space prediction accuracy, the prediction results of each model were evaluated using the three evaluation indices defined in the error index section above, and the results are shown in Table 4. Meanwhile, considering the cost of the proposed model and the comparison models, the computer running time of each model is shown in Table 5.
As can be clearly seen in conjunction with FIGS. 7-9, all models are able to approximately fit the trend of the parking sequence, which also suggests that deep learning models can learn and mine deep temporal characteristics when dealing with long parking time series that have dependencies. However, the predictions of the single models fluctuate significantly and deviate markedly from the actual data because of the nonlinearity and complexity of the parking sequence itself. Without decomposition of the original data, it is difficult to extract the inherent hidden feature information of the sequence data and obtain a good fitting effect. Compared with the single models, the combined models with decomposition processing show a qualitative improvement in fitting the real data. The CEEMDAN-PCA-GRU combined prediction model provided by the invention has the best prediction performance and is closest to the fluctuation of the actual sequence, which also demonstrates the feasibility and effectiveness of the model.
Table 4 Prediction errors of the different models

Model | MAE | MSE | RMSE
LSTM | 2.39726 | 10.40985 | 3.22643
BI_LSTM | 2.31742 | 11.22288 | 3.35006
GRU | 2.12857 | 8.38794 | 2.89619
CEEMDAN-PCA-LSTM | 2.14742 | 8.16461 | 2.85738
CEEMDAN-PCA-BI_LSTM | 2.27126 | 9.28348 | 3.04688
EMD-PCA-GRU | 1.72953 | 6.63868 | 2.57656
CEEMDAN-GRU | 1.92480 | 7.53878 | 2.74568
CEEMDAN-PCA-GRU | 1.62852 | 5.31956 | 2.30642
The following conclusions can be drawn from the prediction errors of the different models in Table 4:
(1) In the comparison of single models, the GRU has a lower prediction error than the similar deep learning models LSTM and BI_LSTM; although all three network structures are designed to solve the problem that RNNs cannot capture long-term dependencies in sequence prediction, the GRU has the advantages of a simpler structure, fewer parameters to train and easier implementation. Considering both computer running time and computational performance, the invention adopts the GRU to predict the number of parking spaces.
(2) Comparison model II demonstrates the contribution of the data decomposition algorithms to parking space prediction. Comparing the prediction errors of the GRU, EMD-PCA-GRU and CEEMDAN-PCA-GRU models, the MSE of the models incorporating the EMD and CEEMDAN decomposition algorithms is reduced by 1.74926 and 3.06838, respectively, compared with the MSE of the single GRU model. The CEEMDAN algorithm adds adaptive white noise to the IMF components at each decomposition, so that less residual noise remains in the components, reconstruction errors are effectively reduced, and the fluctuation characteristics of the parking sequence in different periods can be captured more accurately; its prediction effect is therefore better than that of the EMD algorithm.
(3) To verify the effect of CEEMDAN on other prediction models, CEEMDAN was combined with LSTM, BI_LSTM and GRU in comparison model III. The results show that all evaluation indices of the model combined with the GRU are superior to those of the other comparison models, and that CEEMDAN also significantly improves the prediction effect of the other models.
(4) The PCA analysis method screens the main components of the decomposed components, reduces the input dimension of the prediction model, and eliminates redundancy and correlation among different subsequences decomposed by CEEMDAN to a certain extent.
(5) Comparing CEEMDAN-PCA-GRU with all other models, the MAE, MSE and RMSE of the proposed model are 1.62852, 5.31956 and 2.30642, respectively, and all evaluation indices are optimal. This shows that the model has a lower prediction error and a better fitting effect than the other single and hybrid models, and that hybrid models based on deep learning also have great advantages and great potential in parking sequence prediction.
Table 5 Running time of each model
All experiments were performed on a 64-bit operating system (Windows 11) with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz. As can be seen from Table 5, the GRU has a lower prediction error and a lower running cost than LSTM and BI_LSTM due to its simple structure. Compared with the CEEMDAN-GRU model, the introduction of PCA in CEEMDAN-PCA-GRU reduces the dimension of the abstract features; on the premise that more than 90% of the original data information is retained, it reduces the time spent on model optimization and training, saves computational resources to a great extent, and provides a solid foundation for the subsequent prediction model. The CEEMDAN-PCA-GRU model provided by the invention has a running time of 28 s, which is within an allowable and feasible range.
The novel deep learning prediction method of the invention constructs a CEEMDAN-PCA-GRU combined prediction model, making the modeling and prediction of nonlinear, non-stationary and multi-scale complex parking time series feasible and efficient. Within this method and framework, the prediction model is assembled from denoising, deep feature extraction and time series fitting. First, CEEMDAN is applied to decompose fluctuations of different scales from the original parking time series; then PCA is used for data dimension reduction to extract abstract high-level features and provide a smoother sequence for the subsequent prediction model. These features are then input into the GRU network to model the nonlinear relationship between the multivariate time series and the parking series and to predict the final number of parked vehicles. The method is verified on the parking data of a large three-dimensional parking lot, with 7 models built as comparison models; the results show that the prediction results of the proposed model fit the original data more closely, the prediction error is greatly reduced, and, compared with the other comparison models, the method has obvious advantages in prediction accuracy and prediction efficiency. The novel prediction model and framework provided by the invention can provide a valuable reference for the development of the parking prediction field.

Claims (4)

1. A new method of predicting remaining parking spaces, comprising the steps of:
step one: data preprocessing, namely, summarizing the obtained parking big data to form an original parking time sequence, and detecting whether abnormal data exist or not; decomposing the parking original data into a plurality of IMFs with different frequencies by using a CEEMDAN decomposition algorithm, and decomposing fluctuation features and trends with different scales from the original data step by step to eliminate noise;
step two: performing PCA dimension reduction processing on the decomposed data by adopting a PCA algorithm, extracting abstract high-level features, improving the calculation efficiency, and eliminating the correlation and redundancy of different time sequences after CEEMDAN decomposition;
step three: deep learning prediction, normalizing original parking data and reduced-dimension data, converting multidimensional data into an input format of a GRU neural network, and dividing a training set and a testing set; initializing parameters of the GRU neural network, and continuously training a model to achieve ideal precision; and outputting a parking space prediction sequence by the GRU output layer, calculating errors and evaluating the model effect.
2. A new method of predicting remaining parking space as claimed in claim 1, wherein: the CEEMDAN decomposition algorithm in the step one comprises the following specific steps:
1) For a given original sequence x(t), adaptive Gaussian white noise n_j(t) is added to obtain the signal x_j(t), where x_j(t) is expressed as:
x_j(t) = x(t) + p_i·n_j(t), (i = 1,2,…,M; j = 1,2,…,N) (1)
where i denotes the index of the IMF in step one, j denotes the index of the trial, and p_i denotes the standard deviation of the added noise, set by the user;
2) The signal x_j(t) is decomposed using EMD to obtain the IMF component IMF_1^j(t), i.e. the first IMF component of x_j(t) after EMD decomposition;
3) Steps 1) to 2) are repeated N times, adding a different Gaussian white noise n_j(t) each time;
4) The mean value of the IMF components obtained from the N decompositions is taken as the first IMF, expressed as follows:
IMF_1(t) = (1/N)·Σ_{j=1}^{N} IMF_1^j(t) (2)
The residual signal is expressed as:
r_1(t) = x(t) - IMF_1(t) (3)
Then e_1(·) is defined as the operator that extracts the first IMF component derived from EMD. The sequence r_1(t) + p_2·e_1(n_j(t)) is decomposed by EMD, and the second IMF component is expressed as follows:
IMF_2(t) = (1/N)·Σ_{j=1}^{N} e_1(r_1(t) + p_2·e_1(n_j(t))) (4)
The residual signal is expressed as:
r_2(t) = r_1(t) - IMF_2(t) (5)
Similar to the above steps, the k-th residual signal is expressed as:
r_k(t) = r_{k-1}(t) - IMF_k(t) (6)
The (k+1)-th IMF component is expressed as:
IMF_{k+1}(t) = (1/N)·Σ_{j=1}^{N} e_1(r_k(t) + p_{k+1}·e_k(n_j(t))) (7)
Finally, the above steps are repeated until the residual signal r_k(t) becomes a constant function or a monotonic function. Assuming a total of M IMF components, the original sequence can be expressed as follows:
x(t) = Σ_{i=1}^{M} IMF_i(t) + r(t) (8)
where r(t) represents the final residual signal that has become a constant function or a monotonic function.
3. A new method of predicting remaining parking space as claimed in claim 1, wherein: the PCA algorithm in the second step comprises the following specific steps:
1) When the original data set consists of n groups of data, each group having m parameters, the matrix form is as follows:
X = (x_ij)_{n×m}, i = 1,2,…,n; j = 1,2,…,m (9)
2) The matrix X is normalized to obtain the matrix Z:
z_ij = (x_ij - x̄_j) / s_j (10)
where x̄_j and s_j are the mean and standard deviation of the j-th column of X, respectively;
3) The covariance matrix of Z is calculated:
C = (1/(n-1))·Z^T·Z (11)
4) According to |λ_i·E - C| = 0, the eigenvalues λ_i (λ_1 > λ_2 > … > λ_m) of the matrix C and the corresponding eigenvectors u_i are calculated:
u_i = [u_i1 u_i2 … u_im]^T, i = 1,2,…,m (12)
5) The principal components F_i are calculated:
F_i = u_1i·X_1 + u_2i·X_2 + … + u_mi·X_m, i = 1,2,…,m (13)
6) The contribution rate of the i-th principal component is k_i, and the cumulative contribution rate of the first i principal components is p_i:
k_i = λ_i / Σ_{j=1}^{m} λ_j (14)
p_i = Σ_{j=1}^{i} λ_j / Σ_{j=1}^{m} λ_j (15)
4. A new method of predicting remaining parking space as claimed in claim 1, wherein: the specific expression of the GRU neural network in the third step is as follows:
z_t = σ(W_z·[h_{t-1}, x_t]) (16)
r_t = σ(W_r·[h_{t-1}, x_t]) (17)
h'_t = tanh(W_h·[r_t ⊙ h_{t-1}, x_t]) (18)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h'_t (19)
where x_t is the input value, z_t is the update gate state, r_t is the reset gate state, h'_t is the current candidate state, W denotes the weight matrices, h_t is the current hidden state, ⊙ denotes the Hadamard product, and σ denotes the sigmoid function.
CN202310664772.4A 2023-06-06 2023-06-06 Novel method for predicting residual parking space Pending CN116665483A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310664772.4A CN116665483A (en) 2023-06-06 2023-06-06 Novel method for predicting residual parking space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310664772.4A CN116665483A (en) 2023-06-06 2023-06-06 Novel method for predicting residual parking space

Publications (1)

Publication Number Publication Date
CN116665483A true CN116665483A (en) 2023-08-29

Family

ID=87711510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310664772.4A Pending CN116665483A (en) 2023-06-06 2023-06-06 Novel method for predicting residual parking space

Country Status (1)

Country Link
CN (1) CN116665483A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117131369A (en) * 2023-10-27 2023-11-28 福建福昇消防服务集团有限公司 Data processing method and system of intelligent safety management and emergency rescue integrated station
CN117131369B (en) * 2023-10-27 2023-12-22 福建福昇消防服务集团有限公司 Data processing method and system of intelligent safety management and emergency rescue integrated station
CN117649559A (en) * 2023-12-12 2024-03-05 兰州交通大学 Intelligent parking lot parking space detection system based on deep learning
CN117649559B (en) * 2023-12-12 2024-06-14 兰州交通大学 Intelligent parking lot parking space detection system based on deep learning

Similar Documents

Publication Publication Date Title
CN110830303B (en) Network flow prediction method based on bidirectional long-short term memory recurrent neural network
CN110309603B (en) Short-term wind speed prediction method and system based on wind speed characteristics
Wang et al. A compound framework for wind speed forecasting based on comprehensive feature selection, quantile regression incorporated into convolutional simplified long short-term memory network and residual error correction
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN111292525B (en) Traffic flow prediction method based on neural network
CN116665483A (en) Novel method for predicting residual parking space
CN112434848B (en) Nonlinear weighted combination wind power prediction method based on deep belief network
CN112990556A (en) User power consumption prediction method based on Prophet-LSTM model
CN111339712A (en) Method for predicting residual life of proton exchange membrane fuel cell
CN109492748B (en) Method for establishing medium-and-long-term load prediction model of power system based on convolutional neural network
CN109767043B (en) Intelligent modeling and prediction method for big data of power load time sequence
CN111222689A (en) LSTM load prediction method, medium, and electronic device based on multi-scale temporal features
CN115496257A (en) Short-term vehicle speed prediction based on space-time fusion
CN117406100A (en) Lithium ion battery remaining life prediction method and system
CN116187835A (en) Data-driven-based method and system for estimating theoretical line loss interval of transformer area
CN110516792A (en) Non-stable time series forecasting method based on wavelet decomposition and shallow-layer neural network
CN114596726A (en) Parking position prediction method based on interpretable space-time attention mechanism
Sun et al. A compound structure for wind speed forecasting using MKLSSVM with feature selection and parameter optimization
CN117313936A (en) Clean flue gas SO in flue gas desulfurization process of coal-fired power plant 2 Concentration prediction method
CN116933025A (en) Transformer top layer oil temperature prediction method based on VMD and DBO-LSTM-AT
CN116613745A (en) PSO-ELM electric vehicle charging load prediction method based on variation modal decomposition
CN115526430A (en) Load interval prediction method, system and medium for multi-distance clustering and information aggregation
CN113821401A (en) WT-GA-GRU model-based cloud server fault diagnosis method
CN113569460A (en) Real vehicle fuel cell system state multi-parameter prediction method and device
CN113537573A (en) Wind power operation trend prediction method based on dual space-time feature extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination