CN114596726B - Parking berth prediction method based on interpretable space-time attention mechanism - Google Patents

Parking berth prediction method based on interpretable space-time attention mechanism

Info

Publication number
CN114596726B
CN114596726B (application CN202111257194.XA)
Authority
CN
China
Prior art keywords: parameter, data, representing, berth, space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111257194.XA
Other languages
Chinese (zh)
Other versions
CN114596726A (en)
Inventor
王竹荣
赵瑞琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Huaqi Zhongxin Technology Development Co ltd
Original Assignee
Xi'an Huaqi Zhongxin Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Huaqi Zhongxin Technology Development Co ltd filed Critical Xi'an Huaqi Zhongxin Technology Development Co ltd
Priority to CN202111257194.XA priority Critical patent/CN114596726B/en
Publication of CN114596726A publication Critical patent/CN114596726A/en
Application granted granted Critical
Publication of CN114596726B publication Critical patent/CN114596726B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/14: Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/047: Probabilistic or stochastic networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a parking berth prediction method based on an interpretable spatio-temporal attention mechanism, which comprises the following steps: 1) determining a parking lot to be predicted and collecting its berth data; 2) preprocessing the collected parking lot berth data; 3) constructing a berth prediction model based on a bidirectional long short-term memory (BiLSTM) neural network; 4) capturing global key feature information in a spatio-temporal attention mechanism layer; 5) dividing the preprocessed data into a training set and a test set, training the constructed spatio-temporal attention neural network model with the training-set data, and searching for the global optimum with an adaptive moment estimation optimization algorithm; 6) running the test-set data through the resulting model and recording the results to obtain the berth prediction, the spatial attention weights and the temporal attention weights. The invention solves the low precision and unstable predictions of existing parking berth prediction methods.

Description

Parking berth prediction method based on interpretable space-time attention mechanism
Technical Field
The invention belongs to the technical field of berth prediction methods, and particularly relates to a parking berth prediction method based on an interpretable space-time attention mechanism.
Background
In recent years, with continuing urbanization, private car ownership among urban residents has grown dramatically, and with it the demand for parking spaces. On-street parking is usually limited, and the time and fuel spent searching for a free on-street space often exceed the cost of paying for a parking lot; meanwhile, the search for on-street parking harms both the fluency of traffic and air quality.
Against this background, the concept of the smart city has been proposed. One main aspect of the smart city is to solve existing urban problems, such as the shortage of parking spaces, through the Internet of Things (IoT). The central idea is to monitor data such as traffic conditions, air temperature, pollution levels and parking area utilization with sensors in order to know the state of the city. Monitoring the utilization of urban parking lots through the IoT can therefore help relieve the shortage of parking spaces and realize the intended intelligent effect. Although monitoring each individual parking space is impractical, the future occupancy rate of a parking lot can be predicted analytically by counting the vehicles entering and leaving an off-street facility.
Predicting the parking-space occupancy of a parking lot is key to realizing the full benefit of smart parking. Berth prediction is a typical time-series prediction problem. Time-series predictions can be classified into long-term (multi-step) and short-term (single-step) predictions according to the prediction target. Unlike classification and regression problems, time-series prediction adds the complexity of order and time dependence between observations, which makes it harder than general prediction problems.
Current prediction methods can be divided into statistics-based and machine-learning-based approaches. Statistics-based methods include exponential smoothing, Markov prediction and the autoregressive integrated moving average (ARIMA) model; machine-learning-based methods include the BP neural network, wavelet neural networks, regression trees, support vector machines, recurrent neural networks, long short-term memory neural networks, and others.
However, the high prediction accuracy of both families of methods rests on a sufficiently small number of prediction steps, typically 1 to 3. As the number of prediction steps grows, accuracy degrades sharply. These methods also cannot accurately predict systems influenced by many uncertain factors.
Disclosure of Invention
To address the problems in the prior art, the invention aims to provide a parking berth prediction method based on an interpretable spatio-temporal attention mechanism, solving the low precision and unstable results of existing parking berth prediction methods while making the temporal attention model interpretable.
In order to achieve the above object, the present invention adopts a technical scheme that a parking berth prediction method based on an interpretable space-time attention mechanism comprises the following steps:
step S1: determining a parking lot to be predicted, and collecting parking lot berth data;
step S2: preprocessing the parking lot berth data collected in the step S1, including data cleaning, feature extraction and normalization processing;
step S3: constructing a berth prediction model based on a bidirectional long short-term memory (BiLSTM) neural network;
step S4: on the basis of a berth prediction model based on BiLSTM in the step S3, a space-time attention mechanism layer is constructed to capture global key feature information;
step S5: dividing the data preprocessed in step S2 into a training set and a test set at a ratio of 4:1, training the spatio-temporal attention neural network model constructed in step S4 with the training-set data, searching for the global optimum with an adaptive moment estimation (Adam) optimization algorithm, selecting hyper-parameters by the control-variable method, and using mean squared error (MSE) as the loss function; the Adam optimization algorithm accelerates convergence using momentum and an adaptive learning rate, continuously optimizing the neural network model;
step S6: running the test-set data through the spatio-temporal attention neural network model optimized in step S5, and recording the results to obtain the berth prediction, the spatial attention weights and the temporal attention weights; the trained neural network model reflects the berth prediction well through the changes of the spatio-temporal attention weights, that is, the berth prediction is interpreted through the changes of those weights.
Further, the parking lot berth data collected in step S1 include: the parking lot name (ParkingName), the collection time (LastUpdated), the berth occupancy count (Occupancy) and the parking lot berth capacity (Capacity).
Further, in step S2, data cleaning replaces missing data with the mean of the neighbouring values and removes abnormal data; feature extraction extracts the effective spatio-temporal features of the collection time (LastUpdated), the berth occupancy count (Occupancy) and the parking lot berth capacity (Capacity); normalization applies a min-max linear transformation to the original data so that the result is mapped into the interval [0, 1].
Further, in step S3, the BiLSTM comprises two unidirectional LSTM chain structures, formed by combining a forward LSTM and a backward LSTM. The input of each unidirectional LSTM at the current time step t is $x_t = [x_1, x_2, \dots, x_w]$, where w is the length of the sliding window. The LSTM structure is:

$$i_t = \sigma(w_i x_t + u_i h_{t-1} + b_i) \quad (1)$$

$$\tilde{c}_t = \tanh(w_c x_t + u_c h_{t-1} + b_c) \quad (2)$$

$$f_t = \sigma(w_f x_t + u_f h_{t-1} + b_f) \quad (3)$$

$$o_t = \sigma(w_o x_t + u_o h_{t-1} + b_o) \quad (4)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \quad (5)$$

$$h_t = o_t \odot \tanh(c_t) \quad (8)$$

In formulas (1)-(8): the parameter $i_t$ is the input gate at the current time step t; σ is the sigmoid function; $x_t$ is the input sequence at time step t; $h_{t-1}$ is the hidden state of the previous time step; $f_t$ is the forget gate at time step t; $o_t$ is the output gate at time step t; tanh(·) is the activation function; $\tilde{c}_t$ is the candidate memory cell at time step t; $w_i$, $w_f$, $w_o$ and $w_c$ are the weight parameters of the input gate, forget gate, output gate and memory-cell transmission; $u_i$, $u_f$, $u_o$ and $u_c$ are the corresponding state-transition weight parameters; $b_i$, $b_f$, $b_o$ and $b_c$ are the corresponding bias parameters; $c_t$ is the cell state at the current time step t.
Further, the method for constructing the spatio-temporal attention mechanism in step S4 is as follows:

Step 4.1: construct the spatial attention module; the input sequence of the current time step, $x_t = [x_1, x_2, \dots, x_w]_{w \times 1}$, is first passed through a sigmoid activation function and then through softmax regularization to obtain the spatial attention weights;

Step 4.2: the spatial attention weights $s_t$ obtained above are combined with the input sequence $x_t$ through a Hadamard product to obtain $x_t'$;

Step 4.3: the weighted input sequence $x_t'$ is fed into the BiLSTM to obtain the hidden-layer data $h_t$;

Step 4.4: construct the temporal attention module; the hidden-layer state data obtained in step 4.3 are activated by a relu function and then regularized to obtain the temporal attention weights $t_t$.
Further, the formulas used in step 4.1 are (9)-(10):

$$\sigma_t = \mathrm{sigmoid}(x_t) = [\sigma_1, \sigma_2, \dots, \sigma_w]_{w \times 1} \quad (9)$$

In formula (9), the parameter $\sigma_t$ is the result of applying the activation function sigmoid to $x_t$, with $\sigma_1, \sigma_2, \dots, \sigma_w$ the results for $x_1, x_2, \dots, x_w$ respectively; the sigmoid activation is applied to each element of the input sequence $x_t$ and maps its values to between 0 and 1;

$$s_t = \mathrm{softmax}(\sigma_t) = [s_1, s_2, \dots, s_w]_{w \times 1} \quad (10)$$

In formula (10), the parameter $s_t$ is the result of applying the regularization function softmax to $\sigma_t$, with $s_1, s_2, \dots, s_w$ the regularized results of $\sigma_1, \sigma_2, \dots, \sigma_w$; the softmax maps the values of $\sigma_t$ to between 0 and 1 so that they sum to 1.
Further, the formula used in step 4.2 is (11):

$$x_t' = s_t \odot x_t = [s_1 x_1, s_2 x_2, \dots, s_w x_w]_{w \times 1} \quad (11)$$

In formula (11), $x_t'$ is the result of the Hadamard product of the spatial attention weights $s_t$ and the input sequence $x_t$.
Further, the formula used in step 4.3 is (12):

$$h_t = [h_1, h_2, \dots, h_w]_{w \times s} \quad (12)$$

In formula (12), $h_t$ is the hidden-layer output of the BiLSTM and s is the number of hidden-layer neurons.
Further, the formulas used in step 4.4 are (13)-(14):

$$r_t = \mathrm{relu}(h_t) = \max(0, h_t) = [r_1, r_2, \dots, r_w]_{w \times 1} \quad (13)$$

In formula (13), the parameter $r_t$ is the result of activating $h_t$, with $r_1, r_2, \dots, r_w$ the relu outputs for $h_1, h_2, \dots, h_w$; the relu function maps the hidden-layer data to non-negative values.

$$t_t = \mathrm{softmax}(r_t) = [t_1, t_2, \dots, t_w]_{w \times 1} \quad (14)$$

In formula (14), the parameter $t_t$ is the result of regularizing $r_t$ obtained from formula (13), with $t_1, t_2, \dots, t_w$ the regularized results of $r_1, r_2, \dots, r_w$.
Compared with the prior art, the invention has at least the following beneficial effects:
the parking berth prediction method based on the interpretable space-time attention mechanism establishes a model through a bi-directional long-short-term memory neural network BiLSTM module, and a door mechanism is introduced into BiLSTM for controlling the circulation and loss of characteristics to avoid gradient elimination and gradient explosion, so that the problem of insufficient learning ability due to long-term dependence is solved.
Furthermore, the invention builds the model with a spatio-temporal attention module: the spatial attention module computes weights over the parking berth data so that the more relevant data are fed into the neural network model, and the temporal attention module computes weights over the model's hidden-layer data so that the more heavily weighted data are output. The spatio-temporal attention mechanism captures global key feature information and learns the correlation between the input sequence and the target sequence, solving the unstable predictions and low precision of prior berth-occupancy methods.
Furthermore, the deep learning algorithm adopted by the invention has good data characteristic extraction capability and predictive capability of fitting a nonlinear complex system when processing a large amount of parking lot data.
Furthermore, the invention can be used as a parking guidance information system (parking guidance information system, PGIS) to assist users in decision making, not only can provide berth information for users in time and reduce parking time and fuel consumption, but also can estimate berth requirements for a period of time in the future and relieve the pressure of traffic running around a parking lot.
Furthermore, the parking guidance method and the parking guidance system effectively improve the berth utilization rate of the parking lot through efficient parking guidance, provide more travel planning choices for users, and have good economic and social benefits.
Drawings
FIG. 1 is a flow chart of a parking lot prediction method based on an interpretable spatiotemporal attention mechanism of the present invention;
FIG. 2 is a block diagram of BiLSTM in the parking-berth prediction method based on interpretable spatio-temporal attention mechanisms of the present invention;
FIG. 3 is a block diagram of LSTM in the parking-berth prediction method of the present invention based on an interpretable spatiotemporal attention mechanism;
FIG. 4 is a block diagram of a spatial attention mechanism in a parking lot prediction method based on an interpretable spatiotemporal attention mechanism of the present invention;
FIG. 5 is a block diagram of a time attention mechanism in a parking lot prediction method based on an interpretable time-space attention mechanism of the present invention;
FIG. 6 compares the root mean square error, with and without the spatio-temporal attention mechanism, of BiLSTM prediction models with different iteration numbers in the embodiment;
FIG. 7 compares the root mean square error, with and without the spatio-temporal attention mechanism, of BiLSTM prediction models with different hidden-layer dimensions in the embodiment;
FIG. 8 compares the root mean square error, with and without the spatio-temporal attention mechanism, of BiLSTM prediction models with different learning rates in the embodiment;
FIG. 9 is a graph of predicted versus actual values in an embodiment;
FIG. 10 is a graph of spatial attention weight change for prediction steps 1, 7, 14, 21, 28, 35 in the example;
FIG. 11 is a graph of temporal attention weight variation of a spatiotemporal attention mechanism model in an embodiment;
FIG. 12 is a graph of temporal attention weight change without the spatio-temporal attention mechanism.
Detailed Description
The present invention will be further described with reference to the accompanying drawings. The following examples are given under the premise of the present technical solution, with detailed embodiments and specific operating procedures, but the scope of the present invention is not limited to these examples.
1. Principles of the invention
The invention discloses a parking berth prediction method based on an interpretable space-time attention mechanism, which is shown in fig. 1 and comprises the following steps:
step 1: aiming at a parking lot to be predicted, uniformly collecting parking lot berth occupation condition data in a fixed time period, wherein the collected parking lot berth data comprises: the name of the parking lot, the collection time LastUpdated, the parking space residence number Occupiecy and the parking space Capacity capability.
Step 2: preprocessing the parking lot berth data collected in the step 1, including data cleaning, feature extraction and normalization processing, to obtain an experimental data set, wherein the specific method comprises the following steps:
Step 2.1: data cleaning mainly handles missing values and outliers. Missing data occur frequently in practice; they easily lose valuable information and aggravate the uncertainty of the patterns behind the data samples. There are generally two approaches to missing data: filling the gap, or discarding the whole sample. Discarding the whole sample is usually a poor choice; it is more meaningful to choose a suitable filling method for the actual application scenario. Common filling methods include special-value filling, mean filling and neighbour filling. Because the berth data vary along the time axis, the invention fills a missing value with the mean of its temporal neighbours, as shown in formula (15):

$$x_t' = \frac{x_{t-1} + x_{t+1}}{2} \quad (15)$$

In formula (15), the parameter $x_t'$ is the filled value of the missing datum at time t, $x_{t-1}$ is the data value at time t-1, and $x_{t+1}$ is the data value at time t+1.
Step 2.2: principal component analysis (PCA) is adopted for feature extraction, extracting the effective spatio-temporal features such as the collection time (LastUpdated), the berth occupancy count (Occupancy) and the parking lot berth capacity (Capacity). Let the input sample set be $D = (x_1, x_2, \dots, x_n)$ and the output dimension be m. The PCA algorithm is as follows:

A. Center the sample set:

$$x_i' = x_i - \bar{x} \quad (16)$$

In formula (16), $x_i'$ is the centered value of the sample, $x_i$ is each sample in the sample set, and $\bar{x}$ is the mean of all samples.
B. Compute the covariance matrix of the samples:

$$C = \frac{1}{m} D D^{T} \quad (17)$$

In formula (17), m is the output dimension, D is the matrix form of the input samples, $D^T$ is the transpose of D, and C is the covariance matrix of the input samples D.
C. Perform eigenvalue decomposition of the covariance matrix:

$$C\beta_i = \lambda_i \beta_i, \quad i = 1, 2, \dots, n, \qquad \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n \quad (18)$$

In formula (18), C is the covariance matrix of the input samples D, $\lambda_i$ is the i-th eigenvalue, and $\beta_i$ is the eigenvector corresponding to the eigenvalue $\lambda_i$.
D. Take the unit eigenvectors $w_1, w_2, \dots, w_m$ corresponding to the m largest eigenvalues.
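As an illustration of steps A-D, the following is a minimal NumPy sketch of the PCA feature extraction; the sample matrix, the component count m and the placeholder data are illustrative assumptions, not values from the patent.

```python
# A minimal PCA sketch under the assumption that samples are stacked as rows.
import numpy as np

def pca(samples: np.ndarray, m: int) -> np.ndarray:
    """Project n samples onto the m principal components (steps A-D)."""
    # A. center the sample set, formula (16)
    centered = samples - samples.mean(axis=0)
    # B. covariance matrix of the centered samples, formula (17)
    #    (NumPy normalizes by n-1 rather than the text's 1/m; the
    #    principal directions are unchanged)
    cov = np.cov(centered, rowvar=False)
    # C. eigenvalue decomposition, formula (18); eigh returns ascending eigenvalues
    eigvals, eigvecs = np.linalg.eigh(cov)
    # D. unit eigenvectors of the m largest eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:m]]
    return centered @ top

# e.g. reduce features derived from (LastUpdated, Occupancy, Capacity) to m = 2
features = np.random.rand(1276, 3)   # placeholder for the real berth dataset
reduced = pca(features, m=2)
print(reduced.shape)                 # (1276, 2)
```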
Step 2.3: normalization makes otherwise incomparable data comparable while maintaining the relative relationship between the compared data. To accelerate the convergence of the prediction model, min-max normalization applies a linear transformation to the original data so that the result is mapped into the interval [0, 1]. The conversion function is shown in formula (19):

$$x' = \frac{x - \min(x)}{\max(x) - \min(x)} \quad (19)$$

In formula (19), the parameter $x'$ is the normalized value, x is the original berth occupancy datum, min(x) is the minimum of the original occupancy data, and max(x) is the maximum of the original occupancy data.
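The cleaning and normalization steps above (formulas (15) and (19)) can be sketched in a few lines of Python; the NaN encoding for missing records and the example series are assumptions for illustration.

```python
# A minimal sketch of formulas (15) and (19), assuming missing records are
# interior points marked with NaN and have valid neighbours.
import numpy as np

def fill_missing(x: np.ndarray) -> np.ndarray:
    """Replace each missing value with the mean of its two neighbours, formula (15)."""
    x = x.astype(float).copy()
    for t in np.flatnonzero(np.isnan(x)):
        x[t] = (x[t - 1] + x[t + 1]) / 2.0
    return x

def min_max(x: np.ndarray) -> np.ndarray:
    """Map the series linearly onto [0, 1], formula (19)."""
    return (x - x.min()) / (x.max() - x.min())

occupancy = np.array([40, 44, np.nan, 72, 85, 95])  # illustrative occupancy series
print(min_max(fill_missing(occupancy)))
```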
Step 3: construct a berth prediction model based on the bidirectional long short-term memory (BiLSTM) neural network.
As shown in FIG. 2, biLSTM is a bi-directional LSTM network, consisting of forward LSTM L And backward LSTM R A combination used to model context information, wherein the parameters LSTM L Is in front ofTo LSTM, parameter (x 0 ,x 1 ,…,x t ) To input a sequence, the parameter LSTM R For backward LSTM, parameter h Lt For the hidden state of the forward LSTM, parameter h Rt For the hidden state of backward LSTM, the hidden state of BiLSTM outputs { h ] Lt ,h Rt }。
As can be seen from the above, BiLSTM is composed of two unidirectional LSTM chain structures. FIG. 3 shows the internal structure of the LSTM, where the symbol ⊙ denotes the dot product, the symbol ⊕ denotes addition, tanh denotes the hyperbolic tangent activation function, σ denotes the sigmoid activation function, $i_t$ is the input gate at the current time step t, $f_t$ is the forget gate at time step t, $o_t$ is the output gate at time step t, $c_{t-1}$ is the memory cell of the previous time step, $h_{t-1}$ is the hidden state of the previous time step, $h_t$ is the hidden state of the current time step t, and $c_t$ is the cell state at the current time step t. The input of each unidirectional LSTM at the current time step t is $x_t = [x_1, x_2, \dots, x_w]$ (w is the length of the sliding window). The LSTM structure is:

$$i_t = \sigma(w_i x_t + u_i h_{t-1} + b_i) \quad (20)$$

$$\tilde{c}_t = \tanh(w_c x_t + u_c h_{t-1} + b_c) \quad (21)$$

$$f_t = \sigma(w_f x_t + u_f h_{t-1} + b_f) \quad (22)$$

$$o_t = \sigma(w_o x_t + u_o h_{t-1} + b_o) \quad (23)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \quad (24)$$

$$h_t = o_t \odot \tanh(c_t) \quad (27)$$

In formulas (20)-(27), the parameter $x_t$ is the input sequence at the current time step t, tanh(·) is the activation function, and $\tilde{c}_t$ is the candidate memory cell at time step t; $w_i$, $w_f$, $w_o$ and $w_c$ are the weight parameters of the input gate, forget gate, output gate and memory-cell transmission; $u_i$, $u_f$, $u_o$ and $u_c$ are the corresponding state-transition weight parameters; $b_i$, $b_f$, $b_o$ and $b_c$ are the corresponding bias parameters.
The sigmoid activation function is a common s-shaped function that maps a real number to between 0 and 1; because it and its inverse are both monotonically increasing, it is often used as a neural network activation function. tanh is the hyperbolic tangent function, which maps an input value to between -1 and 1. In the LSTM, the input gate $i_t$ selectively records new information into the cell state, the forget gate $f_t$ selectively forgets information in the cell state, and the output gate $o_t$ outputs information from the cell state.
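To make the gate equations concrete, here is a minimal NumPy sketch of one LSTM time step following formulas (20)-(27); the dimensions and the randomly initialized parameters are illustrative, not trained values.

```python
# One LSTM time step; random weights stand in for trained parameters.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    i_t = sigmoid(p["w_i"] @ x_t + p["u_i"] @ h_prev + p["b_i"])    # input gate, (20)
    c_hat = np.tanh(p["w_c"] @ x_t + p["u_c"] @ h_prev + p["b_c"])  # candidate cell, (21)
    f_t = sigmoid(p["w_f"] @ x_t + p["u_f"] @ h_prev + p["b_f"])    # forget gate, (22)
    o_t = sigmoid(p["w_o"] @ x_t + p["u_o"] @ h_prev + p["b_o"])    # output gate, (23)
    c_t = f_t * c_prev + i_t * c_hat                                # cell state update, (24)
    h_t = o_t * np.tanh(c_t)                                        # hidden state, (27)
    return h_t, c_t

d, s = 1, 4                                  # input and hidden sizes (illustrative)
rng = np.random.default_rng(0)

def shape_of(k):
    return (s, d) if k[0] == "w" else (s, s) if k[0] == "u" else (s,)

p = {k: rng.normal(size=shape_of(k))
     for k in "w_i w_f w_o w_c u_i u_f u_o u_c b_i b_f b_o b_c".split()}
h, c = np.zeros(s), np.zeros(s)
for x in (0.40, 0.44, 0.58):                 # a short normalized occupancy window
    h, c = lstm_step(np.array([x]), h, c, p)
print(h)
```

A BiLSTM runs two such chains, one over the window in forward order and one in reverse, and outputs both hidden states, as described for FIG. 2 above.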
Step 4: on the basis of the BiLSTM-based berth prediction model of step 3, construct a spatio-temporal attention mechanism layer to capture global key feature information.
The BiLSTM neural network model receives the input sequence, the memory cell of the previous time step and the hidden state of the previous time step, and obtains the memory cell $c_t$ and hidden state $h_t$ of the current time step through the controlled transformations of the input, forget and output gates. As shown in FIG. 4, the input sequence $x_t$ generates the spatial attention weights through sigmoid and softmax activation. As shown in FIG. 5, the hidden state $h_t$ generates the temporal attention weights through relu and softmax activation. The specific steps are as follows:
Step 4.1: construct the spatial attention module. The input sequence of the current time step, $x_t = [x_1, x_2, \dots, x_w]_{w \times 1}$, is first passed through a sigmoid activation function and then through softmax regularization to obtain the spatial attention weights, as in formulas (28)-(29):

$$\sigma_t = \mathrm{sigmoid}(x_t) = [\sigma_1, \sigma_2, \dots, \sigma_w]_{w \times 1} \quad (28)$$

In formula (28), $\sigma_t$ is the result of applying the activation function sigmoid to $x_t$, with $\sigma_1, \sigma_2, \dots, \sigma_w$ the results for $x_1, x_2, \dots, x_w$; the sigmoid activation is applied to each element of the input sequence $x_t$ and maps its values to between 0 and 1.

$$s_t = \mathrm{softmax}(\sigma_t) = [s_1, s_2, \dots, s_w]_{w \times 1} \quad (29)$$

In formula (29), $s_t$ is the result of applying the regularization function softmax to $\sigma_t$, with $s_1, s_2, \dots, s_w$ the regularized results of $\sigma_1, \sigma_2, \dots, \sigma_w$; the softmax maps the values of $\sigma_t$ to between 0 and 1 so that they sum to 1.
Step 4.2, the spatial attention weight s obtained above is used t And input sequence x t Performing Hadamard product operation as in equation (30):
x t ′=s t x t =[s 1 x 1 ,s 2 x 2 ,s w x w ] w×1 (30)
in formula (30), x t ' is the spatial attention weight s t And input sequence x t The result of the Hadamard product operation is specifically the multiplication of each term of the corresponding sequence.
Step 4.3, input sequence x with spatial attention weights t ' input into BiLSTM, get hidden layer state data, as in equation (31):
h t =[h 1 ,h 2 ,…,h w ] w×s (31)
in formula (31), h t Is the hidden layer output in BiLSTM, s is the hidden layer neuron number.
Step 4.4, constructing a time attention module, namely activating the hidden layer state data obtained in step 4.3 through a relu function, and then performing regularization processing to obtain time attention weights, wherein the time attention weights are as shown in formulas (32) - (33):
r t =relu(h t )=max(0,h t )=[r 1 ,r 2 ,…,r w ] w×1 (32)
in equation (32), parameter r t Is h t Results after activation, r 1 Is h 1 The results obtained by the procedure of the relu,r 2 is h 2 Results obtained by relu, r w Is h w Results obtained with relu. Hidden layer state data h t With the relu activation, the relu function will change the value to a non-negative number.
In formula (33), t t R is t T 1 Is r 1 Regularized result, t 2 Is r 2 Regularizing the obtained result, t w Is r w Regularizing the obtained result. R is obtained for equation (32) t Regularization processing is carried out to obtain a time attention weight t t
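A compact NumPy sketch of steps 4.1-4.4 follows; the BiLSTM is stubbed out with random hidden states, and the w×1 reduction of $h_t$ before the relu is an assumption (here a mean over the hidden dimension), so the snippet only illustrates the attention arithmetic of formulas (28)-(33).

```python
# Spatial and temporal attention arithmetic, formulas (28)-(33).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

w, s = 18, 128                           # window length and hidden size from the experiments
rng = np.random.default_rng(0)

x_t = rng.random(w)                      # normalized occupancy window
s_t = softmax(sigmoid(x_t))              # spatial attention weights, (28)-(29)
x_weighted = s_t * x_t                   # Hadamard product, (30)

h_t = rng.random((w, s))                 # stand-in for BiLSTM hidden states, (31)
r_t = np.maximum(0.0, h_t.mean(axis=1))  # relu over a w x 1 summary of h_t, (32)
t_t = softmax(r_t)                       # temporal attention weights, (33)
print(s_t.sum(), t_t.sum())              # both sum to 1 after softmax
```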
Step 5: the data preprocessed in step 2 are divided into a training set and a test set at a ratio of 4:1. The spatio-temporal attention neural network model constructed in step 4 is trained with the training-set data; an adaptive moment estimation (Adam) optimization algorithm searches for the global optimum, hyper-parameters are selected by the control-variable method, and the loss function is the mean squared error (MSE). The Adam optimization algorithm accelerates convergence using momentum and an adaptive learning rate, continuously optimizing the neural network model.
The specific optimization process is to input the training-set data into the spatio-temporal attention neural network model constructed in step 4 for training, computing at each step the mean squared error between the prediction and the berth occupancy in the training data, back-propagating its gradient through the model parameters, and updating the weight parameters ($w_i$, $w_f$, $w_o$, $w_c$), the state-transition weight parameters ($u_i$, $u_f$, $u_o$, $u_c$), the bias parameters ($b_i$, $b_f$, $b_o$, $b_c$), and so on. The number of hidden neurons (E_hidden), the learning rate (learning_rate) and the number of iterations (Epochs) of the BiLSTM are selected by the control-variable method. The accuracy of the model's predictions improves over repeated iterations, with the parameters adjusted according to the mean squared error of each training pass. The number of iterations is fixed once the mean squared error is small and stable, making the model's predictions more efficient and accurate. After training, the optimized neural network model is obtained.
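A hedged PyTorch sketch of this training loop is given below, using the Adam optimizer, the MSE loss and the hyper-parameters reported later (E_hidden = 128, Epochs = 130, learning_rate = 0.001, dropout 0.5); the model class is a simplified stand-in that omits the attention layers, and the random tensors replace the real windowed training set.

```python
import torch
import torch.nn as nn

class BiLSTMPredictor(nn.Module):
    """Simplified stand-in for the spatio-temporal attention network."""
    def __init__(self, e_hidden: int = 128):
        super().__init__()
        self.bilstm = nn.LSTM(1, e_hidden, bidirectional=True, batch_first=True)
        self.drop = nn.Dropout(0.5)
        self.out = nn.Linear(2 * e_hidden, 1)

    def forward(self, x):                      # x: (batch, time_step, 1)
        h, _ = self.bilstm(x)                  # hidden states of both directions
        return self.out(self.drop(h[:, -1]))   # predict from the last time step

# placeholder windows standing in for the 1020 preprocessed training records
xs, ys = torch.rand(1020, 18, 1), torch.rand(1020, 1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(xs, ys), batch_size=32, shuffle=True)

model = BiLSTMPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam: momentum + adaptive rate
loss_fn = nn.MSELoss()                                      # mean squared error loss

for epoch in range(130):                                    # Epochs = 130
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                # gradients w.r.t. the w, u and b parameters
        optimizer.step()
```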
Step 6: run the test-set data through the spatio-temporal attention neural network model optimized in step 5, and record the results to obtain the berth predictions, the spatial attention weights and the temporal attention weights.
In the spatial attention module, the input sequence is processed by sigmoid and softmax. Because the sigmoid function is dense in the space of continuous functions, the input features can be output between 0 and 1 and the data do not diverge easily during propagation. The softmax computes probabilities over the data sequence; values with larger probability have a larger influence on the prediction result. Changes in the spatial attention weights affect the features the model extracts from the input data, and the weights obtained by the spatial attention module capture the trends of the data well. With a data length of 18, the input data sequences at prediction steps 1, 7, 14, 21, 28 and 35 are selected, and interpretability is analyzed in combination with the changes of the spatial attention weights.
In the temporal attention module, the hidden-layer data are processed by relu and softmax. The relu linear activation is fast to compute and may output 0 for a neuron, which induces sparsity in the neural network, reduces the interdependence of parameters and alleviates overfitting. The softmax computes probabilities over the data sequence; values with larger probability have a larger influence on the prediction result. Changes in the temporal attention weights affect the model's prediction of the berth data.
As shown in formula (34), the temporal attention weights $t_t$ are combined with the hidden-layer state data $h_t$ by matrix multiplication to obtain $h_{ij}$:

$$h_{ij} = h_t^{T} t_t, \quad h_{ij} \in R^{s \times 1} \quad (34)$$

In formula (34), $t_t$ is the temporal attention weight, $h_t$ is the hidden-layer state data, $h_{ij}$ is the result of the matrix multiplication of $t_t$ and $h_t$ (the subscripts i, j denote the i-th row and j-th column), and $R^{s \times 1}$ indicates that $h_{ij}$ is an s-dimensional vector.

After $h_{ij}$ is obtained, as shown in formula (35), $h_{ij}$ is passed through the output layer of the neural network to obtain the berth prediction; concretely, $h_{ij}$ is matrix-multiplied with an s-dimensional weight vector drawn from a truncated normal distribution:

$$p = o(h_{ij}), \quad p \in R^{1 \times 1} \quad (35)$$

In formula (35), p is the result of passing $h_{ij}$ through the output layer of the neural network, and $R^{1 \times 1}$ indicates that p is a scalar; p is the prediction result.
The relationship between the berth prediction and the temporal attention weights is given by formulas (34)-(35). In theory, the temporal attention weights vary with the input berth data. The effectiveness of the proposed model is illustrated by comparing the temporal-attention-weight change curves with and without the spatio-temporal attention mechanism.
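The following NumPy fragment sketches formulas (34)-(35) under the reading above; the Dirichlet draw for $t_t$ and the rejection-sampled truncated-normal output weights are illustrative assumptions.

```python
# Folding the temporal weights into the hidden states and producing the scalar output.
import numpy as np

rng = np.random.default_rng(0)
w, s = 18, 128
h_t = rng.random((w, s))            # BiLSTM hidden-layer state data
t_t = rng.dirichlet(np.ones(w))     # temporal attention weights (non-negative, sum to 1)

h_ij = h_t.T @ t_t                  # formula (34): h_ij in R^{s x 1}

# s-dimensional output weights from a truncated normal: values beyond 2 std resampled
v = rng.normal(size=s)
mask = np.abs(v) > 2.0
while mask.any():
    v[mask] = rng.normal(size=mask.sum())
    mask = np.abs(v) > 2.0

p = v @ h_ij                        # formula (35): scalar berth prediction p
print(float(p))
```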
The high-precision prediction of the berth data can be realized through the weight capture of the spatial attention module and the time attention module.
2. Simulation experiment
The parking lot berth occupation condition data are collected in the parking lot to be predicted, as shown in table 1:
table 1 parking lot berth occupancy data
Parking berth data are collected from the parking lot every 30 minutes; Occupancy is the number of occupied berths at the moment each record is taken.
An experimental dataset is obtained by applying the preprocessing measures of data cleaning, feature extraction and normalization to the collected data. The dataset is divided into a training set and a test set at a ratio of 4:1: of the 1276 records collected, 1020 form the training set and 256 the test set.
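A short sketch of how the 1276 records might be windowed and split is shown below, assuming a sliding window of time_step = 18 (the data length used later); the random series is a placeholder for the normalized occupancy data.

```python
# Windowing the series and splitting 4:1 into training and test sets.
import numpy as np

def make_windows(series: np.ndarray, time_step: int = 18):
    """Slide a window over the series: each window predicts the next value."""
    xs = np.stack([series[i:i + time_step] for i in range(len(series) - time_step)])
    ys = series[time_step:]
    return xs, ys

occupancy = np.random.rand(1276)      # placeholder for the normalized occupancy series
split = int(len(occupancy) * 0.8)     # 4:1 ratio -> 1020 training / 256 test records
x_train, y_train = make_windows(occupancy[:split])
x_test, y_test = make_windows(occupancy[split:])
print(x_train.shape, x_test.shape)    # (1002, 18) (238, 18)
```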
A berth prediction model based on the bidirectional long short-term memory network BiLSTM is constructed; its structure comprises two unidirectional LSTMs. On this basis, a spatio-temporal attention mechanism layer is constructed to capture global key feature information; it comprises a spatial attention module and a temporal attention module. The number of hidden neurons (E_hidden), the learning rate (learning_rate) and the number of iterations (Epochs) of the BiLSTM are selected by the control-variable method.
The dataset is fed into the network for training; with the number of hidden neurons (E_hidden) and the learning rate (learning_rate) fixed, the variation of the root mean square error (RMSE) of the BiLSTM berth prediction model is tested over different iteration numbers (Epochs). As shown in FIG. 6, the lstm curve is measured from the BiLSTM neural network model without the spatio-temporal attention mechanism and the sta_lstm curve from the BiLSTM model with it. The figure shows that the RMSE is lowest when Epochs is 130, giving the better training effect. The root mean square error is given by formula (36):
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}} \quad (36)$$

In formula (36), n is the number of samples, $\hat{y}_i$ is the predicted value of the i-th sample, and $y_i$ is its true value. RMSE measures the deviation between predicted and true values; the smaller the RMSE, the smaller the model's prediction error and the better the model.
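Formula (36) is a one-liner in NumPy; the sample values below are taken from the worked examples later in this section.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error, formula (36)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# true/predicted pairs from prediction steps 1, 7 and 14 below
print(rmse(np.array([49, 151, 177]), np.array([46, 146, 175])))  # ~3.56
```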
With the number of iterations (Epochs) and the learning rate (learning_rate) fixed, the variation of the RMSE of the BiLSTM berth prediction model is tested over different numbers of hidden neurons (E_hidden). As shown in FIG. 7, the lstm curve is measured without the spatio-temporal attention mechanism and the sta_lstm curve with it. The figure shows that the root mean square error is lowest when E_hidden is 128, giving the better training effect.
With the number of hidden neurons (E_hidden) and the number of iterations (Epochs) fixed, the variation of the RMSE of the BiLSTM berth prediction model is tested over different learning rates (learning_rate). As shown in FIG. 8, the lstm curve is measured without the spatio-temporal attention mechanism and the sta_lstm curve with it. The figure shows that the root mean square error is lowest when learning_rate is 0.001, giving a good training effect.
In deep learning, the model learns the general patterns of all samples from the training set, which easily leads to overfitting or underfitting. Underfitting can be overcome by increasing the number of training iterations; overfitting can be overcome by enlarging the dataset and introducing regularization methods. The invention adopts neuron dropout: during training, connections between neurons are randomly dropped with probability 0.5.
The optimized training model is saved, the test-set data are then run through it, and the model's predictions are reported, comparing the effect with and without the spatio-temporal attention mechanism.
The hyper-parameters of this example are: prediction step length 36, number of hidden neurons (E_hidden) 128, number of iterations (Epochs) 130, learning rate (learning_rate) 0.001, dropout 0.5 and data length (time_step) 18. With these hyper-parameters fixed, the BiLSTM neural network model is trained to a good prediction effect, and the test-set data are then fed into the model to obtain the predictions.
Table 2 compares the prediction results with and without the spatio-temporal attention mechanism; the table shows that adding the mechanism gives markedly better predictions. With a prediction step length of 36, the model with the spatio-temporal attention mechanism produced 33 predictions with an error below 10 and an average error of 4.6, while the model without it produced 26 predictions with an error below 10 and an average error of 10.67. The prediction results of the spatio-temporal attention model are shown in FIG. 9.
Table 2. Prediction comparison with and without the spatio-temporal attention mechanism
The method can therefore predict the parking lot's berth occupancy 36 target steps into the future while maintaining high prediction precision; the errors of the model's predictions are relatively stable, and the model achieves a good fit.
The variation of the spatial attention weights is analyzed for the data sequences at prediction steps 1, 7, 14, 21, 28 and 35, in combination with the predicted and true values.
When the prediction step is 1, the data sequence is: 40 44 58 72 85 95 105 130 141 150 153 167 174 175 168 164 159 150, true value: 49, predicted value: 46.
as can be seen from the first graph in fig. 10, when the prediction step is 1, the spatial attention weights of the first four data 40, 44, 58, 72 of the data sequence are larger, and the influence on the prediction result is also larger. The data sequence with larger weight is basically matched with the prediction result.
When the prediction step is 7, the data sequence is: 130 141 150 153 167 174 175 168 164 159 150 46 56 71 84 89 116 134, true value: 151, predicted value: 146.
As can be seen from the second graph in FIG. 10, when the prediction step is 7, the spatial attention weight of the first datum of the sequence, 130, is the largest and has the greatest influence on the prediction result; the predicted value is 146.
When the prediction step is 14, the data sequence is: 168 164 159 150 46 56 71 84 89 116 134 146 162 180 191 191 185 173, true value: 177, predicted value: 175.
as can be seen from the third graph in fig. 10, when the prediction step is 14, the spatial attention weight of the three data 191, 185, 173 after the data sequence is larger, and the influence on the prediction result is also larger. The predicted result is 175 and the error is 2.
When the prediction step is 21, the data sequence is: 84 89 116 134 146 162 180 191 191 185 173 175 169 168 166 55 60 75, true value: 75, predicted value: 80.
as can be seen from the fourth graph in fig. 10, when the prediction step is 21, the spatial attention weight of the 16 th data 55 of the data sequence is larger, and the influence on the prediction result is also larger. The predicted result is 80 and the error is 5.
When the prediction step is 28, the data sequence is: 191 191 185 173 175 169 168 166 55 60 75 80 83 101 111 140 152 160, true value: 170, predicted value: 173.
As can be seen from the fifth graph in FIG. 10, when the prediction step is 28, the spatial attention weights of the first three data of the sequence, 191, 191 and 185, are larger, and their influence on the prediction result is also larger. The predicted result is 173, with an error of 3.
When the prediction step is 35, the data sequence is: 166 55 60 75 80 83 101 111 140 152 160 173 176 171 165 149 141 126, true value: 111, predicted value: 115.
as can be seen from the sixth graph in fig. 10, when the prediction step is 35, the spatial attention weights of the fifth data 80 and the ninth data 140 of the data sequence are larger, and the influence on the prediction result is also larger. The predicted result is 115 and the error is 4.
From the above analysis, it can be seen that the spatial attention module can capture the effective information of the input sequence, so as to achieve a better prediction effect.
The prediction result and the time attention weight can be obtained by the BiLSTM neural network model, and the specific time attention weight is shown in the table 3.
Table 3 temporal attention weights
Prediction step size Time attention weighting Prediction step size Time attention weighting Prediction step size Time attention weighting
1 0.90046465 13 0.89176965 25 0.83872008
2 0.86617154 14 0.89536774 26 0.84665161
3 0.83993518 15 0.90152711 27 0.85470814
4 0.84633106 16 0.9050622 28 0.86113214
5 0.8483004 17 0.90600705 29 0.86848521
6 0.85123992 18 0.90672338 30 0.87479812
7 0.85662454 19 0.90131193 31 0.88095808
8 0.86199999 20 0.86308038 32 0.88461047
9 0.86963457 21 0.8381806 33 0.88824958
10 0.87839156 22 0.84075493 34 0.89198482
11 0.88378865 23 0.83980519 35 0.89123964
12 0.88736379 24 0.83798218 36 0.88929909
The temporal attention weights in the table are shown more intuitively as a graph. As can be seen in FIG. 11, the variation of the temporal attention weights roughly matches the variation of the berth prediction results in FIG. 9; the berth prediction results are thus interpreted through the changes of the temporal attention weights.
For the neural network model without the spatio-temporal attention mechanism, the variation of the temporal attention weights is shown in FIG. 12. The relationship between the weight changes and the changes in the berth prediction results is not evident, which further illustrates the effectiveness of the spatio-temporal attention network.
The prediction precision and stability of the invention in long-term prediction are obviously improved compared with the LSTM model.
Various modifications and variations of the present invention will be apparent to those skilled in the art in light of the foregoing teachings and are intended to be included within the scope of the following claims.

Claims (4)

1. The parking berth prediction method based on the interpretable space-time attention mechanism is characterized by comprising the following steps of:
step S1: determining a parking lot to be predicted, and collecting parking lot berth data;
step S2: preprocessing the parking lot berth data collected in the step S1, wherein the preprocessing comprises data cleaning, feature extraction and normalization processing;
step S3: constructing a berth prediction model based on a two-way long-short-term memory neural network;
step S4: on the basis of the berth prediction model of step S3, constructing a spatio-temporal attention mechanism layer to capture global key feature information;
step S5: dividing the data preprocessed in step S2 into a training set and a test set at a ratio of 4:1, training the spatio-temporal attention neural network model constructed in step S4 with the training-set data, searching for the global optimum with an adaptive moment estimation optimization algorithm, and selecting hyper-parameters by the control-variable method; the adaptive moment estimation optimization algorithm accelerates convergence using momentum and an adaptive learning rate, continuously optimizing the neural network model;
step S6: running test set data in the space-time attention mechanism neural network model optimized in the step S5, and counting and recording running results to obtain berth predicting results, space attention weights and time attention weights;
the method for constructing the space-time attention mechanism in the step S4 is as follows:
step 4.1: constructing a spatial attention module; the input sequence of the current time step, $x_t = [x_1, x_2, \dots, x_w]$, is first passed through a sigmoid activation function and then through softmax regularization to obtain the spatial attention weights, where w is the length of the sliding window;

step 4.2: the spatial attention weights $s_t$ obtained above are combined with the input sequence $x_t$ through a Hadamard product to obtain $x_t'$;

step 4.3: the weighted input sequence $x_t'$ is fed into the two-way long-short-term memory neural network to obtain the hidden-layer state data $h_t$;

step 4.4: constructing a temporal attention module; the hidden-layer state data obtained in step 4.3 are activated by a relu function and then regularized to obtain the temporal attention weights $t_t$;
the formulas used in step 4.1 are (9)-(10):

$$\sigma_t = \mathrm{sigmoid}(x_t) = [\sigma_1, \sigma_2, \dots, \sigma_w]_{w \times 1} \quad (9)$$

in formula (9), the parameter $\sigma_t$ is the result of applying the activation function sigmoid to $x_t$, with $\sigma_1, \sigma_2, \dots, \sigma_w$ the results for $x_1, x_2, \dots, x_w$ respectively; the sigmoid activation is applied to each element of the input sequence $x_t$ and maps its values to between 0 and 1;

$$s_t = \mathrm{softmax}(\sigma_t) = [s_1, s_2, \dots, s_w]_{w \times 1} \quad (10)$$

in formula (10), the parameter $s_t$ is the result of applying the regularization function softmax to $\sigma_t$, with $s_1, s_2, \dots, s_w$ the regularized results of $\sigma_1, \sigma_2, \dots, \sigma_w$; the softmax maps the values of $\sigma_t$ to between 0 and 1 so that they sum to 1;
the formula used in step 4.2 is (11):

$$x_t' = s_t \odot x_t = [s_1 x_1, s_2 x_2, \dots, s_w x_w]_{w \times 1} \quad (11)$$

in formula (11), $x_t'$ is the result of the Hadamard product of the spatial attention weights $s_t$ and the input sequence $x_t$;
the formula used in step 4.3 is (12):

$$h_t = [h_1, h_2, \dots, h_w]_{w \times s} \quad (12)$$

in formula (12), $h_t$ is the hidden-layer state output of the two-way long-short-term memory neural network, and s is the number of hidden-layer neurons;
the formulas used in step 4.4 are (13)-(14):

$$r_t = \mathrm{relu}(h_t) = \max(0, h_t) = [r_1, r_2, \dots, r_w]_{w \times 1} \quad (13)$$

in formula (13), the parameter $r_t$ is the result of activating $h_t$, with $r_1, r_2, \dots, r_w$ the relu outputs for $h_1, h_2, \dots, h_w$;

$$t_t = \mathrm{softmax}(r_t) = [t_1, t_2, \dots, t_w]_{w \times 1} \quad (14)$$

in formula (14), the parameter $t_t$ is the result of regularizing $r_t$ obtained from formula (13), with $t_1, t_2, \dots, t_w$ the regularized results of $r_1, r_2, \dots, r_w$.
2. The parking berth prediction method based on interpretable space-time attention mechanism of claim 1, wherein the parking lot berth data collected in step S1 include: the parking lot name, the collection time, the berth occupancy count and the parking lot berth capacity.
3. The method for predicting parking space based on interpretable spatiotemporal attention mechanism according to claim 1, wherein the method for data cleaning in step S2 is to replace missing data with an average value of the neighborhood, and remove abnormal data;
the feature extraction method is to extract the effective space-time features of the collection time, the berth residence number and the berth capacity of the parking lot;
the normalization processing applies a min-max linear transformation to the parking lot berth data so that the result is mapped into the interval [0, 1].
4. The parking berth prediction method based on interpretable space-time attention mechanism of claim 1, wherein the two-way long-short-term memory neural network in step S3 comprises two unidirectional LSTM chain structures, formed by combining a forward LSTM and a backward LSTM; the input of each unidirectional LSTM at the current time step t is $x_t = [x_1, x_2, \dots, x_w]$, where w is the length of the sliding window, and the LSTM structure is:

$$i_t = \sigma(w_i x_t + u_i h_{t-1} + b_i) \quad (1)$$

$$\tilde{c}_t = \tanh(w_c x_t + u_c h_{t-1} + b_c) \quad (2)$$

$$f_t = \sigma(w_f x_t + u_f h_{t-1} + b_f) \quad (3)$$

$$o_t = \sigma(w_o x_t + u_o h_{t-1} + b_o) \quad (4)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \quad (5)$$

$$h_t = o_t \odot \tanh(c_t) \quad (8)$$

in formulas (1)-(8): the parameter $i_t$ is the input gate at the current time step t; σ is the sigmoid function; $x_t$ is the input sequence at the current time step t; $h_{t-1}$ is the hidden state of the previous time step; $f_t$ is the forget gate at time step t; $o_t$ is the output gate at time step t; tanh(·) is the activation function; $\tilde{c}_t$ is the candidate memory cell at time step t; $w_i$, $w_f$, $w_o$ and $w_c$ are the weight parameters of the input gate, forget gate, output gate and memory-cell transmission; $u_i$, $u_f$, $u_o$ and $u_c$ are the corresponding state-transition weight parameters; $b_i$, $b_f$, $b_o$ and $b_c$ are the corresponding bias parameters; $c_t$ is the cell state at the current time step t.
CN202111257194.XA 2021-10-27 2021-10-27 Parking berth prediction method based on interpretable space-time attention mechanism Active CN114596726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111257194.XA CN114596726B (en) 2021-10-27 2021-10-27 Parking berth prediction method based on interpretable space-time attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111257194.XA CN114596726B (en) 2021-10-27 2021-10-27 Parking berth prediction method based on interpretable space-time attention mechanism

Publications (2)

Publication Number Publication Date
CN114596726A CN114596726A (en) 2022-06-07
CN114596726B true CN114596726B (en) 2024-01-19

Family

ID=81813871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111257194.XA Active CN114596726B (en) 2021-10-27 2021-10-27 Parking berth prediction method based on interpretable space-time attention mechanism

Country Status (1)

Country Link
CN (1) CN114596726B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019509B (en) * 2022-06-22 2023-08-04 同济大学 Parking lot vacant parking space prediction method and system based on two-stage attention LSTM
CN116884523B (en) * 2023-09-07 2023-11-21 山东科技大学 Multi-parameter prediction method for water quality of marine pasture

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619442A (en) * 2019-09-26 2019-12-27 浙江科技学院 Vehicle berth prediction method based on reinforcement learning
CN110909953A (en) * 2019-12-03 2020-03-24 浙江科技学院 Parking position prediction method based on ANN-LSTM
CN111260124A (en) * 2020-01-11 2020-06-09 大连理工大学 Chaos time sequence prediction method based on attention mechanism deep learning
CN111915059A (en) * 2020-06-29 2020-11-10 西安理工大学 Method for predicting occupancy of Seq2Seq berth based on attention mechanism

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619442A (en) * 2019-09-26 2019-12-27 浙江科技学院 Vehicle berth prediction method based on reinforcement learning
CN110909953A (en) * 2019-12-03 2020-03-24 浙江科技学院 Parking position prediction method based on ANN-LSTM
CN111260124A (en) * 2020-01-11 2020-06-09 大连理工大学 Chaos time sequence prediction method based on attention mechanism deep learning
CN111915059A (en) * 2020-06-29 2020-11-10 西安理工大学 Method for predicting occupancy of Seq2Seq berth based on attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vehicle behavior prediction based on attention mechanism; 蔡英凤 et al.; Journal of Jiangsu University (Natural Science Edition); 2020-03-10 (No. 02); pp. 6-11 *

Also Published As

Publication number Publication date
CN114596726A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
Wu et al. An adaptive deep transfer learning method for bearing fault diagnosis
CN109492822B (en) Air pollutant concentration time-space domain correlation prediction method
CN111915059B (en) Attention mechanism-based Seq2Seq berth occupancy prediction method
CN109583565B (en) Flood prediction method based on attention model long-time and short-time memory network
CN114596726B (en) Parking berth prediction method based on interpretable space-time attention mechanism
CN114220271A (en) Traffic flow prediction method, equipment and storage medium based on dynamic space-time graph convolution cycle network
CN109522961B (en) Semi-supervised image classification method based on dictionary deep learning
CN113554466B (en) Short-term electricity consumption prediction model construction method, prediction method and device
CN112949828A (en) Graph convolution neural network traffic prediction method and system based on graph learning
CN114676742A (en) Power grid abnormal electricity utilization detection method based on attention mechanism and residual error network
CN112396587B (en) Method for detecting congestion degree in bus compartment based on collaborative training and density map
CN114386324A (en) Ultra-short-term wind power segmented prediction method based on turning period identification
CN113554148A (en) BiLSTM voltage deviation prediction method based on Bayesian optimization
CN115100709B (en) Feature separation image face recognition and age estimation method
CN111898825A (en) Photovoltaic power generation power short-term prediction method and device
CN111222689A (en) LSTM load prediction method, medium, and electronic device based on multi-scale temporal features
CN113988426A (en) Electric vehicle charging load prediction method and system based on FCM clustering and LSTM
CN113065704A (en) Hyper-parameter optimization and post-processing method of non-invasive load decomposition model
CN112598165A (en) Private car data-based urban functional area transfer flow prediction method and device
CN115131618A (en) Semi-supervised image classification method based on causal reasoning
CN115759461A (en) Internet of things-oriented multivariate time sequence prediction method and system
CN116187835A (en) Data-driven-based method and system for estimating theoretical line loss interval of transformer area
CN113743008A (en) Fuel cell health prediction method and system
CN112927507B (en) Traffic flow prediction method based on LSTM-Attention
CN114580262A (en) Lithium ion battery health state estimation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230613

Address after: 710000 No. B49, Xinda Zhongchuang space, 26th Street, block C, No. 2 Trading Plaza, South China City, international port district, Xi'an, Shaanxi Province

Applicant after: Xi'an Huaqi Zhongxin Technology Development Co.,Ltd.

Address before: 710048 No. 5 Jinhua South Road, Shaanxi, Xi'an

Applicant before: XI'AN University OF TECHNOLOGY

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant