CN116050484A - Quantity prediction method, system, equipment and medium - Google Patents
- Publication number
- CN116050484A (application number CN202310071372.2A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- basis function
- function neural
- radial basis
- improved
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The application discloses a quantity prediction method, system, device, and medium. The quantity prediction method comprises the following steps: initializing parameters of an improved radial basis function neural network, wherein the parameters at least comprise hidden-layer center vectors, base width vectors, network weights, a learning factor, a momentum factor, and the covariance matrices corresponding to noise in a Kalman filtering algorithm; inputting the original data into the improved radial basis function neural network to obtain a first output value; performing Kalman filtering on the first output value to obtain a first correction value; and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data. Self-selection of network structure parameters is realized by adding the momentum factor and the learning factor, and a Kalman filter is adopted to filter the overall model output, eliminating the influence of external disturbances in the data on the model prediction result.
Description
Technical Field
The present application relates to the field of data analysis, and in particular, to a method, system, device, and medium for predicting a quantity.
Background
There are many methods for predicting quantities. The main idea is to mine historical data, analyze the relationships and rules among them, and thereby predict the quantity in a certain future time period. The main methods can be roughly divided into two types: data volume prediction models based on time series and data volume prediction methods based on neural networks.
A representative data volume prediction model based on time series is the autoregressive integrated moving average model (ARIMA), which predicts future quantities by analyzing the law by which the quantity changes over time. It performs well over relatively large time spans, but the ARIMA model assumes in advance a linear relationship among the data; if a nonlinear relationship exists among the data, the performance of the model degrades.
Data volume prediction models based on neural networks, mainly artificial neural networks, can mine nonlinear relationships existing among the data well, but the accuracy of such models is not high.
Disclosure of Invention
In order to solve the above problems, the present application proposes a quantity prediction method, a system, a device, and a medium, including:
initializing parameters of an improved radial basis function neural network, wherein the parameters at least comprise hidden-layer center vectors, base width vectors, network weights, a learning factor, a momentum factor, and the covariance matrices corresponding to noise in a Kalman filtering algorithm; inputting the original data into the improved radial basis function neural network to obtain a first output value; performing Kalman filtering on the first output value to obtain a first correction value; and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
In one example, before the initializing improves radial basis function neural network parameters, the method further comprises: constructing an original radial basis function neural network; the original radial basis function neural network comprises an input layer, an implicit layer and an output layer; and adding the learning factor and the momentum factor into the original radial basis function neural network to obtain the improved radial basis function neural network.
In one example, after the learning factor and the momentum factor are added to the original radial basis function neural network, the model training formula of the improved radial basis function neural network is:

x(k) = x(k−1) + Δx(k),  Δx(k) = −η·∂E(k)/∂x(k) + α·Δx(k−1)

wherein x(k) is the output of node k through the input layer of the original radial basis function neural network; Δx(k) is the output change value, α is the momentum factor, and η is the learning factor; E(k) is the error function of node k.

In one example, the error function may be calculated by the following formula:

E(k) = 0.5·[y(k) − y_m(k)]²

wherein y(k) represents the output of neural network output-layer node k at time k, and y_m(k) represents the desired output of output-layer node k. The desired output may be calculated by the following formula:

y_m(k) = Σ_{j=1}^{m} w_j(k)·h_j(X)

wherein w_j(k) represents the link weight from h_j to y_m, and h_j(X) is the output of the j-th node of the hidden layer. The output of the j-th node of the hidden layer may be calculated by the following formula:

h_j(X) = exp(−‖X − C_j‖² / (2·b_j²))

wherein C_j represents the center vector of the j-th hidden-layer node, ‖·‖ represents the Euclidean norm, b_j represents the base width vector of the j-th hidden-layer node, and X = [x_1, x_2, x_3, ..., x_n]^T is the input vector.
In one example, after the obtaining the improved radial basis function neural network, the method further comprises: adding a Kalman filter to the improved radial basis function neural network; the Kalman filter is mainly divided into a time updating process and a prediction updating process.
In one example, the time update process may be calculated by the following formulas:

x̂(k|k−1) = F·x̂(k−1|k−1) + B_k·u_k
P(k|k−1) = F·P(k−1|k−1)·Fᵀ + Q_k

The prediction update process may be calculated by the following formulas:

K_k = P(k|k−1)·H_kᵀ·[H_k·P(k|k−1)·H_kᵀ + R_k]⁻¹
x̂(k|k) = x̂(k|k−1) + K_k·[z_k − H_k·x̂(k|k−1)]
P(k|k) = (I − K_k·H_k)·P(k|k−1)

wherein F is the state transition matrix, B_k is the model parameter matrix, x̂(k|k−1) represents the state prediction value, x̂(k|k) the corrected state estimate, P(k|k) and P(k|k−1) represent covariance matrices, K_k represents the Kalman gain, R_k and Q_k represent the covariance matrices corresponding to noise, z_k is the observation, and H_k represents the observation matrix.
In one example, after constructing the original radial basis function neural network, the method further comprises:
selecting a Gaussian function as the excitation function of the hidden layer.
The present application also provides a quantity prediction system, the system comprising: an initialization parameter module, which initializes parameters of an improved radial basis function neural network, wherein the parameters at least comprise hidden-layer center vectors, base width vectors, network weights, a learning factor, a momentum factor, and the covariance matrices corresponding to noise in a Kalman filtering algorithm; an input module, which inputs the original data into the improved radial basis function neural network to obtain a first output value; a filtering processing module, which performs Kalman filtering on the first output value to obtain a first correction value; and an iteration module, which inputs the first correction value into the improved radial basis function neural network and iterates through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
The present application also provides a quantity prediction apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform: initializing parameters of an improved radial basis function neural network, wherein the parameters at least comprise hidden-layer center vectors, base width vectors, network weights, a learning factor, a momentum factor, and the covariance matrices corresponding to noise in a Kalman filtering algorithm; inputting the original data into the improved radial basis function neural network to obtain a first output value; performing Kalman filtering on the first output value to obtain a first correction value; and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
The present application also provides a non-volatile computer storage medium storing computer executable instructions, the computer executable instructions being configured to: initialize parameters of an improved radial basis function neural network, wherein the parameters at least comprise hidden-layer center vectors, base width vectors, network weights, a learning factor, a momentum factor, and the covariance matrices corresponding to noise in a Kalman filtering algorithm; input the original data into the improved radial basis function neural network to obtain a first output value; perform Kalman filtering on the first output value to obtain a first correction value; and input the first correction value into the improved radial basis function neural network, and iterate through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
The method provided by the application has the following beneficial effects: the momentum factor and the learning factor are added to the original radial basis function neural network structure, so that the network structure parameters are automatically selected; and filtering the whole model output by adopting a Kalman filter, and eliminating the influence of external disturbance in data on a model prediction result.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of a method for predicting quantity according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a quantity prediction system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a quantity predicting device in an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art from the embodiments of the present application without creative effort fall within the protection scope of the present application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flow chart illustrating a method for predicting a quantity according to one or more embodiments of the present disclosure. The method can be applied to different data volume predictions, the process can be executed by computing devices in the corresponding field, and certain input parameters or intermediate results in the process allow manual intervention adjustment to help improve accuracy.
The analysis method according to the embodiments of the present application may be executed by a terminal device or a server; this is not specifically limited in this application. For ease of understanding and description, the following embodiments are described in detail taking a server as an example.
It should be noted that the server may be a single device, or may be a system formed by a plurality of devices, that is, a distributed server, which is not specifically limited in this application.
As shown in fig. 1, an embodiment of the present application provides a quantity prediction method, including:
s101: initializing improved radial basis function neural network parameters, wherein the parameters at least comprise hidden layer center vectors, basis width vectors, network weights, learning factors, momentum factors and covariance matrixes corresponding to noise in a Kalman filtering algorithm.
First, each parameter of the improved radial basis function neural network is initialized. The parameters should include at least the hidden-layer center vectors, base width vectors, network weights, learning factor, momentum factor, and the covariance matrices corresponding to noise in the Kalman filtering algorithm.
The radial basis function neural network needs to be constructed and improved prior to initializing the parameters. First, an original radial basis function neural network is constructed, the original radial basis function neural network comprising an input layer, an hidden layer and an output layer. And adding a learning factor and a momentum factor into the original radial basis function neural network to obtain an improved radial basis function neural network.
Specifically, after the learning factor and the momentum factor are added, the model training formula of the radial basis function neural network is:

x(k) = x(k−1) + Δx(k),  Δx(k) = −η·∂E(k)/∂x(k) + α·Δx(k−1)

wherein x(k) is the output of node k through the input layer of the original radial basis function neural network; Δx(k) is the output change value, α is the momentum factor, and η is the learning factor; E(k) is the error function of node k.
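As an illustrative sketch (not part of the patent text; the function name and the values of η and α are invented for the example), the momentum-smoothed gradient-descent update above can be written as:

```python
def momentum_step(x_prev, dx_prev, grad, eta=0.1, alpha=0.9):
    """One momentum-smoothed gradient-descent update.

    x_prev:  previous value x(k-1)
    dx_prev: previous update Δx(k-1)
    grad:    gradient ∂E(k)/∂x evaluated at x(k-1)
    eta:     learning factor; alpha: momentum factor
    """
    dx = -eta * grad + alpha * dx_prev   # Δx(k)
    return x_prev + dx, dx               # x(k) = x(k-1) + Δx(k)

# Minimise E(x) = 0.5 * (x - 2)^2, whose gradient is (x - 2)
x, dx = 5.0, 0.0
for _ in range(300):
    x, dx = momentum_step(x, dx, grad=x - 2.0)
# x is now close to the minimiser 2.0
```

The momentum term α·Δx(k−1) carries part of the previous step forward, which damps oscillation and speeds convergence relative to plain gradient descent.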
Further, assume that the input vector x= [ X ] 1 ,x 2 ,x 3 ,...,x n ] T Where n represents the number of input layer units, h= [ H ] 1 ,h 2 ,h 3 ,...,h m ] T For the hidden layer radial basis vector, wherein m represents the number of hidden layer units, a Gaussian function is selected as a hidden layer excitation function, and the output of the j-th node of the hidden layer of the network is as follows:
wherein C is j Represents the center vector of the j-th node of the hidden layer, and II represents the European norm, b j A base width vector representing the j-th node of the hidden layer.
The output of the network is a weighted algebraic sum of the hidden-layer node outputs, with the output expression:

y_m(k) = Σ_{j=1}^{m} w_j(k)·h_j(X)

wherein w_j(k) represents the link weight from h_j to y_m. The corrected error function of network output node k is:

E(k) = 0.5·[y(k) − y_m(k)]²

wherein y(k) represents the output of neural network output-layer node k at time k, and y_m(k) represents the desired output of output-layer node k.
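The forward pass just described — Gaussian hidden units followed by a weighted sum — can be sketched as follows (illustrative code, not from the patent; all names and numeric values are invented for the example):

```python
import numpy as np

def rbf_forward(X, C, b, w):
    """Forward pass of a Gaussian RBF network: hidden outputs h_j(X), then a weighted sum."""
    # h_j(X) = exp(-||X - C_j||^2 / (2 * b_j^2))
    h = np.exp(-np.sum((X - C) ** 2, axis=1) / (2.0 * b ** 2))
    return float(w @ h)  # y_m(k) = sum_j w_j(k) * h_j(X)

def error(y, y_m):
    """E(k) = 0.5 * [y(k) - y_m(k)]^2"""
    return 0.5 * (y - y_m) ** 2

# Two hidden nodes in a 2-D input space
C = np.array([[0.0, 0.0], [1.0, 1.0]])  # center vectors C_j
b = np.array([1.0, 1.0])                # base widths b_j
w = np.array([1.0, 0.0])                # link weights w_j
y_m = rbf_forward(np.array([0.0, 0.0]), C, b, w)  # input placed at the first center
```

An input at a node's own center yields h_j = exp(0) = 1 for that node, so with weights [1, 0] the network output here equals 1.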
In one embodiment, after the radial basis function neural network is constructed and improved, Kalman filtering is added to the model in order to eliminate the interference of noise in the data. Autoregressive state estimation is realized through mutual updating and feedback between the measurement process and the prediction process, so that the data volume of a future period can be predicted from a series of incomplete, noisy historical data. The Kalman filter is mainly divided into a time update process and a prediction update process. The time update is calculated as follows:

x̂(k|k−1) = F·x̂(k−1|k−1) + B_k·u_k
P(k|k−1) = F·P(k−1|k−1)·Fᵀ + Q_k

The prediction update process may be calculated by the following formulas:

K_k = P(k|k−1)·H_kᵀ·[H_k·P(k|k−1)·H_kᵀ + R_k]⁻¹
x̂(k|k) = x̂(k|k−1) + K_k·[z_k − H_k·x̂(k|k−1)]
P(k|k) = (I − K_k·H_k)·P(k|k−1)

wherein F is the state transition matrix, B_k is the model parameter matrix acting on the input u_k, x̂(k|k−1) represents the state prediction value, x̂(k|k) the corrected state estimate, P(k|k) and P(k|k−1) represent covariance matrices, K_k represents the Kalman gain, R_k and Q_k represent the covariance matrices corresponding to measurement and process noise, z_k is the observation, and H_k represents the observation matrix.
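One iteration of the two update processes above can be sketched as follows (an illustrative implementation, not the patent's; all matrices and the constant-signal demo are invented for the example):

```python
import numpy as np

def kalman_step(x_hat, P, z, F, B, u, H, Q, R):
    """One Kalman iteration: time update, then prediction (measurement) update."""
    # Time update
    x_pred = F @ x_hat + B @ u                     # x̂(k|k-1)
    P_pred = F @ P @ F.T + Q                       # P(k|k-1)
    # Prediction update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain K_k
    x_new = x_pred + K @ (z - H @ x_pred)          # x̂(k|k)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred  # P(k|k)
    return x_new, P_new

# 1-D demo: repeatedly observe a constant value 5.0
F = H = np.eye(1)
B, u = np.zeros((1, 1)), np.zeros(1)
Q, R = np.eye(1) * 1e-3, np.eye(1) * 0.1
x_hat, P = np.zeros(1), np.eye(1)
for _ in range(50):
    x_hat, P = kalman_step(x_hat, P, np.array([5.0]), F, B, u, H, Q, R)
# x_hat has converged near 5.0
```

With a small process-noise covariance Q and a larger measurement-noise covariance R, the gain K_k shrinks over the iterations and the estimate settles onto the observed level.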
S102: the raw data is input into the modified radial basis function neural network to obtain a first output value.
After the parameters are initialized, the raw data may be input into the improved radial basis function neural network to obtain a first output value. The model predicts the data volume at a certain future point from this raw data; the first output value is the actual output value of the model.
S103: and carrying out Kalman filtering processing on the first output value to obtain a first correction value.
Kalman filtering is performed on the actual output value of the model, and the calculated error correction quantity E(k) is then input to the next round of control.
S104: inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
Under the action of the momentum factor and the learning factor, the network output weights w, the node center vectors C, and the base width vectors b at the current moment are iterated according to the gradient descent method.
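Steps S101–S104 can be tied together in a loose end-to-end sketch (illustrative only: for brevity only the output weights w are updated by gradient descent with momentum, whereas the method also iterates C and b and feeds the Kalman-corrected value back into the network; all data and hyperparameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, C, b):
    """Gaussian hidden-layer outputs h_j(X)."""
    return np.exp(-np.sum((X - C) ** 2, axis=1) / (2.0 * b ** 2))

# Toy data: noisy samples of a smooth 1-D target
xs = np.linspace(0.0, 1.0, 50)[:, None]
ys = np.sin(2 * np.pi * xs[:, 0]) + 0.05 * rng.standard_normal(50)

# S101: initialise centers, base widths, weights, learning/momentum factors
C = np.linspace(0.0, 1.0, 8)[:, None]
b = np.full(8, 0.15)
w = np.zeros(8)
dw = np.zeros(8)
eta, alpha = 0.1, 0.5

# Scalar Kalman state used to smooth the model output (S103)
y_filt, P, Q, R = 0.0, 1.0, 1e-3, 0.05

for _ in range(200):
    for X, y in zip(xs, ys):
        h = hidden(X, C, b)
        y_out = w @ h                     # S102: first output value
        # S103: scalar Kalman correction of the output
        P_pred = P + Q
        K = P_pred / (P_pred + R)
        y_filt = y_filt + K * (y_out - y_filt)
        P = (1 - K) * P_pred
        # S104: momentum gradient step on the output weights
        grad = (y_out - y) * h            # dE/dw for E = 0.5 * (y_out - y)^2
        dw = -eta * grad + alpha * dw
        w = w + dw
```

After training, the weighted sum of Gaussian units fits the noisy sine samples closely, showing the iterate-until-convergence loop in miniature.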
As shown in fig. 2, the embodiment of the present application further provides a quantity prediction system, including:
initializing a parameter module 201, and initializing improved radial basis function neural network parameters, wherein the parameters at least comprise hidden layer center vectors, basis width vectors, network weights, learning factors, momentum factors and covariance matrixes corresponding to noise in a Kalman filtering algorithm;
the input module 202 inputs the original data into the improved radial basis function neural network to obtain a first output value;
the filtering processing module 203 performs kalman filtering processing on the first output value to obtain a first correction value;
and the iteration module 204, which inputs the first correction value into the improved radial basis function neural network and iterates through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
As shown in fig. 3, the embodiment of the present application further provides a quantity predicting device, including:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
initializing parameters of an improved radial basis function neural network, wherein the parameters at least comprise hidden-layer center vectors, base width vectors, network weights, a learning factor, a momentum factor, and the covariance matrices corresponding to noise in a Kalman filtering algorithm; inputting the original data into the improved radial basis function neural network to obtain a first output value; performing Kalman filtering on the first output value to obtain a first correction value; and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
The embodiments also provide a non-volatile computer storage medium storing computer executable instructions configured to:
initializing parameters of an improved radial basis function neural network, wherein the parameters at least comprise hidden-layer center vectors, base width vectors, network weights, a learning factor, a momentum factor, and the covariance matrices corresponding to noise in a Kalman filtering algorithm; inputting the original data into the improved radial basis function neural network to obtain a first output value; performing Kalman filtering on the first output value to obtain a first correction value; and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
All embodiments in the application are described in a progressive manner, and identical and similar parts of all embodiments are mutually referred, so that each embodiment mainly describes differences from other embodiments. In particular, for the apparatus and medium embodiments, the description is relatively simple, as it is substantially similar to the method embodiments, with reference to the section of the method embodiments being relevant.
The devices and media provided in the embodiments of the present application are in one-to-one correspondence with the methods, so that the devices and media also have similar beneficial technical effects as the corresponding methods, and since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the devices and media are not described in detail herein.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.
Claims (10)
1. A method of quantity prediction, comprising:
initializing improved radial basis function neural network parameters, wherein the parameters at least comprise hidden layer center vectors, basis width vectors, network weights, learning factors, momentum factors and covariance matrixes corresponding to noise in a Kalman filtering algorithm;
inputting the original data into the improved radial basis function neural network to obtain a first output value;
carrying out Kalman filtering processing on the first output value to obtain a first correction value;
and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, to obtain a predicted value corresponding to the original data.
2. The method of claim 1, wherein before initializing the improved radial basis function neural network parameters, the method further comprises:
constructing an original radial basis function neural network; the original radial basis function neural network comprises an input layer, an implicit layer and an output layer;
and adding the learning factor and the momentum factor into the original radial basis function neural network to obtain the improved radial basis function neural network.
3. The method of claim 2, wherein the model training formula of the modified radial basis function neural network after adding the learning factor and the momentum factor to the original radial basis function neural network is:
x(k) = x(k−1) + Δx(k)
wherein x (k) is the output of the node k through the original radial basis function neural network input layer; Δx (k) is an output change value, α is a momentum factor, and η is a learning factor; e (k) is the error function of node k.
4. A method according to claim 3, wherein the error function is calculated by the following formula:
E(k) = 0.5·[y(k) - y_m(k)]²
wherein y(k) represents the actual output of neural network output layer node k at time k, and y_m(k) represents the desired output of neural network output layer node k;
the desired output may be calculated by the following formula:
y_m(k) = Σ_j w_j(k)·h_j(k)
wherein w_j(k) represents the connection weight between h_j and y_m, and h_j(k) is the output of the j-th node of the hidden layer;
the output of the j-th node of the hidden layer can be calculated by the following formula:
h_j(k) = exp(-‖X - C_j‖² / (2·b_j²))
wherein C_j represents the center vector of the j-th node of the hidden layer, ‖·‖ represents the Euclidean norm, b_j represents the basis width of the j-th node of the hidden layer, and X = [x_1, x_2, x_3, ..., x_n]^T is the input vector.
5. A method according to claim 3, wherein after the improved radial basis function neural network is obtained, the method further comprises:
adding a Kalman filter to the improved radial basis function neural network;
the Kalman filter is mainly divided into a time update process and a prediction update process.
6. The method of claim 5, wherein the time update process is calculated by the following formulas:
x̂_k|k-1 = F·x̂_k-1|k-1 + B_k·u_k
P_k|k-1 = F·P_k-1|k-1·F^T + Q_k
the prediction update process may be calculated by the following formulas:
K_k = P_k|k-1·H_k^T·(H_k·P_k|k-1·H_k^T + R_k)^(-1)
x̂_k|k = x̂_k|k-1 + K_k·(z_k - H_k·x̂_k|k-1)
P_k|k = (I - K_k·H_k)·P_k|k-1
wherein F is the state transition matrix, B_k is the model parameter matrix, u_k is the input at time k, x̂_k|k-1 represents the state prediction value, x̂_k|k represents the state estimate, P_k|k and P_k|k-1 represent covariance matrices, K_k represents the Kalman gain, R_k and Q_k represent the covariance matrices corresponding to the noise, H_k represents the observation matrix, and z_k is the observation at time k.
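One such time-update plus prediction-update cycle can be sketched compactly, assuming the standard discrete Kalman formulation; the observation z and input u are named here for illustration, and the constant-state example values are invented:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, B=None, u=None):
    """One time update followed by one prediction (measurement) update."""
    # time update
    x_pred = F @ x + (B @ u if B is not None else 0.0)
    P_pred = F @ P @ F.T + Q
    # prediction update
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain K_k
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred     # P_k|k = (I - K_k H_k) P_k|k-1
    return x_new, P_new

# filter noisy observations of a constant scalar state (true value 1.0)
x, P = np.array([0.0]), np.eye(1)
F = H = np.eye(1)
Q, R = np.eye(1) * 1e-4, np.eye(1) * 0.25
for z in [0.9, 1.1] * 10:                         # deterministic "noisy" readings
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

The estimate converges toward the underlying value while the covariance P shrinks toward its steady state, which is the smoothing effect the first correction value relies on.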
7. The method of claim 2, wherein after the constructing the original radial basis function neural network, the method further comprises:
and selecting a Gaussian function as the hidden layer excitation function.
8. A quantity prediction system, the system comprising:
the parameter initialization module is used for initializing improved radial basis function neural network parameters, wherein the parameters at least comprise a hidden layer center vector, a basis width vector, a network weight, a learning factor, a momentum factor and a covariance matrix corresponding to noise in a Kalman filtering algorithm;
the input module is used for inputting the original data into the improved radial basis function neural network so as to obtain a first output value;
the filtering processing module is used for carrying out Kalman filtering processing on the first output value to obtain a first correction value;
and the iteration module is used for inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, so as to obtain a predicted value corresponding to the original data.
9. A quantity predicting apparatus, characterized by comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform:
initializing improved radial basis function neural network parameters, wherein the parameters at least comprise hidden layer center vectors, basis width vectors, network weights, learning factors, momentum factors and covariance matrices corresponding to noise in a Kalman filtering algorithm;
inputting the original data into the improved radial basis function neural network to obtain a first output value;
carrying out Kalman filtering processing on the first output value to obtain a first correction value;
and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, so as to obtain a predicted value corresponding to the original data.
10. A non-transitory computer storage medium storing computer-executable instructions, the computer-executable instructions configured to:
initializing improved radial basis function neural network parameters, wherein the parameters at least comprise hidden layer center vectors, basis width vectors, network weights, learning factors, momentum factors and covariance matrices corresponding to noise in a Kalman filtering algorithm;
inputting the original data into the improved radial basis function neural network to obtain a first output value;
carrying out Kalman filtering processing on the first output value to obtain a first correction value;
and inputting the first correction value into the improved radial basis function neural network, and iterating through a gradient descent method until the error of the improved radial basis function neural network converges, so as to obtain a predicted value corresponding to the original data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310071372.2A CN116050484A (en) | 2023-01-17 | 2023-01-17 | Quantity prediction method, system, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116050484A true CN116050484A (en) | 2023-05-02 |
Family
ID=86123904
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116050484A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||