CN107194460A - Quantum particle swarm optimization recurrent neural network method for financial time series forecasting - Google Patents

Publication number: CN107194460A (legal status: pending)
Application number: CN201710362965.9A, filed by Xiamen University
Original language: Chinese (zh)
Inventors: 孟力 (Meng Li), 吴铭实 (Wu Mingshi)
Applicant and current assignee: Xiamen University

Classifications

    • G06N 3/006 — Computing arrangements based on biological models; artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N 3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/049 — Neural networks; temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 7/08 — Computing arrangements based on specific mathematical models using chaos models or non-linear system models
    • G06Q 40/04 — Finance; trading; exchange, e.g. stocks, commodities, derivatives or currency exchange

Abstract

A quantum particle swarm optimization recurrent neural network method for financial time series forecasting, relating to the analysis and prediction of time series. Chaos theory and phase-space reconstruction are applied first: the attractor dimension of the chaotic financial time series is computed by the saturated correlation dimension (G-P) method, which determines the structure of the RPNN network. The recurrent predictor neural network RPNN is then trained by the quantum-behaved particle swarm optimization (QPSO) algorithm, and finally the dynamic optimal weights and thresholds of the network are determined, so that the error between the RPNN simulation predictions and the actual values reaches minimum precision. The method solves the problem that gradient-based optimization of RPNN neural networks easily falls into local minima; the established QPSO-RPNN optimization prediction method converges quickly, searches globally, is computationally concise and efficient, and achieves high prediction precision, with wide applications in financial investment and the social economy.

Description

Quantum particle swarm optimization recurrent neural network method for financial time series forecasting
Technical field
The present invention relates to the analysis and prediction of time series, and more particularly to a quantum particle swarm optimization recurrent neural network method for financial time series forecasting.
Background technology
The analysis and prediction of time series has important application value in many fields. Early analysis methods for time series forecasting were mostly linear models, which have certain limitations in both theory and method. Many real-world systems exhibit complicated nonlinear characteristics; introducing a nonlinear research paradigm to analyze and predict time series, approximately describing the chaotic dynamical system through nonlinear iteration and learning models, is the inevitable trend in the development of nonlinear time series forecasting theory[1].
Because financial markets are affected by many factors, they are complex nonlinear dynamical systems; the non-stationarity and weak chaos of financial time series are the external manifestation of this complexity. Owing to inertia of thinking, although financial researchers recognize the complexity of financial operation, they have long deliberately avoided it in theoretical foundations and research methods. For a long time, financial time series forecasting was restricted to the linear paradigm governed by the efficient market hypothesis (EMH): under a series of idealized assumptions, the linear paradigm attributes asset price fluctuations to random external factors. Representative linear prediction models of this period include the autoregressive moving average (ARMA) model, the generalized autoregressive conditional heteroskedasticity (GARCH) model, and the Markov switching process. However, nonlinear market characteristics such as the leptokurtic ("fat-tailed") distribution of asset returns, volatility clustering, and long memory announced the failure of the linear paradigm. Introducing a nonlinear research paradigm to analyze and predict financial variables, approximately describing the chaotic dynamical system through nonlinear iteration and learning models, is the inevitable outcome of the development of financial market theory[2],[3].
Chaos is one of the nonlinear characteristics of financial time series. Chaotic prediction theory holds that, on the one hand, the deterministic nature of chaos makes much seemingly random behavior actually foreseeable; on the other hand, the exquisite sensitivity of chaotic phenomena to initial conditions fundamentally limits long-term forecasting. Thus the short-term evolution of a chaotic dynamical system is predictable, but long-term forecasting is unrealistic[4].
Artificial neural networks possess strong self-organizing and adaptive abilities, good fault tolerance for information, and associative memory, giving them a clear advantage in chaotic time series prediction. The recurrent predictor neural network (RPNN) is a dynamic neural network designed specifically for chaotic time series forecasting; it is a special multi-branch time-delay neural network. The RPNN has multiple time-delay branches that simulate the temporal characteristics of nonlinear dynamical systems, and possesses storage and associative memory capabilities[5],[6],[7],[8].
Quantum-behaved particle swarm optimization (QPSO) is a novel PSO optimization algorithm established from the perspective of quantum mechanics, describing particle motion with the quantum uncertainty principle and the wave function. The main advantage of this algorithm is that it maintains the diversity of the population on the problem of interest; experimental results show that the method can improve the efficiency and convergence of problem solving[9],[10].
Bibliography:
[1] Ricardo de A. Araújo and Tiago A. E. Ferreira, 2009, "An intelligent hybrid morphological-rank-linear method for financial time series prediction", Neurocomputing, Vol. 72, Issues 10-12, pp. 2507-2524.
[2] Tim Bollerslev, Ray Y. Chou and Kenneth F. Kroner, 1992, "ARCH Modeling in Finance: A Review of the Theory and Empirical Evidence", Journal of Econometrics, Vol. 52, Issues 1-2, pp. 5-59.
[3] Yongmiao Hong, Yanhui Liu and Shouyang Wang, 2009, "Granger causality in risk and detection of extreme risk spillover between financial markets", Journal of Econometrics, Vol. 150, Issue 2, pp. 271-287.
[4] E. E. Peters, "Chaos and order in the capital markets" [J], International Journal of Theoretical and Applied Finance, 1991, 32(3): 675-702.
[5] Gao, Research on programmed trading based on the RPNN-SAPSO chaotic time series forecasting model [D], Xiamen University, 2015 (in Chinese).
[6] Meng Li, "An intelligent optimization recurrent neural network for time series forecasting", Chinese invention patent application No. 201510288774.3 (in Chinese).
[7] Lean Yu, Shouyang Wang and Kin Keung Lai, 2008, "A neural-network-based nonlinear metamodeling approach to financial time series forecasting", Applied Soft Computing, Vol. 9, Issue 2, pp. 563-574.
[8] Tsung-Jung Hsieh, Hsiao-Fen Hsiao and Wei-Chang Yeh, 2011, "Forecasting Stock Markets Using Wavelet Transforms and Recurrent Neural Networks: An Integrated System Based on Artificial Bee Colony Algorithm", Applied Soft Computing, Vol. 11, Issue 2, pp. 2510-2525.
[9] Zhou Di, Sun Jun and Xu Wenbo, "Cooperative particle swarm optimization algorithm with quantum behavior" [J], Control and Decision, 2011, 26(04): 582-586 (in Chinese).
[10] Sun Jun, Research on quantum-behaved particle swarm optimization algorithm [D], Jiangnan University, 2009 (in Chinese).
Content of the invention
The object of the present invention is to overcome the technical deficiencies of existing nonlinear time series forecasting and to provide a quantum particle swarm optimization recurrent neural network method for financial time series forecasting: a time series prediction model based on the recurrent predictor neural network (Recurrent Predictor Neural Network, RPNN) combined with quantum-behaved particle swarm optimization (Quantum-Behaved Particle Swarm Optimization, QPSO). Taking chaos theory and phase-space reconstruction as its theoretical basis, the method achieves short-term forecasting of nonlinear financial time series and improves prediction precision.
The present invention comprises the following steps:
1) Compute the attractor dimension of the time series and select the embedding dimension accordingly as 5; i.e. the RPNN has 5 nodes and 25 memories, and the network has 5 thresholds and 75 weights;
2) the number of particles is 60, and the state of each particle is described by 80 dimensions, of which 75 dimensions correspond to the weights of the RPNN and 5 dimensions to its thresholds;
3) during network training, each particle is first assigned an initial state; every predicted value produced by the RPNN passes through 5 beats, each beat corresponding to a time delay of 1 day;
4) first beat: the first node takes the external input value r1 and the fed-back values on its own memories, multiplied by the corresponding weights, as input data, then produces an output value under the action of the activation function. This output value is stored in the first memory, the value originally stored in the first memory is moved to the second memory, and so on, updating the memories bottom-up; the value of the last memory is discarded;
5) second beat: the second node takes the external input value r2 and the fed-back values on the memories of itself and of the first node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; as in the first beat, the memories of the node are updated in turn and the value of the last memory is discarded;
6) third beat: the third node takes the external input value r3 and the fed-back values on the memories of itself, the first node and the second node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; as in the first and second beats, the memories of the node are updated in turn and the value of the last memory is likewise discarded;
7) fourth beat: the fourth node takes the external input value r4 and the fed-back values on the memories of itself, the first node, the second node and the third node, multiplied by the corresponding weights, as input data, produces an output value under the action of the activation function, and updates its memories;
8) fifth beat: the fifth node takes the external input value r5 and the fed-back values on the memories of itself, the first node, the second node, the third node and the fourth node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; the output value of the fifth node is the predicted value;
9) in each beat, the output of the network is computed as:
y_j(t) = σ_j( r_j(t) + Σ_{k=1..j} Σ_{i=1..5} ω_{ij}(t)·C_{ik}(t) − b_j(t) )
where y_j(t) denotes the output of node j at time t, r_j(t) is the external input value of node j at time t, b_j(t) is the threshold of node j, n is the number of network nodes (i.e. n = 5), C_ik(t) is the value of the i-th memory of the k-th node, and ω_ij(t) is the weight corresponding to the i-th memory of the j-th node at time t; σ_j(·) is the activation function of node j and determines the output of the neuron. In the activation function used in each beat, A is the amplitude and λ the slope; after repeated testing, A is set to 1.5;
10) value predicted is inputted into object function calculation error, wherein:
Object function is absolute error function:Wherein, S is total sample number, when h (t) is t Corresponding output valve is carved,For the corresponding actual value of t;
11) state of application QPSO algorithms more new particle, calculates the error of new output valve, is found by continuous iteration The minimum weights of error of sening as an envoy to and threshold value.
In step 11), the specific steps by which the QPSO algorithm updates the particle states may be:
(1) When QPSO trains the RPNN, each particle is first assigned a random initial state in the interval [−0.5, 0.5]; the state of a particle is fixed at 80 dimensions, i.e. 75 weights and 5 thresholds.
(2) The RPNN computes a predicted value using the weights and thresholds represented by the optimal particle determined by the QPSO optimization algorithm and its corresponding dimensions, and the predicted value is then input into the objective function to compute the error.
(3) The error target is set to a cumulative absolute error E_s < 5%. If the error has not reached the target or the iteration count does not satisfy the stopping condition, the particle states are updated according to the particle evolution formula of quantum-behaved optimization.
The particle evolution formula of quantum-behaved optimization:
X_{i,j}(t+1) = p_{i,j}(t) ± α·|C_j(t) − X_{i,j}(t)|·ln[1/U_{i,j}(t)]
where X_{i,j}(t+1) is the state of the j-th dimension of the i-th particle at time t+1 (i.e. a weight or threshold at time t+1); p_{i,j}(t) is the attractor at time t, with p_{i,j}(t) = u_j(t)·P_{i,j}(t) + [1 − u_j(t)]·G_j(t), where P_{i,j}(t) is the historical best state of the j-th dimension of the i-th particle, G_j(t) is the global best state of the j-th dimension over the particle population, and u_j(t) is a random number uniformly distributed on the interval (0,1). α is the contraction-expansion coefficient, set as α = (1.0 − 0.5) × (MAXITER − t)/MAXITER + 0.5; C_j(t) is the mean best state of the j-th dimension at time t, C_j(t) = (1/M)·Σ_{i=1..M} P_{i,j}(t), where M is the number of particles and P_{i,j}(t) is the historical best state of the j-th dimension of the i-th particle at time t; U_{i,j}(t) is a random number uniformly distributed on the interval (0,1);
(4) After the weights and thresholds of the network (i.e. the particle states) have been evolved and updated by the above particle evolution formula, new output values are computed and input into the objective function to compute the error. Through continuous iteration, the weights and thresholds that minimize the error are found;
(5) once the weights and thresholds that minimize the error are found, the training of the network is complete. At this point the RPNN has obtained the optimal network parameters, which represent the nonlinear mapping F of the chaos attractor in the reconstructed phase space, and the memories of the RPNN record the correlation information of the time series. Suppose the time node of the last data point in the training sample is T; then, when the external input values at times T−4, T−3, T−2, T−1 and T are fed into the network, the predicted value at time T+1 is obtained. The model thus completes its network training and forecasting functions.
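One generation of the QPSO update in sub-steps (3)-(4) can be sketched as follows. The ± sign is taken with equal probability, as is conventional in QPSO but not spelled out in the text; the array shapes follow the patent (60 particles, 80 dimensions, initial states in [−0.5, 0.5]), while the fitness evaluation that would select the bests is omitted.

```python
import math
import random

def qpso_update(X, P, G, t, max_iter):
    """One QPSO generation.  X: particle states (M x D), P: personal-best
    states (M x D), G: global-best state (length D).  Returns new states."""
    M, D = len(X), len(X[0])
    # contraction-expansion coefficient, annealed linearly from 1.0 to 0.5
    alpha = (1.0 - 0.5) * (max_iter - t) / max_iter + 0.5
    # mean best state C_j(t): per-dimension average of the personal bests
    C = [sum(P[i][j] for i in range(M)) / M for j in range(D)]
    X_new = []
    for i in range(M):
        row = []
        for j in range(D):
            u = random.random()                    # u_{i,j}(t) ~ U(0,1)
            p = u * P[i][j] + (1.0 - u) * G[j]     # local attractor p_{i,j}(t)
            L = alpha * abs(C[j] - X[i][j]) * math.log(1.0 / random.random())
            row.append(p + L if random.random() < 0.5 else p - L)
        X_new.append(row)
    return X_new

# usage: 60 particles x 80 dimensions, initialized in [-0.5, 0.5]
random.seed(0)
X = [[random.uniform(-0.5, 0.5) for _ in range(80)] for _ in range(60)]
P = [row[:] for row in X]          # personal bests start at the initial states
G = X[0][:]                        # pretend particle 0 is the current best
X1 = qpso_update(X, P, G, t=0, max_iter=100)
```

In a full training loop, each row of X would be decoded into 75 weights and 5 thresholds, evaluated through the RPNN and the cumulative absolute error, and P and G refreshed before the next call.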
The RPNN training of the present invention uses the QPSO hybrid intelligent optimization algorithm, shedding the former gradient algorithms' requirement that the activation function be differentiable to high order and extending the diversity of the RPNN. The QPSO algorithm has fast convergence and good search characteristics: it converges quickly while still scanning the whole solution space, does not diverge to infinity, and is ensured not to be trapped in local minima.
Compared with the SAPSO (simulated annealing particle swarm) optimization algorithm for RPNN optimization prediction, the QPSO optimization algorithm of the present invention has fewer model parameters, is more concise, and attains higher prediction precision; in terms of programming, QPSO also requires less coding effort than SAPSO.
The present invention first applies chaos theory and phase-space reconstruction: the attractor dimension of the chaotic financial time series is computed by the saturated correlation dimension (G-P) method, which determines the structure of the RPNN (Recurrent Predictor Neural Network) network. The recurrent predictor neural network RPNN is then trained by the quantum-behaved particle swarm optimization QPSO (Quantum-Behaved Particle Swarm Optimization) algorithm, and finally the dynamic optimal weights and thresholds of the network are determined, so that the RPNN simulation predictions reach minimum error precision against the actual values. The present invention solves the problem that gradient-based optimization of RPNN neural networks easily falls into local minima; the established QPSO-RPNN optimization prediction method converges quickly, searches globally, is computationally concise and efficient, and achieves high prediction precision, with wide applications in financial investment and the social economy.
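The saturated correlation dimension (G-P) step above works by delay-embedding the series, computing the Grassberger-Procaccia correlation integral C(r), and reading the correlation dimension off the slope of log C(r) against log r. A bare-bones sketch, in which the embedding delay, the radii, and the toy test signal are illustrative choices rather than values from the patent:

```python
import math

def embed(series, m, tau=1):
    # Delay embedding: vectors (x_t, x_{t+tau}, ..., x_{t+(m-1)tau}).
    n = len(series) - (m - 1) * tau
    return [[series[t + i * tau] for i in range(m)] for t in range(n)]

def correlation_integral(points, r):
    # C(r): fraction of point pairs closer than r (Euclidean distance).
    n, count = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(series, m, r1, r2):
    # Slope of log C(r) between two radii estimates the dimension;
    # the G-P method repeats this over m until the estimate saturates.
    pts = embed(series, m)
    c1, c2 = correlation_integral(pts, r1), correlation_integral(pts, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# sanity check: points along a line should have dimension close to 1
line = [i / 200.0 for i in range(200)]
d = correlation_dimension(line, m=2, r1=0.05, r2=0.2)
```

In the patent's pipeline the saturated estimate (here, about 5 for the financial series) fixes the embedding dimension and hence the 5-node RPNN structure.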
Brief description of the drawings
Fig. 1 is the QPSO algorithm flow chart.
Fig. 2 is a structural representation of the RPNN.
Fig. 3 shows the RPNN network training results.
Fig. 4 shows the relative errors of the SSE Composite Index closing price forecast over 10 days.
Fig. 5 shows the relative errors of the SSE Composite Index closing price forecast over 20 days.
Embodiment
The following embodiments further illustrate the present invention with reference to the accompanying drawings.
The embodiment of the present invention comprises the following steps:
1) Compute the attractor dimension of the time series and select the embedding dimension accordingly as 5; i.e. the RPNN has 5 nodes and 25 memories, and the network has 5 thresholds and 75 weights. Its structure is shown in Fig. 2.
2) The number of particles is 60, and the state of each particle is described by 80 dimensions, of which 75 dimensions correspond to the weights of the RPNN and 5 dimensions to its thresholds.
3) During network training, each particle is first assigned an initial state; every predicted value produced by the RPNN passes through 5 beats, each beat corresponding to a time delay of 1 day.
4) First beat: the first node takes the external input value r1 and the fed-back values on its own memories, multiplied by the corresponding weights, as input data, then produces an output value under the action of the activation function. This output value is stored in the first memory, the value originally stored in the first memory is moved to the second memory, and so on, updating the memories bottom-up; the value of the last memory is discarded.
5) Second beat: the second node takes the external input value r2 and the fed-back values on the memories of itself and of the first node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; as in the first beat, the memories of the node are updated in turn and the value of the last memory is likewise discarded.
6) Third beat: the third node takes the external input value r3 and the fed-back values on the memories of itself, the first node and the second node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; as in the first and second beats, the memories of the node are updated in turn and the value of the last memory is likewise discarded.
7) Fourth beat: the fourth node takes the external input value r4 and the fed-back values on the memories of itself, the first node, the second node and the third node, multiplied by the corresponding weights, as input data, produces an output value under the action of the activation function, and updates its memories as above.
8) Fifth beat: the fifth node takes the external input value r5 and the fed-back values on the memories of itself, the first node, the second node, the third node and the fourth node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; the output value of the fifth node is the predicted value.
9) In each of the above beats, the output of the network is computed as:
y_j(t) = σ_j( r_j(t) + Σ_{k=1..j} Σ_{i=1..5} ω_{ij}(t)·C_{ik}(t) − b_j(t) )
where y_j(t) denotes the output of node j at time t, r_j(t) is the external input value of node j at time t, b_j(t) is the threshold of node j, n is the number of network nodes (i.e. n = 5), C_ik(t) is the value of the i-th memory of the k-th node, and ω_ij(t) is the weight corresponding to the i-th memory of the j-th node at time t. σ_j(·) is the activation function of node j and determines the output of the neuron. In the activation function used in each beat, A is the amplitude and λ the slope; after repeated testing, this method sets A = 1.5.
10) value predicted is inputted into object function calculation error, wherein:
Object function is absolute error function:Wherein, S is total sample number, when h (t) is t Corresponding output valve is carved,For the corresponding actual value of t.
(11) state of application QPSO algorithms more new particle, calculates the error of new output valve.Sought by continuous iteration Find out the weights for making error minimum and threshold value.
In step 11), the specific steps by which the QPSO algorithm updates the particle states may be:
(1) When QPSO trains the RPNN, each particle is first assigned a random initial state in the interval [−0.5, 0.5]; the state of a particle is fixed at 80 dimensions, i.e. 75 weights and 5 thresholds.
(2) The RPNN computes a predicted value using the weights and thresholds represented by the optimal particle determined by the QPSO optimization algorithm and its corresponding dimensions, and the predicted value is then input into the objective function to compute the error.
(3) The error target is set to a cumulative absolute error E_s < 5%. If the error has not reached the target or the iteration count does not satisfy the stopping condition, the particle states are updated according to the particle evolution formula of quantum-behaved optimization.
The particle evolution formula of quantum-behaved optimization:
X_{i,j}(t+1) = p_{i,j}(t) ± α·|C_j(t) − X_{i,j}(t)|·ln[1/U_{i,j}(t)]
where X_{i,j}(t+1) is the state of the j-th dimension of the i-th particle at time t+1 (i.e. a weight or threshold at time t+1); p_{i,j}(t) is the attractor at time t, with p_{i,j}(t) = u_j(t)·P_{i,j}(t) + [1 − u_j(t)]·G_j(t), where P_{i,j}(t) is the historical best state of the j-th dimension of the i-th particle, G_j(t) is the global best state of the j-th dimension over the particle population, and u_j(t) is a random number uniformly distributed on the interval (0,1). α is the contraction-expansion coefficient; this method sets α = (1.0 − 0.5) × (MAXITER − t)/MAXITER + 0.5. C_j(t) is the mean best state of the j-th dimension at time t, C_j(t) = (1/M)·Σ_{i=1..M} P_{i,j}(t), where M is the number of particles and P_{i,j}(t) is the historical best state of the j-th dimension of the i-th particle at time t; U_{i,j}(t) is a random number uniformly distributed on the interval (0,1).
(4) After the weights and thresholds of the network (i.e. the particle states) have been evolved and updated by the above particle evolution formula, new output values are computed and input into the objective function to compute the error. Through continuous iteration, the weights and thresholds that minimize the error are found.
(5) Once the weights and thresholds that minimize the error are found, the training of the network is complete. At this point the RPNN has obtained the optimal network parameters, which represent the nonlinear mapping F of the chaos attractor in the reconstructed phase space, and the memories of the RPNN record the correlation information of the time series. Suppose the time node of the last data point in the training sample is T; then, when the external input values at times T−4, T−3, T−2, T−1 and T are fed into the network, the predicted value at time T+1 is obtained. The model thus completes its network training and forecasting functions. The QPSO algorithm flow is shown in Fig. 1.
A specific embodiment is given below:
The research object is the closing price of the SSE (Shanghai Stock Exchange) Composite Index; the data come from the Wind Information financial terminal.
The data interval is September 1, 2013 to November 1, 2016, 771 data points in total.
The first 751 data points (September 1, 2013 to September 27, 2016) are used as the training sample of the multi-branch time-delay recurrent neural network RPNN, and the network is trained with the QPSO optimization algorithm.
As shown in Fig. 3, the trained RPNN is simulated (at this point the RPNN has obtained the optimal network parameters, representing the nonlinear mapping F of the chaos attractor in the reconstructed phase space); the fit between the simulated values and the sample is compared to examine the generalization ability of the network.
Prediction is then carried out with the trained RPNN. The last 20 data points (September 28 to November 1, 2016) serve as the forecast sample; prediction is performed in single-step dynamic fashion to judge the predictive performance of the network.
The predicted values are compared with the sample, and the relevant precision indices are calculated to evaluate the predictive performance of the RPNN quantitatively.
To analyze the prediction effect of the QPSO-RPNN chaotic financial time series forecasting model quantitatively, the relative prediction error index is used for evaluation:
e_i = |y_i − ŷ_i| / ŷ_i × 100%
where y_i and ŷ_i are the predicted value and the desired value, respectively. The closer the relative error index is to 0%, the better the prediction effect; it equals 0% when the prediction is error-free. Figures 4 and 5 show the relative errors over 10 days and 20 days, respectively.
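The relative prediction error index above is straightforward to compute; the price values in the example below are made up for illustration and are not the patent's data.

```python
def relative_error_pct(predicted, actual):
    """Per-step relative prediction error |y_i - y*_i| / y*_i, in percent.
    A value of 0.0 means an error-free prediction at that step."""
    return [abs(p - a) / abs(a) * 100.0 for p, a in zip(predicted, actual)]

# hypothetical closing-price predictions vs. actual values
pred = [3120.0, 3085.0, 3102.0]
true = [3100.0, 3090.0, 3110.0]
errs = relative_error_pct(pred, true)
```

Averaging such per-step errors over the 10-day and 20-day forecast windows yields the quantities plotted in Figs. 4 and 5.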

Claims (2)

1. A quantum particle swarm optimization recurrent neural network method for financial time series forecasting, characterized in that it comprises the following steps:
1) computing the attractor dimension of the time series and selecting the embedding dimension accordingly as 5, i.e. the RPNN has 5 nodes and 25 memories, and the network has 5 thresholds and 75 weights;
2) the number of particles is 60, and the state of each particle is described by 80 dimensions, of which 75 dimensions correspond to the weights of the RPNN and 5 dimensions to its thresholds;
3) during network training, each particle is first assigned an initial state; every predicted value produced by the RPNN passes through 5 beats, each beat corresponding to a time delay of 1 day;
4) first beat: the first node takes the external input value r1 and the fed-back values on its own memories, multiplied by the corresponding weights, as input data, then produces an output value under the action of the activation function. This output value is stored in the first memory, the value originally stored in the first memory is moved to the second memory, and so on, updating the memories bottom-up; the value of the last memory is discarded;
5) second beat: the second node takes the external input value r2 and the fed-back values on the memories of itself and of the first node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; as in the first beat, the memories of the node are updated in turn and the value of the last memory is discarded;
6) third beat: the third node takes the external input value r3 and the fed-back values on the memories of itself, the first node and the second node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; as in the first and second beats, the memories of the node are updated in turn and the value of the last memory is likewise discarded;
7) fourth beat: the fourth node takes the external input value r4 and the fed-back values on the memories of itself, the first node, the second node and the third node, multiplied by the corresponding weights, as input data, produces an output value under the action of the activation function, and updates its memories;
8) fifth beat: the fifth node takes the external input value r5 and the fed-back values on the memories of itself, the first node, the second node, the third node and the fourth node, multiplied by the corresponding weights, as input data, and produces an output value under the action of the activation function; the output value of the fifth node is the predicted value;
9) in each beat, the output of the network is computed as:
y_j(t) = σ_j( r_j(t) + Σ_{k=1..j} Σ_{i=1..5} ω_{ij}(t)·C_{ik}(t) − b_j(t) )
where y_j(t) denotes the output of node j at time t, r_j(t) is the external input value of node j at time t, b_j(t) is the threshold of node j, n is the number of network nodes (i.e. n = 5), C_ik(t) is the value of the i-th memory of the k-th node, and ω_ij(t) is the weight corresponding to the i-th memory of the j-th node at time t; σ_j(·) is the activation function of node j and determines the output of the neuron;
in the activation function used in each beat, A is the amplitude and λ the slope; after repeated testing, A is set to 1.5;
10) value predicted is inputted into object function calculation error, wherein:
Object function is absolute error function:Wherein, S is total sample number, and h (t) is t pair The output valve answered,For the corresponding actual value of t;
11) Apply the QPSO algorithm to update the particle states, compute the error of the new output value, and through repeated iteration find the weights and thresholds that minimize the error.
2. The quantum telepotation recurrent neural network method of financial time series forecasting as claimed in claim 1, characterized in that in step 11), updating the particle states with the QPSO algorithm comprises the following steps:
(1) When QPSO trains the RPNN, each particle is first assigned a random initial state in the interval [−0.5, 0.5]; the particle state is fixed at 80 dimensions, i.e. 75 weights and 5 thresholds;
(2) The RPNN computes a predicted value from the weights and thresholds represented by the corresponding dimensions of the optimal particle determined by the QPSO optimization algorithm; the predicted value is then fed into the objective function to compute the error;
(3) The error target is set to a cumulative absolute error Es &lt; 5%. If the error has not reached the target or the iteration count does not yet satisfy the stopping condition, the particle states are updated according to the particle evolution formula of the quantum-behaved optimization;
The particle evolution formula of the quantum-behaved optimization is:
$$X_{i,j}(t+1) = p_{i,j}(t) \pm \alpha \cdot \left|C_j(t) - X_{i,j}(t)\right| \cdot \ln\!\left[\frac{1}{U_{i,j}(t)}\right]$$
where X_{i,j}(t+1) is the j-th dimension of the state of particle i at time t+1, i.e. a weight or threshold at time t+1; p_{i,j}(t) is the attractor at time t, with p_{i,j}(t) = u_j(t)·P_{i,j}(t) + [1 − u_j(t)]·G_j(t), where P_{i,j}(t) is the historical best state of the j-th dimension of particle i, G_j(t) is the global best state of the j-th dimension across the swarm, and u_j(t) is a random number uniformly distributed on the interval (0, 1); α is the contraction-expansion coefficient, set as α = (1.0 − 0.5) × (MAXITER − t)/MAXITER + 0.5; C_j(t) is the mean state of the j-th dimension at time t, C_j(t) = (1/M) Σ_{i=1}^{M} P_{i,j}(t), where M is the number of particles; U_{i,j}(t) is a random number uniformly distributed on the interval (0, 1);
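The evolution formula and its coefficients can be sketched as one QPSO update step. This is a generic quantum-behaved PSO step under the definitions above, not the patented implementation; the function name and list-of-lists state layout are illustrative choices.

```python
import math
import random

def qpso_update(X, P, G, t, max_iter):
    """One QPSO evolution step.  X[i][j] is the j-th dimension of particle i,
    P the per-particle historical bests, G the global best vector.  The
    contraction-expansion coefficient alpha shrinks linearly from 1.0 to 0.5
    over the run, as in the formula alpha = (1.0-0.5)*(MAXITER-t)/MAXITER + 0.5."""
    M, D = len(X), len(X[0])
    alpha = (1.0 - 0.5) * (max_iter - t) / max_iter + 0.5
    # Mean state C_j(t): average of the personal bests in each dimension.
    C = [sum(P[i][j] for i in range(M)) / M for j in range(D)]
    new_X = []
    for i in range(M):
        row = []
        for j in range(D):
            u = random.random()
            # Attractor p_ij(t) = u*P_ij(t) + (1-u)*G_j(t).
            p = u * P[i][j] + (1.0 - u) * G[j]
            # Step length alpha * |C_j - X_ij| * ln(1/U), U ~ Uniform(0,1).
            step = alpha * abs(C[j] - X[i][j]) * math.log(1.0 / random.random())
            # The +/- sign in the formula is chosen with equal probability.
            row.append(p + step if random.random() < 0.5 else p - step)
        new_X.append(row)
    return new_X
```

In the claimed setting each particle would have D = 80 dimensions (75 weights and 5 thresholds), initialized uniformly in [−0.5, 0.5].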
(4) After the network's weights and thresholds, i.e. the particle states, evolve and are updated according to the quantum-behaved particle evolution formula, a new output value is computed and fed into the objective function to compute the error; through repeated iteration, the weights and thresholds that minimize the error are found;
(5) Once the error-minimizing weights and thresholds are found, the training of the network is complete. The RPNN has then obtained the optimal network parameters, which represent the nonlinear mapping F of the chaotic attractor in the reconstructed phase space, and what is recorded in the RPNN's memories is the associated information of the time series. Suppose the time node of the last datum in the training sample is T; then, when the external input values at moments T−4, T−3, T−2, T−1 and T are fed into the network, the predicted value for moment T+1 is obtained. The model thus completes the network training and prediction functions.
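The prediction scheme in step (5), feeding the last five observations in as the five beats and reading the fifth node's output, can be sketched with a hypothetical per-beat callable `rpnn_step` standing in for the trained network:

```python
def predict_next(rpnn_step, series):
    """Sketch of the step (5) prediction scheme: feed the last five
    observations (moments T-4 .. T) into the trained network, one per beat,
    and return the fifth beat's output as the T+1 forecast.  rpnn_step is a
    hypothetical callable implementing one beat of the trained RPNN."""
    if len(series) < 5:
        raise ValueError("need at least five observations (T-4 .. T)")
    out = None
    for beat, r in enumerate(series[-5:]):
        out = rpnn_step(beat, r)  # the fifth beat's output is the forecast
    return out
```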
CN201710362965.9A 2017-05-22 2017-05-22 The quantum telepotation recurrent neural network method of Financial Time Series Forecasting Pending CN107194460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710362965.9A CN107194460A (en) 2017-05-22 2017-05-22 The quantum telepotation recurrent neural network method of Financial Time Series Forecasting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710362965.9A CN107194460A (en) 2017-05-22 2017-05-22 The quantum telepotation recurrent neural network method of Financial Time Series Forecasting

Publications (1)

Publication Number Publication Date
CN107194460A true CN107194460A (en) 2017-09-22

Family

ID=59874391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710362965.9A Pending CN107194460A (en) 2017-05-22 2017-05-22 The quantum telepotation recurrent neural network method of Financial Time Series Forecasting

Country Status (1)

Country Link
CN (1) CN107194460A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009636A (en) * 2017-11-16 2018-05-08 华南师范大学 Deep learning ANN Evolutionary method, apparatus, medium and computer equipment
CN108009636B (en) * 2017-11-16 2021-12-07 华南师范大学 Deep learning neural network evolution method, device, medium and computer equipment
CN108428023A (en) * 2018-05-24 2018-08-21 四川大学 Trend forecasting method based on quantum Weighted Threshold repetitive unit neural network
CN108428023B (en) * 2018-05-24 2022-03-15 四川大学 Trend prediction method based on quantum weighted threshold repetitive unit neural network
CN110361966A (en) * 2018-06-23 2019-10-22 四川大学 A kind of trend forecasting method based on two hidden-layer quantum wire cycling element neural network
CN110361966B (en) * 2018-06-23 2022-05-27 四川大学 Trend prediction method based on double-hidden-layer quantum circuit circulation unit neural network
CN109525548A (en) * 2018-09-25 2019-03-26 平安科技(深圳)有限公司 A kind of white list updating method based on cost function, device and electronic equipment
CN109525548B (en) * 2018-09-25 2021-10-29 平安科技(深圳)有限公司 White list updating method and device based on cost function and electronic equipment
CN114363262A (en) * 2022-01-05 2022-04-15 西安交通大学 Chaotic dynamic congestion prediction system and method under air-space-ground integrated network
CN114363262B (en) * 2022-01-05 2023-08-22 西安交通大学 Chaotic dynamic congestion prediction system and method under space-air-ground integrated network

Similar Documents

Publication Publication Date Title
Li et al. Prediction for tourism flow based on LSTM neural network
CN107194460A (en) The quantum telepotation recurrent neural network method of Financial Time Series Forecasting
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
Zheng et al. An accurate GRU-based power time-series prediction approach with selective state updating and stochastic optimization
Papageorgiou et al. Application of fuzzy cognitive maps to water demand prediction
CN110826774B (en) Bus load prediction method and device, computer equipment and storage medium
CN104636985A (en) Method for predicting radio disturbance of electric transmission line by using improved BP (back propagation) neural network
CN109214579B (en) BP neural network-based saline-alkali soil stability prediction method and system
CN104037761B (en) AGC power multi-objective random optimization distribution method
Rizwan et al. Artificial intelligence based approach for short term load forecasting for selected feeders at madina saudi arabia
CN107886160A (en) A kind of BP neural network section water demand prediction method
CN106168829A (en) Photovoltaic generation output tracing algorithm based on the RBF BP neutral net that ant group algorithm improves
CN109583588A (en) A kind of short-term wind speed forecasting method and system
Robati et al. Inflation rate modeling: Adaptive neuro-fuzzy inference system approach and particle swarm optimization algorithm (ANFIS-PSO)
Mellios et al. A multivariate analysis of the daily water demand of Skiathos Island, Greece, implementing the artificial neuro-fuzzy inference system (ANFIS)
Showkati et al. Short term load forecasting using echo state networks
Shresthamali et al. Power management of wireless sensor nodes with coordinated distributed reinforcement learning
CN115759458A (en) Load prediction method based on comprehensive energy data processing and multi-task deep learning
CN114202063A (en) Fuzzy neural network greenhouse temperature prediction method based on genetic algorithm optimization
Sharma et al. Synergism of recurrent neural network and fuzzy logic for short term energy load forecasting
Sarangi et al. Load Forecasting Using Artificial Neural Network: Performance Evaluation with Different Numbers of Hidden Neurons.
Coroama et al. A study on wind energy generation forecasting using connectionist models
Hao et al. Short-term Wind Speed Forecasting Based on Weighted Spatial Correlation and Improved GWO-GBRT algorithm
Kanović et al. Optimization of ship lock control system using swarm-based techniques
Hossain et al. Cascading Neural Network with Particle Swarm Optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20170922)