CN114565021A - Financial asset pricing method, system and storage medium based on quantum recurrent neural network - Google Patents

Financial asset pricing method, system and storage medium based on quantum recurrent neural network

Info

Publication number
CN114565021A
CN114565021A
Authority
CN
China
Prior art keywords
quantum
data
neural network
vqc
network
Prior art date
Legal status
Pending
Application number
CN202210094394.6A
Other languages
Chinese (zh)
Inventor
李晓瑜
刘恒宇
朱钦圣
吴昊
胡勇
昌燕
Current Assignee
Sichuan Yuanjiang Technology Co ltd
Original Assignee
Sichuan Yuanjiang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Yuanjiang Technology Co ltd filed Critical Sichuan Yuanjiang Technology Co ltd
Priority to CN202210094394.6A priority Critical patent/CN114565021A/en
Publication of CN114565021A publication Critical patent/CN114565021A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147: Distances to closest patterns, e.g. nearest neighbour classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 10/00: Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes


Abstract

The invention discloses a financial asset pricing method, system and storage medium based on a quantum recurrent neural network, belonging to the technical field of quantum computing. The collected financial data are pre-classified with a quantum k-nearest-neighbor algorithm, so that continuous financial time-series data are converted into bounded, discrete data with better repeatability, and the learning performance of the QLSTM after pre-classification is better. The financial data are then processed by the quantum recurrent neural network, which improves both prediction accuracy and computation speed. By introducing Bayesian regularization, the model parameters of the quantum recurrent neural network can be further corrected effectively and its overfitting problem is alleviated, thereby improving prediction accuracy.

Description

Financial asset pricing method, system and storage medium based on quantum recurrent neural network
Technical Field
The invention relates to the technical field of quantum computing, and in particular to a financial asset pricing method, system and storage medium based on a quantum recurrent neural network.
Background
The financial industry now faces explosive data growth, and how to process these data is a significant challenge for financial institutions. Traditional machine learning algorithms were applied to the financial field very early; among them, recurrent neural network algorithms are commonly used to forecast time-series data such as stock prices because of their good ability to analyze time-series problems. However, stock market data change very rapidly: predicting the next movement of the market before the data change increases computational complexity, and extremely-small-probability events are easily drawn into the prediction range. As a result, traditional recurrent neural networks often cannot respond in time to a complex market environment, their prediction accuracy is not high, and the burden of the corresponding high-dimensional computation is also large.
Quantum computing has powerful parallel computing capability, and the number of qubits it employs enables exponential computational acceleration, so the application of quantum algorithms in the field of finance has inestimable value. Quantum algorithms have already been applied to the prediction of binary options, to credit evaluation and the like, which illustrates the potential of quantum computation in finance. Financial asset pricing (generally including arbitrage-free pricing and equilibrium pricing) is an important component of finance and is mainly a time-series learning problem. Combining it with quantum machine learning algorithms can effectively solve the asset pricing regression problem, which places extremely high demands on data analysis and computing power. The prior art proposes a quantum binomial option pricing model that reproduces the results of existing quantum financial models in simplified form; these simplifications make the corresponding theories not only easy to analyze but also easier to implement on a computer. On this basis, researchers have considered the market from the point of view of a more general quantum pricing equation, the key observation being that the Black-Scholes-Merton equation is a special case of that equation in which the market is assumed to be efficient. At the same time, researchers have shown that quantum computer algorithms have a square-root advantage over traditional algorithms. The quantum algorithm therefore has a good application prospect in financial fields such as asset pricing, and how to combine quantum computation with financial problems to realize accurate prediction is the technical problem that those skilled in the art need to solve.
Disclosure of Invention
The invention aims to solve the problem that the prior art cannot accurately predict rapidly changing financial data, and provides a financial asset pricing method, system and storage medium based on a quantum recurrent neural network.
The purpose of the invention is realized by the following technical scheme: the system comprises a pre-classification processing unit, a data conversion unit, a quantum recurrent neural network and an optimization unit which are connected in sequence;
The pre-classification processing unit is used for performing pre-classification processing on the financial data based on a quantum k nearest neighbor algorithm;
the data conversion unit is used for converting one-dimensional data obtained by pre-classification processing into multi-dimensional tensor data;
the quantum recurrent neural network comprises a feature extraction module and a classification module;
the feature extraction module comprises a variational quantum circuit VQC, and the variational quantum circuit VQC comprises an encoding layer, a variational layer and a quantum measurement layer which are sequentially connected; the encoding layer is used for encoding the multidimensional tensor data to obtain quantum state data; the variational layer is used for performing unitary quantum operations on the quantum state data; the quantum measurement layer measures the expectation value of each qubit, each qubit being measured through a Pauli-Z operation, and a feature vector is obtained through a nonlinear excitation function and a hyperbolic tangent function;
the classification module classifies based on the feature vectors to obtain a prediction result;
the optimization unit is used for optimizing the prediction result based on the Bayesian model, and for reversely correcting the parameters of the quantum recurrent neural network based on the historical data and the optimization result to obtain the final prediction result.
In one example, the feature extraction module specifically comprises six variational quantum circuits VQC_1 to VQC_6, a forget gate f_t, an input gate i_t, a memory cell c_t, an output gate o_t and a hidden state h_t; the hidden state h_{t-1} at the previous moment and the input vector x_t are concatenated and input into the first to fourth variational quantum circuits VQC_1 to VQC_4, whose outputs are four vectors obtained from the measured values at the end of each variational quantum circuit, and forgetting and updating are determined through nonlinear activation functions.
In one example, the variational layer includes a plurality of quantum CNOT gates and single-qubit rotation gates; the CNOT gates generate multi-qubit entanglement between each fixed pair of adjacent qubits; the single-qubit rotation gates use three rotation angles {α_i, β_i, γ_i} about the x, y and z axes which are not fixed in advance but are updated by a gradient-descent method during iterative optimization.
The application also comprises a financial asset pricing method based on the quantum cycle neural network, and the method comprises the following steps:
performing pre-classification processing on the financial data based on a quantum k nearest neighbor algorithm;
converting one-dimensional data obtained by pre-classification processing into multi-dimensional tensor data;
encoding the multidimensional tensor data to obtain quantum state data;
performing unitary quantum operation on the quantum state data;
measuring the expectation value of each qubit through a Pauli-Z operation, and obtaining a feature vector through a nonlinear excitation function and a hyperbolic tangent function;
Classifying based on the feature vectors to obtain a prediction result;
and optimizing the prediction result based on the Bayesian model, and reversely correcting the parameters of the quantum recurrent neural network based on the historical data and the optimization result to obtain a final prediction result.
In an example, the pre-classification processing specifically includes:
establishing an initial superposition state |ψ⟩ based on the target value range corresponding to the data to be classified;
determining a threshold value θ of the K adjacent values based on quantum calculation;
making the probability amplitude of the items satisfying the condition maximal while the probability amplitudes of the other items are reduced, with the sum of the squares of all probability amplitudes always normalized, and performing Grover iterations on the superposition state, where the number of iterations is ⌊(π/4)·√(N/K)⌋, N represents the total number of samples of the training sample set, and K represents the set number of nearest neighbors;
and obtaining the predicted classification result of the quantum state data set.
In one example, the quantum state data expression is:

$$|\psi\rangle=\sum_{(q_1,q_2,\ldots,q_N)\in\{0,1\}^N} c_{q_1,\ldots,q_N}\;|q_1\rangle\otimes|q_2\rangle\otimes\cdots\otimes|q_N\rangle$$

where $c_{q_1,\ldots,q_N}\in\mathbb{C}$ is the complex amplitude of each ground state formed by the qubits $q_i$, with $q_i\in\{0,1\}$; the squared magnitude $|c_{q_1,\ldots,q_N}|^2$ is the probability of measuring that ground state, and $\sum|c_{q_1,\ldots,q_N}|^2=1$.
In an example, performing the unitary quantum operation on the quantum state data specifically includes:
generating multi-qubit entanglement between each fixed pair of adjacent qubits using CNOT gates, and applying single-qubit rotation gates $R_i=R(\alpha_i,\beta_i,\gamma_i)$ whose three rotation angles $\{\alpha_i,\beta_i,\gamma_i\}$ about the x, y and z axes are not fixed in advance but are updated by a gradient-descent method during iterative optimization.
In one example, the calculation formulas for obtaining the feature vector are:

$$
\begin{aligned}
f_t &= \sigma(\mathrm{VQC}_1(v_t))\\
i_t &= \sigma(\mathrm{VQC}_2(v_t))\\
\tilde{C}_t &= \tanh(\mathrm{VQC}_3(v_t))\\
c_t &= f_t * c_{t-1} + i_t * \tilde{C}_t\\
o_t &= \sigma(\mathrm{VQC}_4(v_t))\\
h_t &= \mathrm{VQC}_5(o_t * \tanh(c_t))\\
y_t &= \mathrm{VQC}_6(o_t * \tanh(c_t))
\end{aligned}
$$

where $f_t$ denotes the forget gate; $i_t$ the input gate; $\tilde{C}_t$ the current cell state; $c_t$ the memory cell; $o_t$ the output gate; $h_t$ the hidden state; $y_t$ the feature vector; $\sigma$ the nonlinear excitation function; $\tanh$ the hyperbolic tangent function; $v_t$ the concatenation of the hidden state $h_{t-1}$ at the previous time step with the input vector $x_t$; and $\mathrm{VQC}_i$ the $i$-th variational quantum circuit.
In an example, optimizing the prediction result based on the Bayesian model specifically includes:

carrying out Bayesian optimization on a training data set D corresponding to the prediction result, taking the weights of the quantum recurrent neural network as random variables, to obtain the posterior probability density P(x|D,α,β,M):

$$P(x\mid D,\alpha,\beta,M)=\frac{P(D\mid x,\beta,M)\,P(x\mid\alpha,M)}{P(D\mid\alpha,\beta,M)}$$

where x represents all weight values and biases contained in the quantum recurrent neural network; α and β are coefficients; M represents the chosen number of layers of the quantum recurrent neural network and the neurons of each layer; P(D|x,β,M) is the likelihood function; P(x|α,M) is the prior probability of x given α and M; and P(D|α,β,M) is the probability of the data D given α, β and M, acting as the normalization factor;

letting the noise follow a standard normal distribution, and updating the posterior probability density according to the likelihood function to obtain:

$$P(x\mid D,\alpha,\beta,M)=\frac{1}{Z_F(\alpha,\beta)}\exp\big(-F(x)\big)$$

where Z_F(α,β) is a function of α and β, and F(x) is the defined regularization performance index;

and calculating, based on the probability density after Bayesian analysis, the coefficients α and β corresponding to the quantum recurrent neural network with optimal performance, thereby realizing the optimization of the quantum recurrent neural network.
It should be further noted that the technical features corresponding to the above examples can be combined with each other or replaced to form a new technical solution.
The present invention also includes a storage medium having stored thereon computer instructions operable to perform the steps of the quantum-recurrent-neural-network-based financial asset pricing method formed by any one of, or any combination of, the above examples.
Compared with the prior art, the invention has the beneficial effects that:
The collected financial data are pre-classified with a quantum k-nearest-neighbor algorithm, so that continuous financial time-series data are converted into bounded, discrete data with better repeatability, and the learning performance of the QLSTM after pre-classification is better. The financial data are processed by the quantum recurrent neural network; because qubits carry uncertainty and can be in superposition states, extremely-small-probability events that are ordinarily ignored can be brought into the learning range, which improves prediction accuracy. Furthermore, owing to coherent quantum superposition, the state information carried by qubits grows exponentially (on the order of 2^n for n qubits) compared with classical bits, so the computation rate can be greatly increased to keep up with rapidly changing financial data and obtain more accurate predictions. Introducing Bayesian regularization effectively corrects the model parameters of the quantum recurrent neural network and alleviates its overfitting problem, thereby improving prediction accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention.
FIG. 1 is a schematic diagram of a feature extraction module according to an example of the present invention;
FIG. 2 is a schematic diagram of a variational quantum circuit VQC structure according to an example of the present invention;
FIG. 3 is a flow chart of a method in an example of the invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that directions or positional relationships indicated by "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like are directions or positional relationships described based on the drawings, and are only for convenience of description and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in a specific case to those of ordinary skill in the art.
Furthermore, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The method is used for forecasting the pricing of financial assets, for example realizing pricing prediction of financial assets based on the Black-Scholes (BS) formula for option pricing. Specifically, the BS formula for a call option is:

$$S\,N(d_1)-e^{-rt}K\,N(d_2)$$

where N(d_1) and N(d_2) can be regarded as the probabilities of two binary options: N(d_1) corresponds to an asset-or-nothing option, i.e. one unit of stock is returned if the stock price exceeds the strike price at expiry; N(d_2) corresponds to a cash-or-nothing option, i.e. K units of cash are returned on the exercise day if the stock price exceeds the strike price, the discounted value of that cash being e^{-rt}K. In summary, d_1 describes how sensitive the option is to the stock price, and d_2 describes the likelihood that the option is finally exercised. Data with salient features are selected from financial data of different periods, and the exercise probability of the option is predicted through quantum computation.
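For illustration only, the following minimal numerical sketch evaluates this call-price formula with the usual definitions d_1 = [ln(S/K) + (r + sigma^2/2)t]/(sigma*sqrt(t)) and d_2 = d_1 - sigma*sqrt(t); the helper names and parameter values are assumptions made for the example and are not taken from the patent.

```python
# Illustrative sketch (not the patent's code): call price from the
# Black-Scholes formula  C = S*N(d1) - K*exp(-r*t)*N(d2).
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_price(S: float, K: float, r: float, sigma: float, t: float) -> float:
    """S: spot price, K: strike, r: risk-free rate, sigma: volatility, t: years to expiry."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return S * norm_cdf(d1) - K * exp(-r * t) * norm_cdf(d2)

# Example values are assumptions chosen only to make the sketch runnable.
print(round(bs_call_price(S=100.0, K=105.0, r=0.02, sigma=0.25, t=0.5), 4))
```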
In one example, the financial asset pricing system based on the quantum recurrent neural network comprises a pre-classification processing unit, a data conversion unit, the quantum recurrent neural network QLSTM and an optimization unit which are connected in sequence;
the pre-classification processing unit is used for performing pre-classification processing on the financial data based on a quantum k-nearest-neighbor algorithm; the data conversion unit is used for converting the one-dimensional data obtained by the pre-classification processing into multi-dimensional tensor data; the quantum recurrent neural network comprises a feature extraction module and a classification module; the feature extraction module comprises a variational quantum circuit VQC, and the variational quantum circuit VQC comprises an encoding layer, a variational layer and a quantum measurement layer which are sequentially connected; the encoding layer is used for encoding the multidimensional tensor data to obtain quantum state data; the variational layer is used for performing unitary quantum operations on the quantum state data; the quantum measurement layer measures the expectation value of each qubit, each qubit being measured through a Pauli-Z operation, and a feature vector is obtained through a nonlinear excitation function and a hyperbolic tangent function; the classification module classifies based on the feature vectors to obtain a prediction result; the optimization unit is used for optimizing the prediction result based on the Bayesian model, and for reversely correcting the parameters of the quantum recurrent neural network based on the historical data and the optimization result to obtain the final prediction result. The historical data are a data set obtained from changes of historical financial data, with the target value corresponding to each datum determined.
According to the method, the collected financial data are pre-classified with the quantum k-nearest-neighbor algorithm, so that continuous financial time-series data are converted into bounded, discrete data with better repeatability, and the learning performance of the QLSTM after pre-classification is better. The financial data are processed by the quantum recurrent neural network; because qubits carry uncertainty and can be in superposition states, extremely-small-probability events that are ordinarily ignored can be brought into the learning range, improving prediction accuracy. Moreover, owing to coherent quantum superposition, the state information carried by qubits scales exponentially relative to classical bits, so the computation rate can be greatly increased to adapt to rapidly changing financial data and obtain more accurate predictions. Introducing Bayesian regularization further corrects the model parameters of the quantum recurrent neural network and alleviates its overfitting problem, thereby improving prediction accuracy.
In one example, as shown in FIG. 1, the feature extraction module is used for extracting financial data features and realizing data compression. It specifically comprises six variational quantum circuits VQC_1 to VQC_6, a forget gate f_t, an input gate i_t, a memory cell c_t, an output gate o_t and a hidden state h_t; the hidden state h_{t-1} at the previous moment and the input vector x_t are concatenated and input into the first to fourth variational quantum circuits VQC_1 to VQC_4, whose outputs are four vectors obtained from the measured values at the end of each variational quantum circuit, and forgetting and updating are determined through nonlinear activation functions. More specifically, the two operator symbols in FIG. 2 denote element-wise multiplication and element-wise addition respectively, and the specific calculation formulas of the feature extraction module are:

$$
\begin{aligned}
f_t &= \sigma(\mathrm{VQC}_1(v_t))\\
i_t &= \sigma(\mathrm{VQC}_2(v_t))\\
\tilde{C}_t &= \tanh(\mathrm{VQC}_3(v_t))\\
c_t &= f_t * c_{t-1} + i_t * \tilde{C}_t\\
o_t &= \sigma(\mathrm{VQC}_4(v_t))\\
h_t &= \mathrm{VQC}_5(o_t * \tanh(c_t))\\
y_t &= \mathrm{VQC}_6(o_t * \tanh(c_t))
\end{aligned}
$$

where $\tilde{C}_t$ represents the current cell state.
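To make the data flow of these formulas concrete, the following sketch implements one cell update in plain Python with NumPy. The six VQC blocks are replaced by small classical linear maps, an assumption made purely so the sketch runs without quantum hardware; in the patent each stand-in would be a variational quantum circuit evaluated on a simulator or quantum device, and the layer sizes here are arbitrary.

```python
# Hedged sketch of the QLSTM cell update defined above. The maps standing in for
# VQC_1..VQC_6 are ordinary linear layers (an assumption for illustration only).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 4                                                # illustrative sizes
W_gate = [rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for _ in range(4)]  # VQC_1..VQC_4
W_out = [rng.normal(scale=0.1, size=(n_hid, n_hid)) for _ in range(2)]          # VQC_5, VQC_6

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def qlstm_cell(x_t, h_prev, c_prev):
    v_t = np.concatenate([h_prev, x_t])               # v_t = [h_{t-1}, x_t]
    f_t = sigmoid(W_gate[0] @ v_t)                    # forget gate,  sigma(VQC_1(v_t))
    i_t = sigmoid(W_gate[1] @ v_t)                    # input gate,   sigma(VQC_2(v_t))
    C_tilde = np.tanh(W_gate[2] @ v_t)                # current cell state, tanh(VQC_3(v_t))
    c_t = f_t * c_prev + i_t * C_tilde                # memory cell update
    o_t = sigmoid(W_gate[3] @ v_t)                    # output gate,  sigma(VQC_4(v_t))
    h_t = W_out[0] @ (o_t * np.tanh(c_t))             # hidden state, VQC_5(o_t * tanh(c_t))
    y_t = W_out[1] @ (o_t * np.tanh(c_t))             # feature vector, VQC_6(o_t * tanh(c_t))
    return h_t, c_t, y_t

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(3, n_in)):                # toy three-step sequence
    h, c, y = qlstm_cell(x_t, h, c)
print(y)
```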
In one example, as shown in FIG. 2, the variational quantum circuit VQC includes an encoding layer (H, R_y and R_z gates), a variational layer and a quantum measurement layer connected in sequence.
Specifically, the classical data processed by the quantum circuit must first be encoded into quantum states by the encoding layer; the quantum state of a general N-qubit register can be expressed as:

$$|\psi\rangle=\sum_{(q_1,q_2,\ldots,q_N)\in\{0,1\}^N} c_{q_1,\ldots,q_N}\;|q_1\rangle\otimes|q_2\rangle\otimes\cdots\otimes|q_N\rangle$$

where $c_{q_1,\ldots,q_N}\in\mathbb{C}$ is the complex amplitude of each ground state formed by the qubits $q_i$, with $q_i\in\{0,1\}$; the squared magnitude $|c_{q_1,\ldots,q_N}|^2$ is the probability of measuring that ground state, and $\sum|c_{q_1,\ldots,q_N}|^2=1$.
the application converts the classical input vector into a rotation angle for guiding the rotation of a single quantum bit, and selects the arctan function so that the input value is not limited to between [ -1, 1], but in the whole real number R, which is also the range of the arctan. Further, the input vector of the present application has N dimensions, so that 2N rotation angles can be generated therefrom, and the first angle θ i,1 refers to rotation along the y-axis by applying Ry (θ i,1) gate. θ i,2 refers to rotation along the z-axis through the Rz (θ i,2) gate.
Specifically, the variational layer includes a plurality of quantum CNOT gates, which generate multi-qubit entanglement between each fixed pair of adjacent qubits, and single-qubit rotation gates whose three rotation angles {α_i, β_i, γ_i} about the x, y and z axes are not fixed in advance; the layer can be repeated several times to increase its learning depth, and the angles are updated by a gradient-descent method during iterative optimization. It should further be noted that the number of qubits and the number of measurements can be adjusted to suit different data processing scenarios (the number of dashed boxes in FIG. 3 can be varied to add different parameters, and how many are added depends on the simulation capability of the experimental computer).
Specifically, the end of each VQC is a quantum measurement layer, and the expectation value of each qubit is obtained by measurement in the computational basis. Using quantum simulation software such as Qiskit allows these values to be computed numerically on a classical computer; in theory, the results should be close to those obtained by simulation at the zero-noise limit.
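As a concrete illustration, the sketch below builds one such VQC for four qubits with Qiskit and returns the Pauli-Z expectation value of each qubit computed from the statevector. The circuit width, the arctan-based encoding angles and the single entangling-plus-rotation block are assumptions made for the example; the patent's circuit may repeat the variational block and use different parameters.

```python
# Hedged sketch of a VQC: encoding layer (H, Ry, Rz), variational layer (CNOTs +
# parameterised rotations) and Pauli-Z expectation values, simulated with Qiskit.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

def vqc_expectations(x, params):
    n = len(x)
    qc = QuantumCircuit(n)
    for i in range(n):                                  # encoding layer
        qc.h(i)
        qc.ry(np.arctan(x[i]), i)                       # theta_{i,1} (assumed arctan(x_i))
        qc.rz(np.arctan(x[i] ** 2), i)                  # theta_{i,2} (assumed arctan(x_i^2))
    for i in range(n - 1):                              # CNOT entanglement of adjacent qubits
        qc.cx(i, i + 1)
    for i in range(n):                                  # trainable rotations about x, y, z
        alpha, beta, gamma = params[i]
        qc.rx(alpha, i)
        qc.ry(beta, i)
        qc.rz(gamma, i)
    state = Statevector.from_instruction(qc)            # ideal (zero-noise) simulation
    expectations = []
    for i in range(n):                                  # <Z_i> for every qubit
        label = "I" * (n - 1 - i) + "Z" + "I" * i       # Qiskit Pauli strings are little-endian
        expectations.append(float(np.real(state.expectation_value(SparsePauliOp(label)))))
    return np.array(expectations)

x = np.array([0.3, -1.2, 0.8, 2.5])
params = np.random.default_rng(1).uniform(0, 2 * np.pi, size=(4, 3))
print(vqc_expectations(x, params))
```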
The application also includes a financial asset pricing method based on the quantum recurrent neural network, which shares the same inventive concept as the financial asset pricing system based on the quantum recurrent neural network. As shown in FIG. 3, the method comprises the following steps:
S1: performing pre-classification processing on financial data based on a quantum k nearest neighbor algorithm;
S2: converting one-dimensional data obtained by pre-classification processing into multi-dimensional tensor data;
S3: encoding the multidimensional tensor data to obtain quantum state data;
S4: performing unitary quantum operations on the quantum state data;
S5: measuring the expectation value of each qubit through a Pauli-Z operation, and obtaining a feature vector through a nonlinear excitation function and a hyperbolic tangent function;
S6: classifying based on the feature vectors to obtain a prediction result;
S7: optimizing the prediction result based on the Bayesian model, and reversely correcting the parameters of the quantum recurrent neural network based on the historical data and the optimization result to obtain the final prediction result.
In one example, in an actual asset pricing prediction problem the data typically change continuously over time, and the price fluctuation interval of an asset is unbounded, i.e. it runs from negative infinity to positive infinity. In complex practical situations the repeatability of the data to be learned is very low, whereas the input required by the QLSTM system is a bounded, discrete sequence. Therefore the quantum k-nearest-neighbor algorithm (QKNN) divides the rises and falls into different value intervals, so that the sequence becomes discrete and bounded, and the training data are pre-classified (different rise and fall ranges belong to different classes) to obtain a better learning sequence with stronger features, thereby improving the prediction performance of the QLSTM. Assume a training sample set S in which each sample belongs to one of the pre-assigned classes ω_1, ω_2 or ω_3. Pre-classification means correctly assigning the elements of the test sample set to their classes through the quantum k-nearest-neighbor algorithm, where N represents the total number of samples in the training sample set and θ is a threshold value. The pre-classification processing specifically comprises the following steps:
S11: creating an initial superposition state |ψ⟩ that contains all possible values;
S12: determining a threshold value θ of the K adjacent values based on quantum calculation;
S13: making the probability amplitude of the items satisfying the condition maximal while the probability amplitudes of the other items are reduced (because the probability amplitude of every possible item in the superposition state is affected by the number of iterations and is a periodic function of it, the threshold at which the item with maximum probability amplitude equals K must be found, otherwise the iteration is inaccurate), with the sum of the squares of all probability amplitudes always normalized, and performing Grover iterations on the superposition state, the number of iterations being ⌊(π/4)·√(N/K)⌋ (an integer), where N represents the total number of samples of the training sample set and K represents the set number of nearest neighbors;
S14: obtaining the predicted classification result of the quantum state data set. Specifically, the quantum system is measured at the end; the indeterminate state collapses to the definite state with the largest probability amplitude, and pre-classification of the data is realized from that state, i.e. the sample is assigned to whichever class the collapsed state belongs to, finally yielding the resulting sequence of class labels.
Further, the QKNN classification yields one-dimensional data, such as asset value changes over a time sequence, whereas the input of the LSTM system is generally a three-dimensional tensor, so the application reshapes the data (using tensor operations) into a three-dimensional tensor before it is input into the LSTM system.
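For illustration, the sketch below mimics the two data-preparation steps just described with classical stand-ins: continuous returns are binned into bounded, discrete classes, a classical k-nearest-neighbor classifier plays the role of the QKNN (the patent performs this step with a Grover-based quantum search), and the resulting one-dimensional class sequence is reshaped into a three-dimensional tensor. The bin edges, window length, class count and use of scikit-learn are assumptions made only for the example.

```python
# Hedged sketch: classical stand-in for QKNN pre-classification plus the reshape
# into a 3-D tensor (samples, time steps, features) for the (Q)LSTM.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + 0.01 * rng.standard_normal(500))    # toy price series
returns = np.diff(prices) / prices[:-1]                           # continuous, unbounded

# Discretize continuous returns into bounded classes (e.g. fall / flat / rise).
train_labels = np.digitize(returns, bins=[-0.005, 0.005])         # classes 0, 1, 2

K = 5
knn = KNeighborsClassifier(n_neighbors=K).fit(returns.reshape(-1, 1), train_labels)
pre_classified = knn.predict(returns.reshape(-1, 1))              # bounded, discrete 1-D sequence

# Grover iteration count used by the quantum version (assumed form floor(pi/4 * sqrt(N/K))).
N = returns.size
print("Grover iterations:", int(np.floor(np.pi / 4 * np.sqrt(N / K))))

# Reshape the 1-D class sequence into a 3-D tensor for the recurrent network.
window = 10
X = np.stack([pre_classified[i:i + window] for i in range(pre_classified.size - window)])
X = X[..., np.newaxis].astype(np.float32)
print(X.shape)                                                    # (489, 10, 1)
```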
In one example, the quantum state data expression is:

$$|\psi\rangle=\sum_{(q_1,q_2,\ldots,q_N)\in\{0,1\}^N} c_{q_1,\ldots,q_N}\;|q_1\rangle\otimes|q_2\rangle\otimes\cdots\otimes|q_N\rangle$$

where $c_{q_1,\ldots,q_N}\in\mathbb{C}$ is the complex amplitude of each ground state formed by the qubits $q_i$, with $q_i\in\{0,1\}$; the squared magnitude $|c_{q_1,\ldots,q_N}|^2$ is the probability of measuring that ground state, and $\sum|c_{q_1,\ldots,q_N}|^2=1$.
in an example, performing a unitary quantum operation on quantum state data specifically includes:
multiple quantum entanglement for each pair of fixedly adjacent 1 and 2 qubits based on CNOT gates, and rotation of gates { R } based on single qubiti=R(αiii) 3 rotation angles in the directions along the x, y and z axes { alpha }iiiAnd the method is not fixed in advance, and is updated in an iterative optimization process based on a gradient descent method.
In one example, the calculation formulas of the feature vector are:

$$
\begin{aligned}
f_t &= \sigma(\mathrm{VQC}_1(v_t))\\
i_t &= \sigma(\mathrm{VQC}_2(v_t))\\
\tilde{C}_t &= \tanh(\mathrm{VQC}_3(v_t))\\
c_t &= f_t * c_{t-1} + i_t * \tilde{C}_t\\
o_t &= \sigma(\mathrm{VQC}_4(v_t))\\
h_t &= \mathrm{VQC}_5(o_t * \tanh(c_t))\\
y_t &= \mathrm{VQC}_6(o_t * \tanh(c_t))
\end{aligned}
$$

where $f_t$ denotes the forget gate; $i_t$ the input gate; $\tilde{C}_t$ the current cell state; $c_t$ the memory cell; $o_t$ the output gate; $h_t$ the hidden state; $y_t$ the feature vector; $\sigma$ the nonlinear excitation function; $\tanh$ the hyperbolic tangent function; $v_t$ the concatenation of the hidden state $h_{t-1}$ at the previous time step with the input vector $x_t$; and $\mathrm{VQC}_i$ the $i$-th variational quantum circuit.
In one example, the quantum recurrent neural network can be further improved by optimizing the prediction result based on the Bayesian model. Specifically, the regularization term is analyzed first, assuming that the target output of the QLSTM is generated by the function:

$$t_q=g(p_q)+\varepsilon_q$$

where g(·) is an unknown function, which is also the objective function sought by the application, and ε_q is independently and randomly distributed noise; the optimization objective is a QLSTM that approximates the function g(·) as closely as possible while minimizing the influence of the noise.
In particular, the standard performance metric for neural network training is generally defined as the sum of squared errors on the training set, i.e.

$$E_D=\sum_q\,(t_q-a_q)^2$$

where a_q is the network output for the input whose target is t_q, and E_D is the sum of squared errors on the training data. A regularization term containing derivatives of the approximating function is added to this metric so that a smoother function is obtained; under certain conditions the regularization term can be written as the sum of the squares of the network weights:

$$E_W=\sum_j x_j^2$$

where E_W is the sum of the squares of the weights in the neural network. Regularization amounts to prior information, and from the point of view of the Bayesian model the optimization of the output result is a maximum a posteriori estimation, i.e. the distribution of the model parameters is deduced from the data on the assumption that such a distribution exists. The regularization term corresponds to the prior information contained in the posterior estimate, and if maximum likelihood estimation is applied to the maximum a posteriori estimate of the Bayesian model, the problem of optimizing the output result is converted into the form of optimizing a loss function plus a regularization term in the neural network. On this basis, Bayesian regularization is performed.
Specifically, assuming that the weights in the recurrent neural network are random variables, Bayes' rule for a given training data set D gives:

$$P(x\mid D,\alpha,\beta,M)=\frac{P(D\mid x,\beta,M)\,P(x\mid\alpha,M)}{P(D\mid\alpha,\beta,M)}$$

where x represents all weight values and biases contained in the quantum recurrent neural network; α and β are coefficients; M represents the chosen number of layers of the quantum recurrent neural network and the neurons of each layer; P(D|x,β,M) is the likelihood function; P(x|α,M) is the prior probability of x given α and M; and P(D|α,β,M) is the probability of the data D given α, β and M, acting as the normalization factor. Assuming that the noise is normally distributed, we obtain:

$$P(D\mid x,\beta,M)=\frac{1}{Z_D(\beta)}\exp(-\beta E_D)$$

This likelihood function describes the probability of the data set occurring for a particular set of network weights; the weights that maximize it are the maximum-likelihood weights. Rewriting the posterior probability density according to the above assumptions gives

$$P(x\mid D,\alpha,\beta,M)=\frac{1}{Z_F(\alpha,\beta)}\exp\big(-F(x)\big)$$

where Z_F(α,β) is a function of α and β, and F(x) = βE_D + αE_W is the defined regularization performance index. The problem can therefore be restated as: maximizing the above posterior density function is equivalent to minimizing the regularization performance index βE_D + αE_W. The ratio α/β is inversely proportional to the variance of the prior distribution of the network weights. If that variance is large, the values of the network weights are uncertain and may become very large; the ratio is then small and the regularization is weak, which allows the network weights to be larger and the network function to vary more. The larger the variance of the prior density of the network weights, the more the network function can vary.
If the parameters α and β are to be estimated by Bayesian analysis, Bayes' formula is needed again; since α and β are now the quantities being estimated, the probability density is rewritten as the conditional probability of α and β given the training set D and M:

$$P(\alpha,\beta\mid D,M)=\frac{P(D\mid\alpha,\beta,M)\,P(\alpha,\beta\mid M)}{P(D\mid M)}$$

It can then be concluded that

$$P(D\mid\alpha,\beta,M)=\frac{Z_F(\alpha,\beta)}{Z_D(\beta)\,Z_W(\alpha)}$$

i.e. the likelihood function at this stage is the normalization factor of the earlier posterior. The objective function is then Taylor-expanded around its minimum point (to quadratic form), with the expansion:
$$F(x)\approx F(x_{MP})+\tfrac{1}{2}(x-x_{MP})^{T}H^{MP}(x-x_{MP})$$

where x_{MP} is the minimum point and H is the Hessian matrix of F(x). Substituting the expansion into the probability density function yields:

$$P(x\mid D,\alpha,\beta,M)\approx\frac{1}{Z_F}\exp\!\Big(-F(x_{MP})-\tfrac{1}{2}(x-x_{MP})^{T}H^{MP}(x-x_{MP})\Big)$$

and the standard form of the corresponding normal distribution in the model is

$$P(x)=\frac{1}{(2\pi)^{n/2}\left|\det\!\big((H^{MP})^{-1}\big)\right|^{1/2}}\exp\!\Big(-\tfrac{1}{2}(x-x_{MP})^{T}H^{MP}(x-x_{MP})\Big)$$

Combining the two formulas gives:

$$Z_F(\alpha,\beta)\approx(2\pi)^{n/2}\big(\det H^{MP}\big)^{-1/2}\exp\big(-F(x_{MP})\big)$$

which is then substituted into the conditional probability density function of the training set, P(D|α,β,M) = Z_F(α,β)/(Z_D(β)·Z_W(α)), to obtain the quantity to be maximized with respect to α and β.
The optimal values of α and β (at the minimum point) obtained by maximum a posteriori estimation are:

$$\alpha^{MP}=\frac{\gamma}{2E_W(x_{MP})},\qquad \beta^{MP}=\frac{N-\gamma}{2E_D(x_{MP})}$$

where γ is the number of effective parameters, equal to n (the number of all network parameters) minus twice the product of the optimal value of α and the trace of the inverse of the Hessian matrix, i.e. γ = n - 2α^{MP}·tr((H^{MP})^{-1}), and N is the number of training targets. Solving for the coefficients α and β in this way realizes the optimization of the quantum recurrent neural network.
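Purely as a numerical illustration of the update implied by these formulas, the sketch below computes γ and the new α and β from a toy weight vector, error vector and positive-definite Hessian; the specific expression γ = n - 2α·tr((H^MP)^(-1)) and the use of the number of error terms in the β update follow the standard Bayesian-regularization literature and are assumptions rather than a verbatim reproduction of the patent's procedure.

```python
# Hedged sketch of one Bayesian-regularization update of alpha and beta.
# x_mp plays the role of the weight vector at the minimum point and H the
# Hessian of F = beta*E_D + alpha*E_W; all values below are toy data.
import numpy as np

def bayes_reg_update(alpha, x_mp, errors, H):
    n = x_mp.size
    E_W = float(x_mp @ x_mp)                                   # sum of squared weights
    E_D = float(errors @ errors)                               # sum of squared errors
    gamma = n - 2.0 * alpha * np.trace(np.linalg.inv(H))       # effective number of parameters
    alpha_new = gamma / (2.0 * E_W)
    beta_new = (errors.size - gamma) / (2.0 * E_D)
    return alpha_new, beta_new, gamma

rng = np.random.default_rng(0)
x_mp = rng.normal(size=8)                                      # toy "weights"
errors = rng.normal(size=50)                                   # toy training errors
A = rng.normal(size=(8, 8))
H = A @ A.T + 8.0 * np.eye(8)                                  # positive-definite toy Hessian
print(bayes_reg_update(alpha=0.01, x_mp=x_mp, errors=errors, H=H))
```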
The present application further includes a storage medium that shares the same inventive concept as the above examples and has stored thereon computer instructions which, when executed, perform the steps of the above financial asset pricing method based on the quantum recurrent neural network.
Based on such understanding, the technical solution of the present embodiment or parts of the technical solution may be essentially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above detailed description is for the purpose of describing the invention in detail, and it should not be construed that the detailed description is limited to the description, and it will be apparent to those skilled in the art that various modifications and substitutions can be made without departing from the spirit of the invention.

Claims (10)

1. A financial asset pricing system based on a quantum recurrent neural network, characterized in that: the system comprises a pre-classification processing unit, a data conversion unit, a quantum recurrent neural network and an optimization unit which are connected in sequence;
The pre-classification processing unit is used for performing pre-classification processing on the financial data based on a quantum k nearest neighbor algorithm;
the data conversion unit is used for converting one-dimensional data obtained by pre-classification processing into multi-dimensional tensor data;
the quantum recurrent neural network comprises a feature extraction module and a classification module;
the feature extraction module comprises a variational quantum circuit VQC, and the variational quantum circuit VQC comprises an encoding layer, a variational layer and a quantum measurement layer which are sequentially connected; the encoding layer is used for encoding the multidimensional tensor data to obtain quantum state data; the variational layer is used for performing unitary quantum operations on the quantum state data; the quantum measurement layer measures the expectation value of each qubit, each qubit being measured through a Pauli-Z operation, and a feature vector is obtained through a nonlinear excitation function and a hyperbolic tangent function;
the classification module classifies based on the feature vectors to obtain a prediction result;
the optimization unit is used for optimizing the prediction result based on the Bayesian model, and for reversely correcting the parameters of the quantum recurrent neural network based on the historical data and the optimization result to obtain the final prediction result.
2. The quantum-recurrent-neural-network-based financial asset pricing system of claim 1, wherein: the feature extraction module specifically comprises six variational quantum circuits VQC_1 to VQC_6, a forget gate f_t, an input gate i_t, a memory cell c_t, an output gate o_t and a hidden state h_t; the hidden state h_{t-1} at the previous moment and the input vector x_t are concatenated and input into the first to fourth variational quantum circuits VQC_1 to VQC_4, whose outputs are four vectors obtained from the measured values at the end of each variational quantum circuit, and forgetting and updating are determined through nonlinear activation functions.
3. The quantum-recurrent-neural-network-based financial asset pricing system of claim 1, wherein: the variational layer comprises a plurality of quantum CNOT gates and single-qubit rotation gates; the CNOT gates generate multi-qubit entanglement between each fixed pair of adjacent qubits; the single-qubit rotation gates use three rotation angles {α_i, β_i, γ_i} about the x, y and z axes which are not fixed in advance but are updated by a gradient-descent method during iterative optimization.
4. A financial asset pricing method based on a quantum recurrent neural network, characterized by comprising the following steps:
performing pre-classification processing on the financial data based on a quantum k nearest neighbor algorithm;
converting one-dimensional data obtained by pre-classification processing into multi-dimensional tensor data;
encoding the multidimensional tensor data to obtain quantum state data;
Performing unitary quantum operation on the quantum state data;
measuring the expectation value of each qubit through a Pauli-Z operation, and obtaining a feature vector through a nonlinear excitation function and a hyperbolic tangent function;
classifying based on the feature vectors to obtain a prediction result;
and optimizing the prediction result based on the Bayesian model, and reversely correcting the parameters of the quantum recurrent neural network based on the historical data and the optimization result to obtain the final prediction result.
5. The quantum-recurrent-neural-network-based financial asset pricing method of claim 4, wherein: the pre-classification processing specifically includes:
establishing an initial superposition state |ψ⟩ based on the target value range corresponding to the data to be classified;
determining a threshold value θ of the K adjacent values based on quantum calculation;
making the probability amplitude of the items satisfying the condition maximal while the probability amplitudes of the other items are reduced, with the sum of the squares of all probability amplitudes always normalized, and performing Grover iterations on the superposition state, where the number of iterations is ⌊(π/4)·√(N/K)⌋, N represents the total number of samples of the training sample set, and K represents the set number of nearest neighbors;
and obtaining the predicted classification result of the quantum state data set.
6. The quantum-recurrent-neural-network-based financial asset pricing method of claim 4, wherein: the quantum state data expression is as follows:
$$|\psi\rangle=\sum_{(q_1,q_2,\ldots,q_N)\in\{0,1\}^N} c_{q_1,\ldots,q_N}\;|q_1\rangle\otimes|q_2\rangle\otimes\cdots\otimes|q_N\rangle$$

where $c_{q_1,\ldots,q_N}\in\mathbb{C}$ is the complex amplitude of each ground state formed by the qubits $q_i$, with $q_i\in\{0,1\}$; the squared magnitude $|c_{q_1,\ldots,q_N}|^2$ is the probability of measuring that ground state, and $\sum|c_{q_1,\ldots,q_N}|^2=1$.
7. The quantum-recurrent-neural-network-based financial asset pricing method of claim 4, wherein performing the unitary quantum operation on the quantum state data specifically comprises: generating multi-qubit entanglement between each fixed pair of adjacent qubits using CNOT gates, and applying single-qubit rotation gates $R_i=R(\alpha_i,\beta_i,\gamma_i)$ whose three rotation angles $\{\alpha_i,\beta_i,\gamma_i\}$ about the x, y and z axes are not fixed in advance but are updated by a gradient-descent method during iterative optimization.
8. The quantum-recurrent-neural-network-based financial asset pricing method of claim 4, wherein: the calculation formula for obtaining the feature vector is as follows:
$$
\begin{aligned}
f_t &= \sigma(\mathrm{VQC}_1(v_t))\\
i_t &= \sigma(\mathrm{VQC}_2(v_t))\\
\tilde{C}_t &= \tanh(\mathrm{VQC}_3(v_t))\\
c_t &= f_t * c_{t-1} + i_t * \tilde{C}_t\\
o_t &= \sigma(\mathrm{VQC}_4(v_t))\\
h_t &= \mathrm{VQC}_5(o_t * \tanh(c_t))\\
y_t &= \mathrm{VQC}_6(o_t * \tanh(c_t))
\end{aligned}
$$

where $f_t$ denotes the forget gate; $i_t$ the input gate; $\tilde{C}_t$ the current cell state; $c_t$ the memory cell; $o_t$ the output gate; $h_t$ the hidden state; $y_t$ the feature vector; $\sigma$ the nonlinear excitation function; $\tanh$ the hyperbolic tangent function; $v_t$ the concatenation of the hidden state $h_{t-1}$ at the previous time step with the input vector $x_t$; and $\mathrm{VQC}_i$ the $i$-th variational quantum circuit.
9. The quantum-recurrent-neural-network-based financial asset pricing method of claim 4, wherein: the optimizing the prediction result based on the Bayesian model specifically comprises:
carrying out Bayesian optimization on a training data set D corresponding to the prediction result, taking the weights of the quantum recurrent neural network as random variables, to obtain the posterior probability density P(x|D,α,β,M):

$$P(x\mid D,\alpha,\beta,M)=\frac{P(D\mid x,\beta,M)\,P(x\mid\alpha,M)}{P(D\mid\alpha,\beta,M)}$$

where x represents all weight values and biases contained in the quantum recurrent neural network; α and β are coefficients; M represents the chosen number of layers of the quantum recurrent neural network and the neurons of each layer; P(D|x,β,M) is the likelihood function; P(x|α,M) is the prior probability of x given α and M; and P(D|α,β,M) is the probability of the data D given α, β and M, acting as the normalization factor;

letting the noise follow a standard normal distribution, and updating the posterior probability density according to the likelihood function to obtain:

$$P(x\mid D,\alpha,\beta,M)=\frac{1}{Z_F(\alpha,\beta)}\exp\big(-F(x)\big)$$

where Z_F(α,β) is a function of α and β, and F(x) is the defined regularization performance index;

and calculating, based on the probability density after Bayesian analysis, the coefficients α and β corresponding to the quantum recurrent neural network with optimal performance, thereby realizing the optimization of the quantum recurrent neural network.
10. A storage medium having stored thereon computer instructions, characterized in that: the computer instructions, when executed, perform the steps of the quantum-recurrent-neural-network-based financial asset pricing method of any of claims 4-9.
CN202210094394.6A 2022-01-26 2022-01-26 Financial asset pricing method, system and storage medium based on quantum recurrent neural network Pending CN114565021A (en)

Priority Applications (1)

CN202210094394.6A (published as CN114565021A), priority date 2022-01-26, filing date 2022-01-26: Financial asset pricing method, system and storage medium based on quantum recurrent neural network

Publications (1)

CN114565021A, published 2022-05-31

Family

ID=81713700

Family Applications (1)

CN202210094394.6A (pending), priority and filing date 2022-01-26: Financial asset pricing method, system and storage medium based on quantum recurrent neural network

Country Status (1)

Country Link
CN (1) CN114565021A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115144934A (en) * 2022-06-29 2022-10-04 合肥本源量子计算科技有限责任公司 Weather prediction method based on variational quantum line and related equipment
CN115144934B (en) * 2022-06-29 2023-11-03 合肥本源量子计算科技有限责任公司 Meteorological prediction method based on variable component sub-line and related equipment
CN115147167A (en) * 2022-09-01 2022-10-04 合肥本源量子计算科技有限责任公司 Snowball option quantum estimation method, snowball option quantum estimation device, snowball option quantum estimation medium, and electronic device
CN115907019A (en) * 2023-01-09 2023-04-04 苏州浪潮智能科技有限公司 Quantum computer, quantum network and time sequence data prediction method
CN115907019B (en) * 2023-01-09 2023-11-07 苏州浪潮智能科技有限公司 Quantum computer for weather prediction


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination