CN110674921A - Method for constructing quantum feedforward neural network based on classical training - Google Patents

Method for constructing quantum feedforward neural network based on classical training

Info

Publication number
CN110674921A
CN110674921A (application CN201910628555.3A)
Authority
CN
China
Prior art keywords
quantum
neural network
neuron
feedforward neural
classical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910628555.3A
Other languages
Chinese (zh)
Inventor
Guo Guoping (郭国平)
Zhao Jian (赵健)
Wu Yuchun (吴玉椿)
Guo Guangcan (郭光灿)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201910628555.3A priority Critical patent/CN110674921A/en
Publication of CN110674921A publication Critical patent/CN110674921A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Feedback Control In General (AREA)

Abstract

The present disclosure provides a method for constructing a quantum feedforward neural network based on classical training, comprising: step 1: giving a clear definition of the quantum neuron; step 2: selecting a specific activation function and representing the quantum neuron model with a quantum circuit; step 3: based on the quantum neuron model of step 2, proposing a quantum feedforward neural network model; and step 4: providing a classical training method and quantitatively analyzing its effectiveness, thereby completing the construction of a quantum feedforward neural network based on classical training. The method alleviates several technical problems of the prior art: the lack of a uniform, explicit definition of the quantum neural network; the absence of a quantum neural network model in which the input, output and weights are simultaneously quantum states; the lack of a concrete quantum circuit realization of the activation function; the lack of extensibility of quantum neural network models; and the lack of theoretical analysis of the effectiveness of the training process in quantum neural networks.

Description

Method for constructing quantum feedforward neural network based on classical training
Technical Field
The invention relates to the technical field of quantum computation and neural networks, in particular to a method for constructing a quantum feedforward neural network based on classical training.
Background
The artificial neural network dates back to the McCulloch-Pitts (M-P) neuron model proposed in 1943. Rosenblatt added a training process to the M-P neuron and thereby proposed the perceptron model. By now, the artificial neural network not only has a solid theoretical foundation but also plays an important role in practical applications, covering pattern recognition, classification problems, multivariate data analysis, and other fields.
The idea of the quantum neural network was first proposed by Kak in 1995 as a model combining the classical artificial neural network with quantum computing. Many quantum neural network models have since been developed: some are classical neural networks that use quantum computation to realize potential speedups; some are described entirely in terms of actual physical devices; some are quantum perceptron models; in some models the inputs and outputs of the quantum neurons are quantum states and the network is built by taking the output of a quantum neuron as the input of the neurons in the next layer; some models have no training process; and some have no concrete training procedure, only abstract mathematical expressions.
Disclosure of Invention
Technical problem to be solved
Based on the above problems, the present disclosure provides a method for constructing a quantum feedforward neural network based on classical training, so as to alleviate the following technical problems of the prior art: the explicit definition of the quantum neural network is not uniform; no quantum neural network model has input, output and weights that are all quantum states at the same time; the realization of the activation function has no concrete quantum circuit representation; quantum neural network models lack extensibility; and there is a lack of theoretical analysis of the effectiveness of the training process in quantum neural networks.
(II) technical scheme
The present disclosure provides a method for constructing a quantum feedforward neural network based on classical training, comprising:
Step 1: giving a clear definition of the quantum neuron;
Step 2: selecting a specific activation function and representing the quantum neuron model with a quantum circuit;
Step 3: based on the quantum neuron model of step 2, proposing a quantum feedforward neural network model; and
Step 4: providing a classical training method and quantitatively analyzing its effectiveness, thereby completing the construction of the quantum feedforward neural network based on classical training.
In an embodiment of the present disclosure, in step 1, the map $F$ is defined as an $n$-variable quantum neuron, expressed as follows:

$$F:\ \mathbb{C}^{2^{n}}\rightarrow\mathbb{C}^{2},\qquad F(|x\rangle)=f(\langle x|w\rangle),$$

where $f(\langle x|w\rangle)$ is the output of the quantum neuron, the quantum state $|x\rangle$ is the input of the quantum neuron, $f$ is the activation function, $|w\rangle=|w_{1}\rangle\otimes\cdots\otimes|w_{n}\rangle$ denotes the direct-product state of $n$ particles, $x$ and $w$ denote column vectors, and $\mathbb{C}^{2^{n}}$ denotes the $2^{n}$-dimensional Hilbert space over the complex field.
In the disclosed embodiments, the output of the quantum neuron is also taken as the state of the quantum neuron.
In an embodiment of the present disclosure, the activation function $f$ maps the inner product of the input and the weight (a complex scalar) to a single-particle quantum state in $\mathbb{C}^{2}$; its explicit expression is written in terms of a matrix and a column vector.
In the disclosed embodiment, in step 2, the input of a given neuron is given
Figure BDA0002126956380000027
And weight
Figure BDA0002126956380000028
Then a is the inner product of the input and the weight<x|w>(ii) a Re alpha and Im alpha respectively represent the real part and the imaginary part of alpha, and ideally, the inverse cosine value of the real part of a and the inverse cosine value of the imaginary part of a are both 2 pi/2tInteger multiples of (d), i.e.:andcan be accurately represented as t-bit fractional numbers of binary system, respectively in | phir>,|φi>In the initial state, the corresponding pair Gr,GiPerforming phase estimation to obtain real part information of a and imaginary part information of a; order to
Figure BDA00021269563800000211
Figure BDA00021269563800000212
Performing quantum Fourier transform;
Figure BDA0002126956380000031
then introduce an auxiliary bit |0>R is performed by controlled rotationY(arccos-Rea) and RZ(arccos-Ima) transformation, then performing
Figure BDA0002126956380000032
The transformation of (1); thus, ideally, a specific activation function f of a quantum neuron is given0Namely:
Figure BDA0002126956380000034
in the disclosed embodiment, the activation function is chosen as f0Output of time quantum neuron | d>Has the display expression:
Figure BDA0002126956380000035
in the embodiment of the present disclosure, in step 2, in the case of non-ideal conditions, t bits in the first register are measured respectively to obtain the quantum neuron output
Figure BDA0002126956380000036
The state of the quantum neuron is random, and according to the quantum phase estimation method, the following steps can be stated: on the premise of t determination and given success rate 1-sigma in quantum circuit, obtaining
Figure BDA0002126956380000037
And | d>Distance closeness of (2):
order to
Figure BDA0002126956380000038
Then
Figure BDA0002126956380000039
In the embodiment of the present disclosure, in step 3, the output of the quantum neuron is used as the input of the next quantum neuron, and a quantum feedforward neural network model is constructed in a classical feedforward neural network manner.
In an embodiment of the present disclosure, in step 4, given the scale of a $K$-layer quantum feedforward neural network, let the number of quantum neurons in the $k$-th layer be $p_{k}$, $k=1,\ldots,K$, with at most $p$ quantum neurons in each layer. For a given error margin $\epsilon>0$, the circuit parameters are chosen according to the corresponding formula; then, with success probability $1-\sigma$, the error of the output state satisfies the bound given by the corresponding formula, where $\epsilon$ is the error margin.
(III) advantageous effects
From the above technical scheme, it can be seen that the method for constructing a quantum feedforward neural network based on classical training of the present disclosure has at least some of the following beneficial effects:
(1) a definition of the quantum neuron is given, and the quantum feedforward neural network is formed from these quantum neurons;
(2) the quantum circuit is concrete and explicit;
(3) the construction is extensible: a quantum circuit scheme is given for a quantum feedforward neural network of any scale;
(4) the effectiveness of the classical training is quantitatively analyzed.
Drawings
Fig. 1 is a schematic flow chart of a method for constructing a quantum feedforward neural network based on classical training according to an embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of a quantum neuron according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of the quantum circuit structure of a quantum neuron in the ideal case.
Fig. 4 is a schematic diagram of the quantum circuit structure of a quantum neuron in the general (non-ideal) case.
Fig. 5 is a schematic structural diagram of a quantum feedforward neural network according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a quantum feedforward neural network and a corresponding quantum circuit structure according to an embodiment of the disclosure.
Fig. 7 is a schematic diagram of the detailed structure of one of the quantum gates in Fig. 6 according to an embodiment of the present disclosure.
Detailed Description
The present disclosure provides a method for constructing a quantum feedforward neural network based on classical training, which provides a definition of quantum neurons, and the quantum feedforward neural network is formed by the quantum neurons. The input, the output and the weight of each neuron of the quantum feedforward neural network are quantum states, and the activation function of each neuron is realized by a specific quantum circuit. The quantum feedforward neural network has extensibility, and a quantum circuit scheme of the quantum feedforward neural network with any scale is provided. The quantum feedforward neural network is trained in a classical training mode, and the effectiveness of the classical training is quantitatively analyzed.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
The description herein includes the definitions, formulas and meanings of the mathematical symbols that appear in the quantum circuit diagrams. In the formulas, capital letters $A, B,\ldots$ denote matrices, lowercase letters $x, y,\ldots$ denote column vectors, and Greek letters $\alpha, \beta,\ldots$ denote scalars. For a scalar $\alpha$, $\mathrm{Re}\,\alpha$ and $\mathrm{Im}\,\alpha$ denote its real and imaginary parts, respectively. Given a column vector $x$, $x^{T}$ denotes its transpose and $x^{\dagger}$ its conjugate transpose; the same applies to a given matrix. $\mathbb{R}^{2^{n}}$ denotes the $2^{n}$-dimensional Hilbert space over the real field, and $\mathbb{C}^{2^{n}}$ the $2^{n}$-dimensional Hilbert space over the complex field. When we write a quantum state $|x\rangle\in\mathbb{C}^{2^{n}}$, $|x\rangle$ is understood to be a normalized vector. Further notation is introduced in the corresponding formulas where needed.
In an embodiment of the present disclosure, a method for constructing a quantum feedforward neural network based on classical training is provided, and as shown in fig. 1, the method for constructing a quantum feedforward neural network based on classical training includes the following steps:
step 1: giving a clear definition of quantum neurons;
in the disclosed embodiments, a definition of a quantum neuron is given: order to
Figure BDA0002126956380000055
Representing the quantum state of n particles
Figure BDA0002126956380000056
Representing the direct product state of the n particles. Note the bookThere is one mapping f:
Figure BDA0002126956380000058
this defines the mapping F as an n-variable quantum neuron:
Figure BDA0002126956380000059
Figure BDA00021269563800000511
in a quantum neuron, the quantum state | x>Input, called quantum neuron, quantum state | wi>Called weight, f is an activation function, f: (<x|w>) Is the output of the quantum neuron, while taking the output of the quantum neuron as the state of the quantum neuron, as shown in fig. 2, generally the input state is allowed to be an entangled state.
The input and the weight are quantum states; the activation function maps the inner product of the input and the weight to a single-particle quantum state, which is called the output of the quantum neuron with respect to that activation function, and also the state of the quantum neuron. Under this definition, a quantum neuron maps an input quantum state to an output quantum state, i.e. it is a vector-valued function from one Hilbert space to another.
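To make this definition concrete, the following is a minimal classical emulation in Python; it is an illustrative sketch, not part of the original disclosure. States are stored as normalized complex vectors, and the names `ket`, `quantum_neuron`, and the toy activation function are assumptions introduced only for this sketch (the disclosure's specific activation $f_{0}$ appears in step 2).

```python
import numpy as np

def ket(*amplitudes):
    """Return a normalized state vector for the given amplitudes."""
    v = np.array(amplitudes, dtype=complex)
    return v / np.linalg.norm(v)

def quantum_neuron(x, w, activation):
    """n-variable quantum neuron F(|x>) = f(<x|w>).

    x, w       -- normalized vectors in C^(2^n): the input state and the weight state
    activation -- a map from the inner product <x|w> to a single-qubit state
    """
    a = np.vdot(x, w)      # inner product <x|w>
    return activation(a)   # single-particle output state in C^2

if __name__ == "__main__":
    # Toy example: 2-qubit input |0>|+> and product weight |+>|+>.
    x = np.kron(ket(1, 0), ket(1, 1))
    w = np.kron(ket(1, 1), ket(1, 1))
    # Purely illustrative activation; the disclosure's choice f0 is treated in step 2.
    toy_f = lambda a: ket(np.sqrt(1 - min(abs(a) ** 2, 1.0)), abs(a))
    print(quantum_neuron(x, w, toy_f))
```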
Step 2: selecting an activation function and representing a model of the quantum neuron by using a quantum circuit;
input of a given neuron
Figure BDA0002126956380000061
And weight
Figure BDA0002126956380000062
Let a be the inner product of the input and the weight<x|w>。
In the disclosed embodiment, a quantum wire representation of a quantum neuron in an ideal case is given, as shown in fig. 3. When in an ideal case, the ideal case means that the inverse cosine value of the real part of a and the inverse cosine value of the imaginary part of a are both 2 pi/2tInteger multiples of (d), i.e.:andcan be accurately expressed as t-bit decimal numbers in binary system.
Taking $|\phi_{r}\rangle$ and $|\phi_{i}\rangle$ as the respective initial states, phase estimation is performed on the corresponding operators $G_{r}$ and $G_{i}$ to obtain the real-part information and the imaginary-part information of $a$, and a quantum Fourier transform is performed (FT denotes the quantum Fourier transform). An auxiliary qubit $|0\rangle$ is then introduced and, by controlled rotation, the $R_{Y}(\arccos\mathrm{Re}\,a)$ and $R_{Z}(\arccos\mathrm{Im}\,a)$ transformations are applied, followed by the transformation shown in the corresponding part of Fig. 3. Thus, in the ideal case shown in Fig. 3, a specific activation function $f_{0}$ of the quantum neuron is given, namely the map realized by this quantum circuit.
the activation function is chosen to be f0Output of time quantum neuron | d>Has the display expression:
Figure BDA00021269563800000612
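The sketch below emulates the ideal-case activation $f_{0}$ classically, under the assumption, suggested by the circuit description above, that the output qubit is obtained by applying $R_{Y}(\arccos\mathrm{Re}\,a)$ and then $R_{Z}(\arccos\mathrm{Im}\,a)$ to $|0\rangle$; the phase-estimation and Fourier-transform stages, whose role is to place those angles in the control registers, are not simulated. The function name `f0` and the clipping of the inputs are choices made only for this sketch.

```python
import numpy as np

def R_Y(theta):
    """Single-qubit rotation about the Y axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def R_Z(theta):
    """Single-qubit rotation about the Z axis."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]], dtype=complex)

def f0(a):
    """Ideal-case activation: rotate |0> by R_Y(arccos Re a), then R_Z(arccos Im a).

    The phase-estimation registers are assumed to hold the two angles exactly,
    so only the final rotations on the auxiliary qubit are emulated here.
    """
    theta_r = np.arccos(np.clip(a.real, -1.0, 1.0))
    theta_i = np.arccos(np.clip(a.imag, -1.0, 1.0))
    ket0 = np.array([1.0, 0.0], dtype=complex)
    return R_Z(theta_i) @ R_Y(theta_r) @ ket0

if __name__ == "__main__":
    a = (1 + 1j) / 2          # a hypothetical inner product <x|w>
    print(f0(a))              # output state |d> of the quantum neuron
```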
in the embodiments of the present disclosure, when a general case is referred to, the general case refers to a non-ideal case. As shown in FIG. 4, in a general case, the construction method is to measure t bits in the first register of two circuit phase estimates respectively on the basis of an ideal case to obtain the quantum neuron output
Figure BDA00021269563800000615
The state of the quantum neuron is random, and according to the quantum phase estimation method, the following steps can be stated: on the premise of t determination and given success rate 1-sigma in quantum circuit, obtaining
Figure BDA00021269563800000614
And | d>Distance closeness of (2):
order toThen
Figure BDA0002126956380000072
During the measurement process, the measurement results need not be recorded or stored. By introducing auxiliary qubits, the quantum neuron allows identical output states at a plurality of ports, and these output states are the basis for constructing the quantum feedforward neural network.
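To illustrate the closeness statement for the general case, the sketch below assumes, as an idealization of quantum phase estimation (which returns such a result only with high probability), that measuring the $t$-qubit register yields the best $t$-bit binary approximation of $\arccos(\mathrm{Re}\,a)/2\pi$, and likewise for the imaginary part; it then compares the resulting $|\tilde{d}\rangle$ with the ideal $|d\rangle$ for increasing $t$. The function name `neuron_output` and the chosen value of $a$ are illustrative.

```python
import numpy as np

def R_Y(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def R_Z(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]], dtype=complex)

def neuron_output(a, t=None):
    """Return |d> (t is None: ideal case) or |d~> (t-qubit register: general case)."""
    theta_r = np.arccos(np.clip(a.real, -1.0, 1.0))
    theta_i = np.arccos(np.clip(a.imag, -1.0, 1.0))
    if t is not None:
        # Idealized measurement outcome: the best t-bit binary fraction of theta/(2*pi).
        theta_r = 2 * np.pi * round(theta_r / (2 * np.pi) * 2 ** t) / 2 ** t
        theta_i = 2 * np.pi * round(theta_i / (2 * np.pi) * 2 ** t) / 2 ** t
    ket0 = np.array([1.0, 0.0], dtype=complex)
    return R_Z(theta_i) @ R_Y(theta_r) @ ket0

if __name__ == "__main__":
    a = 0.3 + 0.4j                                 # a hypothetical inner product
    d_ideal = neuron_output(a)
    for t in (3, 5, 8):
        d_approx = neuron_output(a, t)
        # The overlap |<d|d~>| approaches 1 as the register size t grows.
        print(t, abs(np.vdot(d_ideal, d_approx)))
```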
Step 3: based on the quantum neuron model provided in step 2, a quantum feedforward neural network model is proposed.
in the embodiment of the present disclosure, in the step 3, as shown in fig. 5, the input of the quantum feedforward neural network is
Figure BDA0002126956380000073
For convenience of description, as shown in fig. 5, the input is a direct product state | x>=|x1,...,xn>. Here, it is stated that | x>And the layer 0 of the quantum feedforward neural network is formed. In the illustration, each node represents a neuron and each line represents a corresponding weight. Assume that the diagram is a K-layer quantum feedforward neural network, which includes (K-1) hidden layers and an output layer, where the output layer includes s quantum neurons.
Following the way neurons are connected in a classical feedforward neural network, the quantum circuit of the quantum feedforward neural network is generated with the quantum circuit of the quantum neuron as its basic module. Fig. 6 shows a quantum feedforward neural network of a specific scale together with the corresponding quantum circuit representation, which demonstrates that the quantum feedforward neural network model has an explicit quantum circuit representation. Fig. 7 shows the specific form of one of the quantum gates in the quantum circuit of Fig. 6; the structures of the remaining gates are obtained by analogy.
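The following sketch emulates, classically and in the ideal case, the layer-by-layer construction just described. It assumes, for the purpose of the emulation only, that each layer's joint input is the tensor product of the previous layer's single-qubit outputs and that every neuron carries its own weight state; the names `layer` and `feedforward` and the example network sizes are not taken from the disclosure.

```python
import numpy as np
from functools import reduce

def R_Y(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def R_Z(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]], dtype=complex)

def f0(a):
    """Ideal-case activation: map the inner product a to a single-qubit state."""
    theta_r = np.arccos(np.clip(a.real, -1.0, 1.0))
    theta_i = np.arccos(np.clip(a.imag, -1.0, 1.0))
    return R_Z(theta_i) @ R_Y(theta_r) @ np.array([1.0, 0.0], dtype=complex)

def layer(state, weights):
    """Apply one layer: `weights` holds one weight state per neuron in the layer."""
    outputs = [f0(np.vdot(state, w)) for w in weights]
    return reduce(np.kron, outputs)          # joint state of the output qubits

def feedforward(x, network):
    """`network` is a list of layers; each layer is a list of weight states."""
    state = x
    for weights in network:
        state = layer(state, weights)
    return state

if __name__ == "__main__":
    plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
    zero = np.array([1.0, 0.0], dtype=complex)
    x = np.kron(plus, plus)                              # 2-qubit input, layer 0
    w_2q = np.kron(plus, zero)                           # a 2-qubit product weight
    network = [[w_2q, w_2q, w_2q],                       # hidden layer: 3 neurons on 2 qubits
               [np.kron(np.kron(plus, plus), plus)]]     # output layer: 1 neuron on 3 qubits
    print(feedforward(x, network))
```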
Step 4: a classical training method is provided, and its effectiveness is quantitatively analyzed, thereby completing the construction of the quantum feedforward neural network based on classical training.
Since the output of the quantum feedforward neural network in the general case is random, a classical training method is proposed here: training is still performed according to the output in the ideal case. By applying the quantum phase estimation analysis layer by layer, the accumulated error (the error being the distance between the output-layer quantum state in the ideal case and that in the general case) can be obtained by iteration. With a suitable choice of parameters, the size of this error can be controlled quantitatively, from which the effectiveness of the classical training method is judged.
The conclusion of this quantitative analysis is as follows. Consider a $K$-layer quantum feedforward neural network of given scale, as shown in Fig. 5. Suppose the number of quantum neurons in the $k$-th layer is $p_{k}$, $k=1,\ldots,K$, and the number of quantum neurons in each layer is at most $p$. For a given error margin $\epsilon>0$, the circuit parameters are chosen according to the corresponding formula; then, with success probability $1-\sigma$, the error of the output state satisfies the bound given by the corresponding formula, where $\epsilon$ is the error margin, a quantity greater than 0.
This quantitative effectiveness analysis of the classical training method is what guarantees that the quantum feedforward neural network can be constructed experimentally. Specifically, once the parameters of the quantum feedforward neural network (its scale, the error of the output state, and the success probability) are fixed, the training method yields a definite requirement on the number of qubits of the quantum circuit.
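To make the classical training idea concrete, the sketch below trains the ideal-case emulation of a single quantum neuron by ordinary classical optimization. The disclosure does not fix a loss function or an optimizer; the choices here (weight qubits parameterized by two angles each, an infidelity loss against target single-qubit states, and finite-difference gradient descent) are illustrative assumptions only.

```python
import numpy as np
from functools import reduce

def R_Y(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def R_Z(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]], dtype=complex)

def f0(a):
    """Ideal-case activation of the neuron (classical emulation)."""
    tr = np.arccos(np.clip(a.real, -1.0, 1.0))
    ti = np.arccos(np.clip(a.imag, -1.0, 1.0))
    return R_Z(ti) @ R_Y(tr) @ np.array([1.0, 0.0], dtype=complex)

def weight_state(angles):
    """Product weight state from per-qubit angle pairs [(theta, phi), ...]."""
    qubits = [np.array([np.cos(t / 2), np.exp(1j * p) * np.sin(t / 2)])
              for t, p in angles]
    return reduce(np.kron, qubits)

def loss(params, data):
    """Mean infidelity of the ideal-case neuron output against the target states."""
    w = weight_state(params.reshape(-1, 2))
    return np.mean([1 - abs(np.vdot(target, f0(np.vdot(x, w)))) ** 2
                    for x, target in data])

if __name__ == "__main__":
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    # Toy training set: 2-qubit inputs paired with target single-qubit outputs.
    data = [(np.kron(zero, zero), zero), (np.kron(plus, plus), one)]
    params = np.random.uniform(0, np.pi, 4)      # 2 weight qubits, 2 angles each
    lr, eps = 0.5, 1e-5
    for step in range(200):                      # finite-difference gradient descent
        grad = np.zeros_like(params)
        for i in range(len(params)):
            shift = np.zeros_like(params)
            shift[i] = eps
            grad[i] = (loss(params + shift, data) - loss(params - shift, data)) / (2 * eps)
        params -= lr * grad
    print("final loss:", loss(params, data))
```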
The embodiments of the present disclosure have now been described in detail with reference to the accompanying drawings. It should be noted that implementations not shown or described in the drawings or in the text are all forms known to those of ordinary skill in the art and are not described in detail. Further, the above definitions of the various elements and methods are not limited to the specific structures, shapes or arrangements mentioned in the embodiments, which may be readily modified or substituted by those of ordinary skill in the art.
From the above description, those skilled in the art should have clear understanding of the method of constructing a quantum feedforward neural network based on classical training in the present disclosure.
In summary, the present disclosure provides a method for constructing a quantum feedforward neural network based on classical training, which provides a definition of quantum neurons from which the quantum feedforward neural network is constructed. The input, the output and the weight of each neuron of the quantum feedforward neural network are quantum states, and the activation function of each neuron is realized by a specific quantum circuit. The quantum feedforward neural network has extensibility, and a quantum circuit scheme of the quantum feedforward neural network with any scale is provided. The quantum feedforward neural network is trained in a classical training mode, and the effectiveness of the classical training is quantitatively analyzed.
It should also be noted that directional terms, such as "upper", "lower", "front", "rear", "left", "right", and the like, used in the embodiments are only directions referring to the drawings, and are not intended to limit the scope of the present disclosure. Throughout the drawings, like elements are represented by like or similar reference numerals. Conventional structures or constructions will be omitted when they may obscure the understanding of the present disclosure.
And the shapes and sizes of the respective components in the drawings do not reflect actual sizes and proportions, but merely illustrate the contents of the embodiments of the present disclosure. Furthermore, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.
Unless otherwise indicated, the numerical parameters set forth in the specification and the appended claims are approximations that can vary depending upon the desired properties sought to be obtained by the present disclosure. In particular, all numbers expressing quantities of ingredients, reaction conditions, and so forth used in the specification and claims are to be understood as being modified in all instances by the term "about". Generally, this expression is meant to encompass a variation of ±10% in some embodiments, ±5% in some embodiments, ±1% in some embodiments, or ±0.5% in some embodiments of the specified value.
Furthermore, the word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
The use of ordinal numbers such as "first," "second," "third," etc., in the specification and claims to modify a corresponding element does not by itself connote any ordinal number of the element or any ordering of one element from another or the order of manufacture, and the use of the ordinal numbers is only used to distinguish one element having a certain name from another element having a same name.
In addition, unless steps are specifically described or must occur in sequence, the order of the steps is not limited to that listed above and may be changed or rearranged as desired by the desired design. The embodiments described above may be mixed and matched with each other or with other embodiments based on design and reliability considerations, i.e., technical features in different embodiments may be freely combined to form further embodiments.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Also in the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various disclosed aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
The above-mentioned embodiments are intended to illustrate the objects, aspects and advantages of the present disclosure in further detail, and it should be understood that the above-mentioned embodiments are only illustrative of the present disclosure and are not intended to limit the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (9)

1. A method for constructing a quantum feedforward neural network based on classical training, comprising:
step 1: giving a clear definition of the quantum neuron;
step 2: selecting a specific activation function and representing the quantum neuron model with a quantum circuit;
step 3: based on the quantum neuron model of step 2, proposing a quantum feedforward neural network model; and
step 4: providing a classical training method and quantitatively analyzing its effectiveness, so as to complete the construction of the quantum feedforward neural network based on classical training.
2. The method for constructing a quantum feedforward neural network based on classical training according to claim 1, wherein in step 1 the map $F$ is defined as an $n$-variable quantum neuron, expressed as follows:

$$F:\ \mathbb{C}^{2^{n}}\rightarrow\mathbb{C}^{2},\qquad F(|x\rangle)=f(\langle x|w\rangle),$$

where $f(\langle x|w\rangle)$ is the output of the quantum neuron, the quantum state $|x\rangle$ is the input of the quantum neuron, $f$ is the activation function, $|w\rangle=|w_{1}\rangle\otimes\cdots\otimes|w_{n}\rangle$ denotes the direct-product state of $n$ particles, $x$ and $w$ denote column vectors, and $\mathbb{C}^{2^{n}}$ denotes the $2^{n}$-dimensional Hilbert space over the complex field.
3. The method for constructing a quantum feedforward neural network based on classical training according to claim 2, wherein the output of the quantum neuron is also taken as the state of the quantum neuron.
4. The method for constructing a quantum feedforward neural network based on classical training according to claim 2, wherein the activation function $f$ maps the inner product of the input and the weight to a single-particle quantum state in $\mathbb{C}^{2}$, its explicit expression being written in terms of a matrix and a column vector.
5. The method for constructing a quantum feedforward neural network based on classical training according to claim 1, wherein in step 2, given the input $|x\rangle\in\mathbb{C}^{2^{n}}$ and the weight $|w\rangle\in\mathbb{C}^{2^{n}}$ of a neuron, $a$ is the inner product $\langle x|w\rangle$ of the input and the weight; $\mathrm{Re}\,\alpha$ and $\mathrm{Im}\,\alpha$ denote the real and imaginary parts of $\alpha$, respectively; in the ideal case, $\arccos(\mathrm{Re}\,a)$ and $\arccos(\mathrm{Im}\,a)$ are both integer multiples of $2\pi/2^{t}$, i.e. $\arccos(\mathrm{Re}\,a)/2\pi$ and $\arccos(\mathrm{Im}\,a)/2\pi$ can be represented exactly as $t$-bit binary fractions; taking $|\phi_{r}\rangle$ and $|\phi_{i}\rangle$ as the respective initial states, phase estimation is performed on the corresponding operators $G_{r}$ and $G_{i}$ to obtain the real-part information and the imaginary-part information of $a$, and a quantum Fourier transform is performed; an auxiliary qubit $|0\rangle$ is then introduced and, by controlled rotation, the $R_{Y}(\arccos\mathrm{Re}\,a)$ and $R_{Z}(\arccos\mathrm{Im}\,a)$ transformations are applied, followed by the transformation shown in the corresponding circuit; thus, in the ideal case, a specific activation function $f_{0}$ of the quantum neuron is given, namely the map realized by this quantum circuit.
6. The method according to claim 5, wherein when the activation function is chosen to be $f_{0}$, the output $|d\rangle$ of the quantum neuron has an explicit expression: it is the single-qubit state prepared from $|0\rangle$ by the $R_{Y}$ and $R_{Z}$ rotations of claim 5.
7. The method according to claim 1, wherein in step 2, in the non-ideal case, the $t$ qubits of the first register are measured to obtain the quantum neuron output $|\tilde{d}\rangle$; the state of the quantum neuron is then random, and from the theory of quantum phase estimation the following can be stated: with $t$ fixed and a given success probability $1-\sigma$ of the quantum circuit, a quantitative bound is obtained on the distance between $|\tilde{d}\rangle$ and $|d\rangle$, the choice of $t$ and the resulting bound being given by the corresponding formulas.
8. The method for constructing a quantum feedforward neural network based on classical training according to claim 1, wherein in step 3 the output of the quantum neuron is used as the input of the next quantum neuron, and the quantum feedforward neural network model is constructed in the manner of a classical feedforward neural network.
9. The method for constructing a quantum feedforward neural network based on classical training according to claim 1, wherein in step 4, given the scale of a $K$-layer quantum feedforward neural network, the number of quantum neurons in the $k$-th layer is $p_{k}$, $k=1,\ldots,K$, with at most $p$ quantum neurons in each layer; for a given error margin $\epsilon>0$, the circuit parameters are chosen according to the corresponding formula, and with success probability $1-\sigma$ the error of the output state satisfies the bound given by the corresponding formula, where $\epsilon$ is the error margin.
CN201910628555.3A 2019-07-11 2019-07-11 Method for constructing quantum feedforward neural network based on classical training Pending CN110674921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910628555.3A CN110674921A (en) 2019-07-11 2019-07-11 Method for constructing quantum feedforward neural network based on classical training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910628555.3A CN110674921A (en) 2019-07-11 2019-07-11 Method for constructing quantum feedforward neural network based on classical training

Publications (1)

Publication Number Publication Date
CN110674921A true CN110674921A (en) 2020-01-10

Family

ID=69068845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910628555.3A Pending CN110674921A (en) 2019-07-11 2019-07-11 Method for constructing quantum feedforward neural network based on classical training

Country Status (1)

Country Link
CN (1) CN110674921A (en)


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368920A (en) * 2020-03-05 2020-07-03 中南大学 Quantum twin neural network-based binary classification method and face recognition method thereof
CN111368920B (en) * 2020-03-05 2024-03-05 中南大学 Quantum twin neural network-based classification method and face recognition method thereof
CN112101518A (en) * 2020-08-05 2020-12-18 华南理工大学 Quantum system capable of simulating any nonlinear activation function
CN112101518B (en) * 2020-08-05 2023-02-10 华南理工大学 Quantum system capable of simulating any nonlinear activation function
WO2022077797A1 (en) * 2020-10-14 2022-04-21 腾讯科技(深圳)有限公司 Quantum circuit determining method and apparatus, device, and storage medium
CN112418387A (en) * 2020-11-18 2021-02-26 北京百度网讯科技有限公司 Quantum data processing method and apparatus
CN112633509A (en) * 2020-12-08 2021-04-09 北京百度网讯科技有限公司 Method for determining distance between quantum data and quantum device
CN113159303B (en) * 2021-03-02 2023-07-21 重庆邮电大学 Quantum circuit-based artificial neuron construction method
CN114446414A (en) * 2022-01-24 2022-05-06 电子科技大学 Reverse synthetic analysis method based on quantum circulating neural network
CN114446414B (en) * 2022-01-24 2023-05-23 电子科技大学 Reverse synthetic analysis method based on quantum circulation neural network
CN114444701A (en) * 2022-02-01 2022-05-06 上海图灵智算量子科技有限公司 Training quantum circuit and data embedding method
CN114444701B (en) * 2022-02-01 2023-10-27 上海图灵智算量子科技有限公司 Training quantum circuit and data embedding method

Similar Documents

Publication Publication Date Title
CN110674921A (en) Method for constructing quantum feedforward neural network based on classical training
Chui et al. Deep nets for local manifold learning
Huynh et al. Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks
Wang et al. Sparse convex clustering
Kusumoto et al. Experimental quantum kernel trick with nuclear spins in a solid
Damle et al. SCDM-k: Localized orbitals for solids via selected columns of the density matrix
Banica et al. Flat matrix models for quantum permutation groups
Yang et al. Nonparametric multiple expectile regression via ER-Boost
Sauer et al. Vecchia-approximated deep Gaussian processes for computer experiments
JP7509152B2 (en) Information processing system, information processing method, and information processing program
CN112884153A (en) Method and related device for processing data
Liao et al. A deep convolutional neural network module that promotes competition of multiple-size filters
JP7068299B2 (en) Feature amount selection device, feature amount selection method and feature amount selection program
Nomer et al. Neural knapsack: a neural network based solver for the knapsack problem
CN114358111A (en) Object clustering model obtaining method, object clustering method and device
Bremner et al. Canonical forms of 2× 2× 2 and 2× 2× 2× 2 arrays over 𝔽2 and 𝔽3
Chavent et al. A sliced inverse regression approach for a stratified population
Kepner et al. Graphchallenge. org sparse deep neural network performance
Sotelo Quantum computing: What, why, who
CN112101518B (en) Quantum system capable of simulating any nonlinear activation function
Eck et al. Combining envelope methodology and aster models for variance reduction in life history analyses
Tian et al. Performance evaluation of regression splines for propensity score adjustment in post-market safety analysis with multiple treatments
Mordant Transporting probability measures
Iserte et al. A study on the performance of distributed training of data-driven CFD simulations
CN115511070A (en) Model training method and device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200110)