CN112613571A - Quantum neural network method, system and medium for image recognition - Google Patents
- Publication number
- CN112613571A (application number CN202011601932.3A)
- Authority
- CN
- China
- Prior art keywords
- quantum
- neural network
- data
- learning
- quantum state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N10/00—Quantum computing, i.e. information processing based on quantum-mechanical phenomena
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention provides a quantum neural network method, system and medium for image recognition, relating to the technical field of quantum neural network methods. The method comprises the following steps: quantum state encoding: preprocessing the MNIST data set and converting the processed data into the angle information of the corresponding rotation-gate operations; constructing a quantum neural network: optimizing the learning process of the quantum neural network through decomposition calculation; characterization of measurement results: finding the quantum state with the maximum probability from the measurement results to realize image recognition. The invention makes the structural logic of the quantum neural network clearer, is easy to implement, and has higher learning efficiency.
Description
Technical Field
The invention relates to the technical field of quantum neural network methods, in particular to a quantum neural network method, a quantum neural network system and a quantum neural network medium for image recognition.
Background
The quantum neural network method combines quantum algorithms with artificial neural networks. Quantum computing exploits the superposition and entanglement effects of the microscopic world to solve computational problems in mathematics. Compared with classical methods, quantum algorithms can provide exponential speedup for the hidden subgroup problem and quadratic speedup for search problems. Research on quantum algorithms has therefore focused on how to extend such exponential and quadratic speedups; in other problem domains such as machine learning, however, preliminary studies show that quantum algorithms offer further advantages, such as better classification accuracy, deeper knowledge-mining capability, and higher memory capacity.
With the arrival of the big-data era and Moore's law approaching its physical limit, the quantum neural network method has grown; with its larger information capacity and efficient parallel computing capability, the quantum neural network can better overcome the bottlenecks currently encountered. Interpretation of terms: quantum state encoding: encoding traditional numerical information into corresponding quantum states under a given protocol. PCA: principal component analysis, which uses the idea of dimension reduction to convert many indicators into a few comprehensive indicators. MNIST data set: a standard set of handwritten digit pictures, mainly used as a benchmark data set for training image recognition models; in the sense of PCA dimension reduction, each MNIST sample is a 28 × 28 = 784-dimensional data point.
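The PCA dimension reduction described in the glossary can be sketched numerically. The SVD-based implementation and the random stand-in data below are illustrative assumptions, not code from the patent:

```python
import numpy as np

def pca_reduce(X, k=7):
    """Project rows of X onto their k leading principal components via SVD,
    a minimal stand-in for the PCA step the text describes."""
    Xc = X - X.mean(axis=0)                     # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                        # coordinates in top-k components

# stand-in for 100 flattened 28x28 MNIST images (784 dimensions each)
X = np.random.default_rng(1).normal(size=(100, 784))
Z = pca_reduce(X)
print(Z.shape)  # (100, 7)
```

In practice a library implementation (e.g. scikit-learn's PCA) would typically replace this sketch.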
Quantum neural networks have also been studied internationally. Fig. 1 shows a quantum neural network structure in the prior art: the quantum gate G used by that network is an arbitrary unitary transformation, and the two-qubit gate is a controlled-G gate. The existing quantum neural network structure thus has unclear logic, is not easy to implement, and has low learning efficiency.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a quantum neural network method, a quantum neural network system and a quantum neural network medium for image recognition, which can make the structural logic of the quantum neural network clearer, are easy to realize and have higher learning efficiency.
According to the quantum neural network method, system and medium for image recognition provided by the invention, the scheme is as follows:
in a first aspect, a quantum neural network method for image recognition is provided, the method comprising:
quantum state encoding: preprocessing the MNIST data set and converting the processed data into the angle information of the corresponding rotation-gate operations;
constructing a quantum neural network: optimizing the learning process of the quantum neural network through decomposition calculation;
characterization of measurement results: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
Preferably, the quantum state encoding comprises:
before encoding, preprocessing the MNIST data set by applying PCA to the data set;
after PCA, retaining the 7 most significant dimensions and normalizing the data to (-1, 1), obtaining x = {x_0, x_1, …, x_6};
converting the preprocessed numerical information into the angle information of the corresponding rotation-gate operations, where θ_z^i and θ_y^i denote the rotation angles of the i-th qubit about the z and y axes, respectively;
each encoded initial qubit is then in the quantum state |ψ_i> = R_z(θ_z^i)·R_y(θ_y^i)|0>, and the tensor product of the 7 encoded initial qubits encodes one complete picture: |ψ_in> = |ψ_0> ⊗ |ψ_1> ⊗ … ⊗ |ψ_6>.
Preferably, constructing the quantum neural network comprises:
forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ⋯ U_ent · U_0,
where U is the overall evolution matrix, U_out is the single-qubit gate action of the last layer, U_ent is the fixed two-qubit entangling-gate set, U_k denotes the single-qubit gate operator of the k-th layer, and U_0 denotes the single-qubit gate operator of layer 0;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is the forward propagation process.
Defining a loss function Loss(θ):
where D denotes the data set to be learned and M the number of learning samples per batch; θ denotes the parameters the network needs to learn (the angles θ_z and θ_y); y denotes the label of the learning data and i indexes the i-th learning sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, defined as π(x; θ) = p(q_{n-1} = 1 | x, θ), where p(q_{n-1} = 1 | x, θ) is the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, i.e.:
θ^(t+1) = θ^t - η·∂Loss/∂θ,
where the key is to solve ∂Loss/∂θ; t denotes the t-th iteration and η the learning rate; the gradient follows from the definition of the loss function.
Back-propagation optimization:
from the definition of the π(x; θ) function, the required partial derivatives are obtained, where (·)* denotes taking the partial derivative with respect to the parameter and then the complex conjugate of the whole expression, and Re{·} denotes the real part of a complex number;
here L denotes the maximum number of layers of the network; through the above definitions and decomposition calculation, the parameter learning and optimization of the network's back-propagation process are determined.
Preferably, the measurement result characterization includes:
characterization with 4 qubits: 4 qubits can represent 2^4 = 16 quantum states, of which 10 are taken to represent the ten digits 0-9; the quantum state with the highest probability after forward propagation represents the corresponding output result.
In a second aspect, there is provided a quantum neural network system for image recognition, the system comprising:
a quantum state encoding module: preprocessing MNIST data set data, and converting the MNIST data set data into angle information corresponding to the operation of the revolving door according to a processing result;
constructing a quantum neural network module: optimizing the learning process of the quantum neural network through decomposition calculation;
a measurement characterization module: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
Preferably, the quantum state encoding module includes:
before encoding, preprocessing the MNIST data set by applying PCA to the data set;
after PCA, retaining the 7 most significant dimensions and normalizing the data to (-1, 1), obtaining x = {x_0, x_1, …, x_6};
converting the preprocessed numerical information into the angle information of the corresponding rotation-gate operations, where θ_z^i and θ_y^i denote the rotation angles of the i-th qubit about the z and y axes, respectively;
each encoded initial qubit is then in the quantum state |ψ_i> = R_z(θ_z^i)·R_y(θ_y^i)|0>, and the tensor product of the 7 encoded initial qubits encodes one complete picture: |ψ_in> = |ψ_0> ⊗ |ψ_1> ⊗ … ⊗ |ψ_6>.
Preferably, the quantum neural network construction module comprises:
forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ⋯ U_ent · U_0,
where U is the overall evolution matrix, U_out is the single-qubit gate action of the last layer, U_ent is the fixed two-qubit entangling-gate set, U_k denotes the single-qubit gate operator of the k-th layer, and U_0 denotes the single-qubit gate operator of layer 0;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is the forward propagation process.
Defining a loss function Loss(θ):
where D denotes the data set to be learned and M the number of learning samples per batch; θ denotes the parameters the network needs to learn (the angles θ_z and θ_y); y denotes the label of the learning data and i indexes the i-th learning sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, defined as π(x; θ) = p(q_{n-1} = 1 | x, θ), where p(q_{n-1} = 1 | x, θ) is the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, i.e.:
θ^(t+1) = θ^t - η·∂Loss/∂θ,
where the key is to solve ∂Loss/∂θ; t denotes the t-th iteration and η the learning rate; the gradient follows from the definition of the loss function.
Back-propagation optimization:
from the definition of the π(x; θ) function, the required partial derivatives are obtained, where (·)* denotes taking the partial derivative with respect to the parameter and then the complex conjugate of the whole expression, and Re{·} denotes the real part of a complex number;
here L denotes the maximum number of layers of the network; through the above definitions and decomposition calculation, the parameter learning and optimization of the network's back-propagation process are determined.
Preferably, the measurement result characterization module includes:
characterization with 4 qubits: 4 qubits can represent 2^4 = 16 quantum states, of which 10 are taken to represent the ten digits 0-9; the quantum state with the highest probability after forward propagation represents the corresponding output result.
Compared with the prior art, the invention has the following beneficial effects:
1. the introduction of PCA markedly reduces the dimensionality, so that higher-accuracy image recognition can be achieved with fewer qubits;
2. compared with arbitrary unitary gates, the commonly used rotation gates and CNOT gates are easy to implement; at the same time, the logic structure is simple and the learning efficiency is high;
3. image recognition of the MNIST data set through this network achieves a higher recognition accuracy.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a diagram of a quantum neural network structure most closely related to the present invention;
FIG. 2 is an overall flow chart of the present invention;
FIG. 3 is a diagram of corresponding quantum state encoding;
FIG. 4 is a diagram of a quantum neural network architecture employed in the present invention;
FIG. 5 is a diagram of the two-qubit entangling-gate structure employed in the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The embodiment of the invention provides a quantum neural network method for image recognition, which comprises three parts as shown in figure 2: quantum state coding, quantum neural network construction and measurement result characterization.
As shown in fig. 3, quantum state encoding is performed first. Before encoding, the MNIST data set is preprocessed by PCA, which can be understood as dimension reduction: each MNIST sample is a 28 × 28 = 784-dimensional data point. After PCA we retain the 7 most significant dimensions (a number that can be adjusted empirically in this embodiment) and then normalize the data to (-1, 1), obtaining x = {x_0, x_1, …, x_6}; this normalization is required because an inverse trigonometric operation converts the numerical information into angle information. The preprocessed numerical information is then converted into the angle information of the corresponding rotation-gate operations, where θ_z^i and θ_y^i denote the rotation angles of the i-th qubit about the z and y axes, respectively.
Each encoded initial qubit is then in the quantum state |ψ_i> = R_z(θ_z^i)·R_y(θ_y^i)|0>, and the tensor product of the 7 encoded initial qubits encodes one complete picture: |ψ_in> = |ψ_0> ⊗ |ψ_1> ⊗ … ⊗ |ψ_6>.
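The encoding step can be sketched numerically as follows. The specific angle formulas (arcsin of the value for the y rotation, arccos of its square for the z rotation) are assumptions standing in for the patent's formula images, which are not reproduced in the text:

```python
import numpy as np

def encode_sample(x7):
    """Angle-encode a 7-dimensional PCA vector with entries in (-1, 1) into
    7 single-qubit states via R_z·R_y rotations, then tensor them together.
    The arcsin/arccos angle choice is an assumption."""
    state = np.array([1.0], dtype=complex)
    for x in x7:
        theta_y = np.arcsin(x)           # assumed y-rotation angle
        theta_z = np.arccos(x * x)       # assumed z-rotation angle
        ry = np.array([[np.cos(theta_y / 2), -np.sin(theta_y / 2)],
                       [np.sin(theta_y / 2),  np.cos(theta_y / 2)]])
        rz = np.diag([np.exp(-1j * theta_z / 2), np.exp(1j * theta_z / 2)])
        qubit = rz @ ry @ np.array([1.0, 0.0])   # |psi_i> = R_z R_y |0>
        state = np.kron(state, qubit)            # tensor product of qubits
    return state  # 2**7 = 128-dimensional state vector

psi = encode_sample(np.linspace(-0.9, 0.9, 7))
print(psi.shape)                                  # (128,)
print(np.isclose(np.vdot(psi, psi).real, 1.0))    # True: state is normalized
```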
Referring to fig. 4 and 5, the quantum neural network consists of k intermediate layers and 1 output layer. Each intermediate layer consists of a single-qubit gate layer and a fixed-structure two-qubit gate layer: the single-qubit gates are R_z(θ_z)·R_y(θ_y) and the two-qubit gates are fixed CNOT gates. The output layer is one layer of R_z(θ_z)·R_y(θ_y) single-qubit gates.
Next, the quantum neural network is constructed; this is the core part of the quantum neural network method, since the network structure determines the final learning efficiency and image recognition accuracy. Referring to the network structures shown in fig. 4 and 5, three parts of the neural network are discussed: forward propagation, the definition of the loss function, and back-propagation optimization.
First, for forward propagation, according to the network structure:
U = U_out · U_ent · U_k ⋯ U_ent · U_0,
where U is the overall evolution matrix, U_out is the single-qubit gate action of the last layer, U_ent is the two-qubit entangling-gate set with the structure of FIG. 5, U_k denotes the single-qubit gate operator of the k-th layer in FIG. 4, and U_0 denotes the single-qubit gate operator of layer 0;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is the forward propagation process.
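The forward propagation U = U_out · U_ent · U_k ⋯ U_ent · U_0 can be sketched with dense matrices. The CNOT-ladder layout of U_ent and the random parameters below are assumptions for illustration; the patent only specifies fixed CNOT gates:

```python
import numpy as np

def ry(t): return np.array([[np.cos(t/2), -np.sin(t/2)],
                            [np.sin(t/2),  np.cos(t/2)]])
def rz(t): return np.diag([np.exp(-1j*t/2), np.exp(1j*t/2)])

def single_layer(thetas):
    """U_k: one R_z(tz)·R_y(ty) gate per qubit, tensored together."""
    U = np.array([[1.0]], dtype=complex)
    for tz, ty in thetas:
        U = np.kron(U, rz(tz) @ ry(ty))
    return U

def cnot_chain(n):
    """U_ent: a ladder of CNOTs where qubit i controls qubit i+1
    (the exact entangler layout is an assumption)."""
    U = np.eye(2**n, dtype=complex)
    for c in range(n - 1):
        C = np.eye(2**n, dtype=complex)
        for b in range(2**n):
            if (b >> (n - 1 - c)) & 1:              # control qubit is |1>
                flipped = b ^ (1 << (n - 2 - c))    # flip target qubit
                C[b, b] = 0
                C[b, flipped] = 1
        U = C @ U
    return U

n, k = 3, 2
rng = np.random.default_rng(0)
U = single_layer(rng.uniform(0, np.pi, (n, 2)))     # U_out
for _ in range(k):                                  # U_ent · U_j blocks
    U = U @ cnot_chain(n) @ single_layer(rng.uniform(0, np.pi, (n, 2)))
psi_in = np.zeros(2**n, dtype=complex); psi_in[0] = 1.0
psi_out = U @ psi_in
print(np.isclose(np.vdot(psi_out, psi_out).real, 1.0))  # True: U is unitary
```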
Then a loss function Loss(θ) is defined:
where D denotes the data set to be learned and M the number of learning samples per batch; θ denotes the parameters the network needs to learn (the angles θ_z and θ_y); y denotes the label of the learning data and i indexes the i-th learning sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, defined as π(x; θ) = p(q_{n-1} = 1 | x, θ), where p(q_{n-1} = 1 | x, θ) is the probability that measuring the (n-1)-th qubit yields the result 1;
The parameters are learned by gradient descent, i.e.:
θ^(t+1) = θ^t - η·∂Loss/∂θ,
where the key is to solve ∂Loss/∂θ; t denotes the t-th iteration and η the learning rate; the gradient follows from the definition of the loss function.
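The update rule above can be sketched as follows. The central finite-difference gradient is a numerical stand-in for the patent's analytic decomposition of ∂Loss/∂θ, and the quadratic toy loss is purely illustrative:

```python
import numpy as np

def gradient_step(loss, theta, eta=0.1, eps=1e-6):
    """One iteration theta^(t+1) = theta^t - eta * dLoss/dtheta, with the
    gradient estimated by central finite differences."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta); d[i] = eps
        grad[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    return theta - eta * grad

# toy quadratic loss with minimum at theta = (1, -2)
loss = lambda t: (t[0] - 1.0)**2 + (t[1] + 2.0)**2
t = np.array([0.0, 0.0])
for _ in range(200):
    t = gradient_step(loss, t)
print(np.allclose(t, [1.0, -2.0], atol=1e-3))  # True: converged to the minimum
```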
From the definition of the π(x; θ) function, the required partial derivatives are obtained, where (·)* denotes taking the partial derivative with respect to the parameter and then the complex conjugate of the whole expression, and Re{·} denotes the real part of a complex number. Through the above definitions and decomposition calculation, the parameter learning and optimization of the network's back-propagation process are determined.
Finally, for the characterization of the measurement result: since the MNIST data set is labeled with the ten digits 0-9, 4 qubits are used for characterization. 4 qubits can represent 2^4 = 16 quantum states, of which 10 are taken to represent the ten digits 0-9; the quantum state with the highest probability after forward propagation represents the corresponding output result.
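This readout can be sketched as follows. Which 4 qubits are measured is an assumption (the least significant ones here); the patent only states that 4 qubits are used:

```python
import numpy as np

def predict_digit(psi_out):
    """Marginalize the output distribution onto 4 qubits, keep the first
    10 of their 16 basis states as the digits 0-9, and return the most
    probable digit."""
    probs = np.abs(psi_out) ** 2
    digit_probs = np.zeros(16)
    for b, p in enumerate(probs):
        digit_probs[b & 0b1111] += p   # marginal over the remaining qubits
    return int(np.argmax(digit_probs[:10]))

# toy check: a 7-qubit state concentrated on basis index 5 maps to digit 5
psi = np.zeros(2 ** 7); psi[5] = 1.0
print(predict_digit(psi))  # 5
```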
The embodiment of the invention provides a quantum neural network method for image recognition. By introducing PCA, the dimensionality is markedly reduced, so that higher-accuracy image recognition can be achieved with fewer qubits. Compared with the quantum neural network shown in fig. 1, this network adopts the commonly used rotation gates and CNOT gates, which, compared with arbitrary unitary gates, are easy to implement, have a simple logic structure, and learn efficiently. Image recognition of the MNIST data set through this network achieves 91% recognition accuracy.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (9)
1. A quantum neural network method for image recognition, the method comprising:
quantum state encoding: preprocessing the MNIST data set and converting the processed data into the angle information of the corresponding rotation-gate operations;
constructing a quantum neural network: optimizing the learning process of the quantum neural network through decomposition calculation;
characterization of measurement results: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
2. The method of claim 1, wherein the quantum state encoding comprises:
Step 1: before encoding, preprocessing the MNIST data set by applying PCA to the data set;
Step 2: after PCA, retaining the 7 most significant dimensions and normalizing the data to (-1, 1), obtaining x = {x_0, x_1, …, x_6};
Step 3: converting the preprocessed numerical information into the angle information of the corresponding rotation-gate operations, where θ_z^i and θ_y^i denote the rotation angles of the i-th qubit about the z and y axes, respectively.
3. The method of claim 1, wherein constructing the quantum neural network comprises:
step 1: forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ⋯ U_ent · U_0, where U is the overall evolution matrix, U_out is the single-qubit gate action of the last layer, U_ent is the two-qubit entangling-gate set, U_k denotes the single-qubit gate operator of the k-th layer, and U_0 denotes the single-qubit gate operator of layer 0;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is a forward propagation process;
step 2: defining a loss function:
wherein D denotes the data set to be learned and M the number of learning samples per batch; θ denotes the parameters the network needs to learn (the angles θ_z and θ_y); y denotes the label of the learning data and i indexes the i-th learning sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, defined as π(x; θ) = p(q_{n-1} = 1 | x, θ), where p(q_{n-1} = 1 | x, θ) is the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, i.e. θ^(t+1) = θ^t - η·∂Loss/∂θ, where the key is to solve ∂Loss/∂θ; t denotes the t-th iteration and η the learning rate;
the gradient is obtained from the definition of the loss function;
Step 3: back-propagation optimization:
from the definition of the π(x; θ) function, the required partial derivatives are obtained, where (·)* denotes taking the partial derivative with respect to the parameter and then the complex conjugate of the whole expression, and Re{·} denotes the real part of a complex number;
through the above definitions and decomposition calculation, the parameter learning and optimization of the network's back-propagation process are determined.
4. The method of claim 1, wherein the measurement characterization comprises:
characterization with 4 qubits: 4 qubits can represent 2^4 = 16 quantum states, of which 10 are taken to represent the ten digits 0-9; the quantum state with the highest probability after forward propagation represents the corresponding output result.
5. A quantum neural network system for image recognition, the system comprising:
a quantum state encoding module: preprocessing MNIST data set data, and converting the MNIST data set data into angle information corresponding to the operation of the revolving door according to a processing result;
constructing a quantum neural network module: optimizing the learning process of the quantum neural network through decomposition calculation;
a measurement characterization module: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
6. The system of claim 5, wherein the quantum state encoding module comprises:
before encoding, preprocessing the MNIST data set by applying PCA to the data set;
after PCA, retaining the 7 most significant dimensions and normalizing the data to (-1, 1), obtaining x = {x_0, x_1, …, x_6};
converting the preprocessed numerical information into the angle information of the corresponding rotation-gate operations, where θ_z^i and θ_y^i denote the rotation angles of the i-th qubit about the z and y axes, respectively.
7. The system of claim 5, wherein the constructing the quantum neural network module comprises:
forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ⋯ U_ent · U_0,
where U is the overall evolution matrix, U_out is the single-qubit gate action of the last layer, U_ent is the two-qubit entangling-gate set, U_k denotes the single-qubit gate operator of the k-th layer, and U_0 denotes the single-qubit gate operator of layer 0;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is a forward propagation process;
defining a loss function:
wherein D denotes the data set to be learned and M the number of learning samples per batch; θ denotes the parameters the network needs to learn (the angles θ_z and θ_y); y denotes the label of the learning data and i indexes the i-th learning sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, defined as π(x; θ) = p(q_{n-1} = 1 | x, θ), where p(q_{n-1} = 1 | x, θ) is the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, i.e. θ^(t+1) = θ^t - η·∂Loss/∂θ, where the key is to solve ∂Loss/∂θ; t denotes the t-th iteration and η the learning rate; the gradient is obtained from the definition of the loss function;
back-propagation optimization:
from the definition of the π(x; θ) function, the required partial derivatives are obtained, where (·)* denotes taking the partial derivative with respect to the parameter and then the complex conjugate of the whole expression, and Re{·} denotes the real part of a complex number.
8. The system of claim 5, wherein the measurement characterization module comprises:
characterization by 4 qubits: 4 qubits can characterize 2^4 = 16 quantum states, of which ten are taken to represent the digits 0 to 9; after forward propagation, the quantum state with the highest probability represents the corresponding output result.
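The readout rule above can be sketched as follows, assuming the first ten of the sixteen 4-qubit basis states are the ones assigned to the digits 0-9 (the patent does not fix the index convention).

```python
import numpy as np

def predict_digit(state):
    # 4 qubits span 2^4 = 16 basis states; the first ten are taken
    # here to stand for the digits 0-9 (an assumed convention).
    # The most probable of those ten is the recognition result.
    probs = np.abs(np.asarray(state)) ** 2
    return int(np.argmax(probs[:10]))
```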
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011601932.3A CN112613571A (en) | 2020-12-29 | 2020-12-29 | Quantum neural network method, system and medium for image recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112613571A true CN112613571A (en) | 2021-04-06 |
Family
ID=75249027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011601932.3A Pending CN112613571A (en) | 2020-12-29 | 2020-12-29 | Quantum neural network method, system and medium for image recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112613571A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113033703A (en) * | 2021-04-21 | 2021-06-25 | 北京百度网讯科技有限公司 | Quantum neural network training method and device, electronic device and medium |
CN114821147A (en) * | 2021-04-29 | 2022-07-29 | 合肥本源量子计算科技有限责任公司 | Display method and device for quantum image recognition |
CN114821147B (en) * | 2021-04-29 | 2023-08-08 | 本源量子计算科技(合肥)股份有限公司 | Quantum image recognition display method and device |
CN113255747A (en) * | 2021-05-14 | 2021-08-13 | 山东英信计算机技术有限公司 | Quantum multichannel convolutional neural classification method, system, terminal and storage medium |
CN114821217B (en) * | 2021-06-28 | 2023-08-08 | 本源量子计算科技(合肥)股份有限公司 | Image recognition method and device based on quantum classical hybrid neural network |
CN114821217A (en) * | 2021-06-28 | 2022-07-29 | 合肥本源量子计算科技有限责任公司 | Image identification method and device based on quantum classical hybrid neural network |
CN113361664A (en) * | 2021-08-10 | 2021-09-07 | 北京航空航天大学 | Image recognition system and method based on quantum convolution neural network |
CN114764620A (en) * | 2021-12-31 | 2022-07-19 | 合肥本源量子计算科技有限责任公司 | Quantum convolution manipulator |
CN114764620B (en) * | 2021-12-31 | 2024-04-09 | 本源量子计算科技(合肥)股份有限公司 | Quantum convolution operator |
CN114863167A (en) * | 2022-04-22 | 2022-08-05 | 苏州浪潮智能科技有限公司 | Method, system, equipment and medium for image recognition and classification |
CN114863167B (en) * | 2022-04-22 | 2024-02-02 | 苏州浪潮智能科技有限公司 | Image recognition and classification method, system, equipment and medium |
CN114819166A (en) * | 2022-05-27 | 2022-07-29 | 北京大学 | Evolution method and device of quantum system |
CN115630690A (en) * | 2022-12-21 | 2023-01-20 | 量子科技长三角产业创新中心 | Sampling method and related device for quantum neural network structure optimization |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112613571A (en) | Quantum neural network method, system and medium for image recognition | |
Li et al. | A quantum deep convolutional neural network for image recognition | |
Zhang et al. | An overview on restricted Boltzmann machines | |
CN111191002B (en) | Neural code searching method and device based on hierarchical embedding | |
CN109086802B (en) | Image classification method based on eight-element convolution neural network | |
CN112464003B (en) | Image classification method and related device | |
Di et al. | Amplitude transformed quantum convolutional neural network | |
CN111008224A (en) | Time sequence classification and retrieval method based on deep multitask representation learning | |
CN109740758B (en) | Quantum computation-based nuclear method | |
Menaga et al. | Deep learning: a recent computing platform for multimedia information retrieval | |
CN115795065A (en) | Multimedia data cross-modal retrieval method and system based on weighted hash code | |
WO2022222854A1 (en) | Data processing method and related device | |
Srinivas et al. | A comprehensive survey of techniques, applications, and challenges in deep learning: A revolution in machine learning | |
CN115062727A (en) | Graph node classification method and system based on multi-order hypergraph convolutional network | |
Altares-López et al. | AutoQML: Automatic generation and training of robust quantum-inspired classifiers by using evolutionary algorithms on grayscale images | |
Inkeaw et al. | Density based semi-automatic labeling on multi-feature representations for ground truth generation: Application to handwritten character recognition | |
Khan et al. | Efficiently processing big data in real-time employing deep learning algorithms | |
ElAdel et al. | Fast DCNN based on FWT, intelligent dropout and layer skipping for image retrieval | |
Xia et al. | Efficient synthesis of compact deep neural networks | |
CN112735604B (en) | Novel coronavirus classification method based on deep learning algorithm | |
CN114219092A (en) | Data processing method and system | |
CN114511097A (en) | Mutual learning method and system based on quantum circuit | |
CN115115920A (en) | Data training method and device | |
CN114550849A (en) | Method for solving chemical molecular property prediction based on quantum graph neural network | |
Chen et al. | Defining and Extracting generalizable interaction primitives from DNNs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2021-04-06