CN112613571A - Quantum neural network method, system and medium for image recognition - Google Patents


Info

Publication number
CN112613571A
CN112613571A (application CN202011601932.3A)
Authority
CN
China
Prior art keywords
quantum
neural network
data
learning
quantum state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011601932.3A
Other languages
Chinese (zh)
Inventor
叶永金
季阳
吴永政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 32 Research Institute
Original Assignee
CETC 32 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 32 Research Institute
Priority to CN202011601932.3A
Publication of CN112613571A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 10/00 Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention provides a quantum neural network method, system and medium for image recognition, relating to the technical field of quantum neural network methods. The method comprises the following steps: quantum state encoding: preprocessing the MNIST data set and, according to the processing result, converting the data into the angle information corresponding to rotation-gate operations; constructing a quantum neural network: optimizing the learning process of the quantum neural network through decomposition calculation; characterization of measurement results: finding the quantum state with the maximum probability from the measurement results to realize image recognition. The invention makes the structural logic of the quantum neural network clearer, is easy to implement, and has higher learning efficiency.

Description

Quantum neural network method, system and medium for image recognition
Technical Field
The invention relates to the technical field of quantum neural network methods, in particular to a quantum neural network method, a quantum neural network system and a quantum neural network medium for image recognition.
Background
The quantum neural network method combines quantum computing with artificial neural networks. Quantum computing exploits the quantum effects of superposition and entanglement in the microscopic world to solve computational problems in mathematics. Compared with classical methods, quantum methods can provide exponential speedup for the hidden subgroup problem and quadratic speedup for search problems. The focus of quantum algorithm research has therefore been on how to further extend such exponential and quadratic speedups; for other problems, however, such as machine learning, preliminary studies show that quantum methods offer additional advantages, such as better classification accuracy, deeper knowledge-mining capability, and higher memory capacity.
However, with the arrival of the big-data era and Moore's law approaching its physical limit, the quantum neural network method has grown: with its larger information capacity and efficient parallel computing capability, the quantum neural network can better address the bottleneck problems currently encountered. Interpretation of terms: quantum state encoding: traditional numerical information is encoded into corresponding quantum states under a certain protocol. PCA: principal component analysis, a technique that aims to convert many indicators into a few comprehensive indicators using the idea of dimensionality reduction; in this sense, each MNIST sample is treated as a 28 × 28 = 784-dimensional datum. MNIST data set: a standard set of handwritten-digit pictures, mainly used as a standard data set for training image-recognition models.
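The dimensionality reduction described above can be sketched as follows: 784-dimensional image vectors are reduced to their 7 principal components via an SVD-based PCA and then normalized into (-1, 1). Synthetic data stands in for MNIST here, and all variable names are illustrative:

```python
import numpy as np

# Illustrative sketch: reduce 784-dimensional image vectors to their 7
# principal components with an SVD-based PCA, then normalize into (-1, 1).
# Synthetic data stands in for MNIST; all names are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 784))              # 100 flattened 28x28 "images"

X_centered = X - X.mean(axis=0)              # PCA operates on centered data
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_reduced = X_centered @ Vt[:7].T            # keep the 7 dominant components

# normalize each retained feature into the open interval (-1, 1)
X_norm = X_reduced / (np.abs(X_reduced).max(axis=0) + 1e-9)

print(X_norm.shape)  # (100, 7)
```

Normalization into (-1, 1) matters because the next step applies an inverse trigonometric function, whose domain is [-1, 1].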
Regarding the prior art, quantum neural networks have also been studied internationally. Fig. 1 shows a prior-art quantum neural network structure; as can be seen from the figure, the quantum gate G used by that network is an arbitrary unitary transformation and the two-qubit gate is a controlled-G gate. The existing quantum neural network structure has unclear logic, is not easy to implement, and has low learning efficiency.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a quantum neural network method, a quantum neural network system and a quantum neural network medium for image recognition, which can make the structural logic of the quantum neural network clearer, are easy to realize and have higher learning efficiency.
According to the quantum neural network method, system and medium for image recognition provided by the invention, the scheme is as follows:
in a first aspect, a quantum neural network method for image recognition is provided, the method comprising:
quantum state encoding: preprocessing the MNIST data set and, according to the processing result, converting the data into the angle information corresponding to rotation-gate operations;
constructing a quantum neural network: optimizing the learning process of the quantum neural network through decomposition calculation;
characterization of measurement results: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
Preferably, the quantum state encoding comprises:
before encoding, preprocessing the MNIST data set by performing PCA on it;
after PCA, retaining the 7 most significant dimensions and then normalizing, mapping the data into (-1, 1) to obtain x = {x_0, x_1, ..., x_6};
according to the preprocessed numerical information, converting it into the angle information corresponding to rotation-gate operations, namely the rotation angles θ_i^z and θ_i^y of the i-th qubit about the z and y axes, respectively (the exact inverse-trigonometric formulas are given only as images in the original);
each encoded initial qubit is thus in the quantum state |ψ_i> = R_z(θ_i^z)·R_y(θ_i^y)|0>;
the encoded quantum state of one complete picture is the tensor product of the 7 initial qubits: |ψ_in> = |ψ_0> ⊗ |ψ_1> ⊗ ... ⊗ |ψ_6>.
Preferably, the constructing the quantum neural network comprises:
forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ... U_ent · U_0
where U is the overall evolution matrix, U_out is the last-layer single-qubit gate operation, U_ent is the set of two-qubit entangling gates, U_k denotes the k-th layer single-qubit gate operator, and U_0 denotes the 0-th layer single-qubit gate operator;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is the forward-propagation process.
Defining a loss function L(θ) over each batch (its exact formula is given only as an image in the original), where D denotes the data set to be learned, M the number of training samples per batch, θ the parameters the network needs to learn (the angles θ^z and θ^y), y the label of the training data, and i the index of the i-th training sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, specifically defined as:
π(x; θ) = p(q_{n-1} = 1 | x, θ)
i.e., the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, namely:
θ^(t+1) = θ^(t) - η · ∂L(θ)/∂θ, evaluated at θ^(t)
where t denotes the t-th iteration and η the learning rate; the key is to evaluate ∂L(θ)/∂θ, which follows from the definition of the loss function;
back-propagation optimization:
from the definition of the π(x; θ) function, one obtains an expression for ∂π(x; θ)/∂θ as the real part Re{·} of an inner product between <∂_θ ψ_out| and |ψ_out> (the full expressions are given only as images in the original), where <∂_θ ψ_out| denotes taking the partial derivative of |ψ_out> with respect to θ and then the complex conjugate of the whole;
the derivative of |ψ_out> with respect to the k-th layer parameters can be further decomposed as
∂U/∂θ_k = U_out · U_ent · U_L ... (∂U_k/∂θ_k) ... U_ent · U_0
where L denotes the maximum number of layers of the network; through the above definitions and decomposition calculation, parameter learning and optimization in the network back-propagation process are determined.
Preferably, the characterization of measurement results comprises:
characterization with 4 qubits, which can represent 2^4 = 16 quantum states; 10 of these states are taken to represent the ten digits 0-9, and the quantum state with the highest probability after forward propagation represents the corresponding output result.
In a second aspect, there is provided a quantum neural network system for image recognition, the system comprising:
a quantum state encoding module: preprocessing the MNIST data set and, according to the processing result, converting the data into the angle information corresponding to rotation-gate operations;
constructing a quantum neural network module: optimizing the learning process of the quantum neural network through decomposition calculation;
a measurement characterization module: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
Preferably, the quantum state encoding module:
before encoding, preprocesses the MNIST data set by performing PCA on it;
after PCA, retains the 7 most significant dimensions and then normalizes, mapping the data into (-1, 1) to obtain x = {x_0, x_1, ..., x_6};
converts the preprocessed numerical information into the angle information corresponding to rotation-gate operations, namely the rotation angles θ_i^z and θ_i^y of the i-th qubit about the z and y axes, respectively (the exact inverse-trigonometric formulas are given only as images in the original);
each encoded initial qubit is thus in the quantum state |ψ_i> = R_z(θ_i^z)·R_y(θ_i^y)|0>;
the encoded quantum state of one complete picture is the tensor product of the 7 initial qubits: |ψ_in> = |ψ_0> ⊗ |ψ_1> ⊗ ... ⊗ |ψ_6>.
Preferably, the quantum neural network construction module performs:
forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ... U_ent · U_0
where U is the overall evolution matrix, U_out is the last-layer single-qubit gate operation, U_ent is the set of two-qubit entangling gates, U_k denotes the k-th layer single-qubit gate operator, and U_0 denotes the 0-th layer single-qubit gate operator;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is the forward-propagation process.
Defining a loss function L(θ) over each batch (its exact formula is given only as an image in the original), where D denotes the data set to be learned, M the number of training samples per batch, θ the parameters the network needs to learn (the angles θ^z and θ^y), y the label of the training data, and i the index of the i-th training sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, specifically defined as:
π(x; θ) = p(q_{n-1} = 1 | x, θ)
i.e., the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, namely:
θ^(t+1) = θ^(t) - η · ∂L(θ)/∂θ, evaluated at θ^(t)
where t denotes the t-th iteration and η the learning rate; the key is to evaluate ∂L(θ)/∂θ, which follows from the definition of the loss function;
back-propagation optimization:
from the definition of the π(x; θ) function, one obtains an expression for ∂π(x; θ)/∂θ as the real part Re{·} of an inner product between <∂_θ ψ_out| and |ψ_out> (the full expressions are given only as images in the original), where <∂_θ ψ_out| denotes taking the partial derivative of |ψ_out> with respect to θ and then the complex conjugate of the whole;
the derivative of |ψ_out> with respect to the k-th layer parameters can be further decomposed as
∂U/∂θ_k = U_out · U_ent · U_L ... (∂U_k/∂θ_k) ... U_ent · U_0
where L denotes the maximum number of layers of the network; through the above definitions and decomposition calculation, parameter learning and optimization in the network back-propagation process are determined.
Preferably, the measurement-result characterization module:
characterizes with 4 qubits, which can represent 2^4 = 16 quantum states; 10 of these states are taken to represent the ten digits 0-9, and the quantum state with the highest probability after forward propagation represents the corresponding output result.
Compared with the prior art, the invention has the following beneficial effects:
1. the introduction of PCA markedly reduces the dimensionality, so that higher-accuracy image recognition can be realized with fewer qubits;
2. compared with an arbitrary unitary gate, the commonly used rotation gates and the CNOT gate are easy to implement; at the same time, the logical structure is simple and the learning efficiency is high;
3. image recognition on the MNIST data set through this network achieves higher recognition accuracy.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a diagram of a quantum neural network structure most closely related to the present invention;
FIG. 2 is an overall flow chart of the present invention;
FIG. 3 is a diagram of corresponding quantum state encoding;
FIG. 4 is a diagram of the quantum neural network layer structure employed in the present invention;
FIG. 5 is a diagram of the two-qubit entangling-gate structure employed in the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications obvious to those skilled in the art can be made without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The embodiment of the invention provides a quantum neural network method for image recognition, which comprises three parts as shown in figure 2: quantum state coding, quantum neural network construction and measurement result characterization.
As shown in fig. 3, quantum state encoding is performed first. Before encoding, the MNIST data set is preprocessed by PCA, which can be understood as dimensionality reduction: each MNIST sample is a 28 × 28 = 784-dimensional datum. After PCA, we retain the 7 most dominant dimensions (in this embodiment, this number can be adjusted empirically), then normalize the data into (-1, 1), obtaining x = {x_0, x_1, ..., x_6}; normalization is required because an inverse trigonometric function is applied to convert the numerical information into angle information. The preprocessed numerical information is then converted into the angle information corresponding to the rotation-gate operations, namely:
the rotation angles θ_i^z and θ_i^y of the i-th qubit about the z and y axes, respectively (the exact inverse-trigonometric formulas are given only as images in the original);
each encoded initial qubit is thus in the quantum state |ψ_i> = R_z(θ_i^z)·R_y(θ_i^y)|0>;
the encoded quantum state of one complete picture is the tensor product of the 7 initial qubits: |ψ_in> = |ψ_0> ⊗ |ψ_1> ⊗ ... ⊗ |ψ_6>.
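The encoding step can be sketched numerically as follows. The patent's inverse-trigonometric angle formulas are given only as images, so the arcsin/arccos mapping below is an assumption made for illustration; the construction |ψ_i> = R_z·R_y|0> and the 7-qubit tensor product follow the text:

```python
import numpy as np

def rz(theta):
    """Single-qubit rotation about the z axis."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# 7 normalized features of one picture (illustrative values in (-1, 1));
# arcsin/arccos is an ASSUMED inverse-trigonometric mapping, since the
# patent's exact angle formulas appear only as images.
x = np.array([0.1, -0.4, 0.7, 0.0, 0.3, -0.9, 0.5])
theta_y = np.arcsin(x)
theta_z = np.arccos(x ** 2)

# each qubit is encoded as |psi_i> = Rz(theta_z) Ry(theta_y) |0>
ket0 = np.array([1.0, 0.0], dtype=complex)
qubits = [rz(tz) @ ry(ty) @ ket0 for tz, ty in zip(theta_z, theta_y)]

# the full picture state is the tensor product of the 7 qubits: 2^7 = 128 amplitudes
psi_in = qubits[0]
for q in qubits[1:]:
    psi_in = np.kron(psi_in, q)

print(psi_in.shape)  # (128,)
```

Because each factor is produced by unitary gates acting on |0>, the resulting 128-amplitude vector is automatically a valid (unit-norm) quantum state.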
Referring to fig. 4 and fig. 5, the quantum neural network consists of k intermediate layers and 1 output layer. Each intermediate layer consists of a single-qubit gate layer and a fixed-structure two-qubit gate layer: the single-qubit gate is R_z(θ^z)·R_y(θ^y), and the two-qubit gate is a fixed CNOT gate. The output layer is one layer of R_z(θ^z)·R_y(θ^y) single-qubit gates.
Secondly, the quantum neural network is constructed. This is the core of the quantum neural network method: the structure of the network determines its final learning efficiency and image-recognition accuracy. Referring to the network structures shown in fig. 4 and fig. 5, three parts must be discussed, as for any neural network: forward propagation, loss-function definition, and back-propagation optimization.
First, for forward propagation, according to the network structure:
U=Uout·Uent·Uk…Uent·U0
where U is the overall evolution matrix, U_out is the last-layer single-qubit gate operation, U_ent is the set of two-qubit entangling gates with the structure of fig. 5, U_k denotes the single-qubit gate operator of the k-th layer in fig. 4, and U_0 denotes the 0-th layer single-qubit gate operator;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is a forward propagation process.
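The forward propagation U = U_out·U_ent·U_k...U_ent·U_0 can be simulated directly on a state vector. The following minimal sketch uses 3 qubits instead of the patent's 7 and a CNOT chain standing in for the fixed entangling layer of fig. 5, both assumptions made for brevity:

```python
import numpy as np

# Minimal state-vector simulation of U = U_out . U_ent . U_k ... U_ent . U_0.
# Assumptions: 3 qubits instead of the patent's 7, and a CNOT chain
# (0->1, 1->2) standing in for the fixed entangling layer of fig. 5.
rng = np.random.default_rng(1)
n = 3

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def single_qubit_layer(thetas):
    """U_k: tensor product of Rz(tz).Ry(ty) over all qubits."""
    U = np.array([[1.0 + 0j]])
    for tz, ty in thetas:
        U = np.kron(U, rz(tz) @ ry(ty))
    return U

def cnot_chain(n):
    """U_ent: fixed entangling layer, CNOTs with control i and target i+1."""
    dim = 2 ** n
    U = np.eye(dim)
    for c in range(n - 1):
        P = np.zeros((dim, dim))
        for b in range(dim):
            bits = [(b >> (n - 1 - i)) & 1 for i in range(n)]
            if bits[c] == 1:
                bits[c + 1] ^= 1          # flip the target when the control is 1
            b2 = sum(bit << (n - 1 - i) for i, bit in enumerate(bits))
            P[b2, b] = 1.0
        U = P @ U
    return U

# two intermediate layers (k = 2) plus one output layer of single-qubit gates
layers = [single_qubit_layer(rng.uniform(0, 2 * np.pi, (n, 2))) for _ in range(3)]
U = layers[-1]                            # U_out
for Uk in reversed(layers[:-1]):          # ... U_ent . U_k ...
    U = U @ cnot_chain(n) @ Uk

psi_in = np.zeros(2 ** n, dtype=complex)
psi_in[0] = 1.0                           # |000> as a stand-in input state
psi_out = U @ psi_in
print(np.round(np.linalg.norm(psi_out), 6))  # 1.0: unitary evolution preserves norm
```

Since every factor is unitary, the overall U is unitary, which is what makes the output amplitudes directly interpretable as measurement probabilities.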
Then a loss function L(θ) is defined over each batch (its exact formula is given only as an image in the original), where D denotes the data set to be learned, M the number of training samples per batch, θ the parameters the network needs to learn (the angles θ^z and θ^y), y the label of the training data, and i the index of the i-th training sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, specifically defined as:
π(x; θ) = p(q_{n-1} = 1 | x, θ)
i.e., the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, namely:
θ^(t+1) = θ^(t) - η · ∂L(θ)/∂θ, evaluated at θ^(t)
where t denotes the t-th iteration and η the learning rate; the key is to evaluate ∂L(θ)/∂θ, which follows from the definition of the loss function.
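The gradient-descent update θ^(t+1) = θ^(t) - η·∂L/∂θ can be illustrated on a toy one-parameter example. Here the loss is a simple squared error against the measured-1 probability of a single R_y rotation, and the analytic decomposition of the derivative is replaced by a finite difference; both are simplifying assumptions for illustration:

```python
import numpy as np

def pi_prob(theta):
    """Probability of measuring 1 after Ry(theta)|0>: sin^2(theta / 2)."""
    return np.sin(theta / 2) ** 2

def loss(theta, y):
    """Toy squared-error loss; the patent's batch loss is given only as an image."""
    return (pi_prob(theta) - y) ** 2

# theta^(t+1) = theta^(t) - eta * dL/dtheta, with the derivative
# approximated by a central finite difference instead of the patent's
# analytic decomposition (an assumption made for brevity).
theta, y, eta, eps = 0.3, 1.0, 0.5, 1e-6
for _ in range(200):
    grad = (loss(theta + eps, y) - loss(theta - eps, y)) / (2 * eps)
    theta -= eta * grad

print(pi_prob(theta) > 0.95)  # the measured-1 probability approaches the label y = 1
```

The same loop shape carries over to the multi-parameter network; only the evaluation of ∂L/∂θ changes, which is exactly what the decomposition in the text addresses.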
according to the definition of the pi (x; theta) function, the following is obtained:
Figure BDA0002869028680000075
Figure BDA0002869028680000076
wherein the content of the first and second substances,
Figure BDA0002869028680000077
presentation pair
Figure BDA0002869028680000078
Calculating partial derivatives, then calculating complex conjugates in a whole, wherein Re { } represents the real part of a complex number;
wherein
Figure BDA0002869028680000079
Can be further decomposed into:
Figure BDA00028690286800000710
wherein, L represents the maximum number of layers of the network;
through the above definition and decomposition calculation, parameter learning and optimization in the network back propagation process are determined.
Finally, for the characterization of the measurement result: since the MNIST data set is labeled with the ten digits 0-9, 4 qubits are used for characterization. 4 qubits can represent 2^4 = 16 quantum states, of which 10 are taken to represent the digits 0-9; the quantum state with the highest probability after forward propagation represents the corresponding output result.
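The measurement characterization can be sketched as follows: the 16 outcome probabilities of 4 qubits are computed by the Born rule, the first 10 basis states are assigned to the digits, and the most probable of those states gives the prediction. The random state below is a stand-in for a real network output:

```python
import numpy as np

# Hypothetical sketch of the measurement characterization: 4 qubits give
# 2**4 = 16 basis states; the first 10 represent the digits 0-9, and the
# most probable of those states is the predicted digit.
rng = np.random.default_rng(2)
amps = rng.normal(size=16) + 1j * rng.normal(size=16)
amps /= np.linalg.norm(amps)          # a valid (normalized) 4-qubit state

probs = np.abs(amps) ** 2             # Born-rule outcome probabilities
digit_probs = probs[:10]              # states |0000> .. |1001> label digits 0-9
predicted = int(np.argmax(digit_probs))

print(0 <= predicted <= 9)  # True
```

Which 10 of the 16 states label the digits is an assumption here (the natural choice |0000> through |1001> is used); the patent only states that 10 states are selected.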
The embodiment of the invention provides a quantum neural network method for image recognition. The invention introduces a PCA step, which markedly reduces the dimensionality, so that higher-accuracy image recognition can be realized with fewer qubits. Compared with the quantum neural network shown in fig. 1, this network adopts the commonly used rotation gates and the CNOT gate, which, compared with an arbitrary unitary gate, are easy to implement; the logical structure is simple and the learning efficiency is high. Image recognition on the MNIST data set through this network achieves a 91% recognition accuracy, which is higher.
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system and its various devices, modules and units provided by the present invention can be implemented entirely by logically programming the method steps in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its various devices, modules and units can be regarded as a hardware component; the devices, modules and units included therein can be regarded as structures within that hardware component; and the devices, modules and units for realizing the various functions can also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (9)

1. A quantum neural network method for image recognition, the method comprising:
quantum state encoding: preprocessing the MNIST data set and, according to the processing result, converting the data into the angle information corresponding to rotation-gate operations;
constructing a quantum neural network: optimizing the learning process of the quantum neural network through decomposition calculation;
characterization of measurement results: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
2. The method of claim 1, wherein the quantum state encoding comprises:
step 1: before encoding, preprocessing the MNIST data set by performing PCA on it;
step 2: after PCA, retaining the 7 most significant dimensions and then normalizing, mapping the data into (-1, 1) to obtain x = {x_0, x_1, ..., x_6};
step 3: according to the preprocessed numerical information, converting it into the angle information corresponding to rotation-gate operations, namely the rotation angles θ_i^z and θ_i^y of the i-th qubit about the z and y axes, respectively (the exact inverse-trigonometric formulas are given only as images in the original);
each encoded initial qubit is thus in the quantum state |ψ_i> = R_z(θ_i^z)·R_y(θ_i^y)|0>;
the encoded quantum state of one complete picture is the tensor product of the 7 initial qubits: |ψ_in> = |ψ_0> ⊗ |ψ_1> ⊗ ... ⊗ |ψ_6>.
3. The method of claim 1, wherein constructing the quantum neural network comprises:
step 1: forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ... U_ent · U_0, where U is the overall evolution matrix, U_out is the last-layer single-qubit gate operation, U_ent is the set of two-qubit entangling gates, U_k denotes the k-th layer single-qubit gate operator, and U_0 denotes the 0-th layer single-qubit gate operator;
the final output is:
U·|ψ_in> = |ψ_out>, where |ψ_in> and |ψ_out> denote the input and output quantum states, respectively;
this is the forward-propagation process;
step 2: defining a loss function L(θ) over each batch (its exact formula is given only as an image in the original), where D denotes the data set to be learned, M the number of training samples per batch, θ the parameters the network needs to learn (the angles θ^z and θ^y), y the label of the training data, and i the index of the i-th training sample;
π(x; θ) denotes the probability that the most significant qubit is measured as 1 after forward propagation, specifically defined as:
π(x; θ) = p(q_{n-1} = 1 | x, θ)
i.e., the probability that measuring the (n-1)-th qubit yields the result 1;
the parameters are learned by gradient descent, namely:
θ^(t+1) = θ^(t) - η · ∂L(θ)/∂θ, evaluated at θ^(t)
where t denotes the t-th iteration and η the learning rate;
the key is to evaluate ∂L(θ)/∂θ, which follows from the definition of the loss function;
step 3: back-propagation optimization:
from the definition of the π(x; θ) function, one obtains an expression for ∂π(x; θ)/∂θ as the real part Re{·} of an inner product between <∂_θ ψ_out| and |ψ_out> (the full expressions are given only as images in the original), where <∂_θ ψ_out| denotes taking the partial derivative of |ψ_out> with respect to θ and then the complex conjugate of the whole;
the derivative of |ψ_out> with respect to the k-th layer parameters can be further decomposed as
∂U/∂θ_k = U_out · U_ent · U_L ... (∂U_k/∂θ_k) ... U_ent · U_0
where L denotes the maximum number of layers of the network;
through the above definitions and decomposition calculation, parameter learning and optimization in the network back-propagation process are determined.
4. The method of claim 1, wherein the measurement characterization comprises:
characterization with 4 qubits, which can represent 2^4 = 16 quantum states; 10 of these states are taken to represent the ten digits 0-9, and the quantum state with the highest probability after forward propagation represents the corresponding output result.
5. A quantum neural network system for image recognition, the system comprising:
a quantum state encoding module: preprocessing the MNIST data set and, according to the processing result, converting the data into the angle information corresponding to rotation-gate operations;
constructing a quantum neural network module: optimizing the learning process of the quantum neural network through decomposition calculation;
a measurement characterization module: finding the quantum state with the maximum probability from the measurement results to realize image recognition.
6. The system of claim 5, wherein the quantum state encoding module comprises:
before encoding, carrying out a pretreatment on MNIST data set data, and carrying out PCA treatment on the data set;
after PCA processing, the most main 7-dimension is reserved, then normalization processing is carried out, the data are normalized to (-1, 1), and data x is obtained as { x ═ x0,x1,…,x6};
According to the numerical value information after the pretreatment, converting the numerical value information into angle information corresponding to the operation of the revolving door, namely:
[Formula FDA0002869028670000031]
[Formula FDA0002869028670000032]
where [Formula FDA0002869028670000033] and [Formula FDA0002869028670000034] represent the rotation angles of the i-th qubit about the z and y axes, respectively;
thus each encoded initial qubit is in the quantum state
[Formula FDA0002869028670000035]
and the quantum state encoding one complete picture's information is formed from the 7 initial qubits:
[Formula FDA0002869028670000036]
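Since the exact angle formulas above are only present as formula images, the encoding step can only be sketched under an assumption: the QCL-style mapping θ_y = arcsin(x_i), θ_z = arccos(x_i^2) used below is a common convention, not necessarily the patent's. The 7 single-qubit states are combined by tensor product:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    """Single-qubit rotation about the z axis."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def encode(x):
    """Map 7 preprocessed values in (-1, 1) to a 7-qubit product state."""
    state = np.array([1.0], dtype=complex)
    for xi in x:
        theta_y = np.arcsin(xi)        # ASSUMED angle mapping (QCL-style);
        theta_z = np.arccos(xi ** 2)   # the patent's formulas are images
        qubit = rz(theta_z) @ ry(theta_y) @ np.array([1, 0], dtype=complex)
        state = np.kron(state, qubit)  # tensor product over the 7 qubits
    return state

psi_in = encode(np.linspace(-0.9, 0.9, 7))
print(psi_in.shape)                    # a 2**7 = 128-amplitude state vector
```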
7. The system of claim 5, wherein the constructing the quantum neural network module comprises:
forward propagation:
according to the network structure:
U = U_out · U_ent · U_k ... U_ent · U_0
where U is the overall evolution matrix, U_out is the single-qubit gate action of the last layer, U_ent is the set of two-qubit entangling gates, U_k denotes the single-qubit gate operator of the k-th layer, and U_0 denotes the single-qubit gate operator of layer 0;
the final output is:
U·|ψ_in> = |ψ_out>
where |ψ_in> and |ψ_out> represent the input quantum state and the output quantum state, respectively;
this is a forward propagation process;
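The layered evolution U = U_out · U_ent · ... · U_0 acting on |ψ_in> can be sketched with random unitaries standing in for the parameterized gate layers (illustrative only: a 3-qubit toy register instead of the patent's 7, and the layer contents are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 2 ** 3                           # toy 3-qubit register

def random_unitary(d):
    """Random unitary via QR decomposition (placeholder for a gate layer)."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# U = U_out . U_ent . U_k ... U_ent . U_0 : here five placeholder layers.
layers = [random_unitary(dim) for _ in range(5)]
U = np.linalg.multi_dot(layers)

psi_in = np.zeros(dim, dtype=complex)
psi_in[0] = 1.0                        # |000>
psi_out = U @ psi_in                   # forward propagation U|psi_in> = |psi_out>
print(psi_out.shape)
```

Because every layer is unitary, the output state stays normalized, which is what makes the probabilistic readout of the next step well defined.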
defining a loss function:
[Formula FDA0002869028670000037]
where D denotes the data set to be learned and M denotes the number of learning samples in each batch; θ denotes the parameters the network needs to learn (the angles [Formula FDA0002869028670000038] and [Formula FDA0002869028670000039]); y denotes the label of a learning sample, and i indexes the i-th learning sample;
π (x; θ) represents the probability that the most significant bit measurement after forward propagation is 1, which is specifically defined as:
[Formula FDA00028690286700000310]
where p(q_{n-1} = 1; x, θ) represents the probability that the measurement result of the (n-1)-th qubit is 1;
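Given the output amplitudes, π(x; θ) is the total probability of all basis states whose most significant bit (qubit n-1) is 1; a sketch with a hypothetical 4-qubit output state:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                                   # hypothetical register size
amps = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
amps /= np.linalg.norm(amps)            # hypothetical normalized |psi_out>

# pi(x; theta): total probability that qubit n-1 (the most significant bit
# of the basis-state index) is measured as 1, i.e. indices 2**(n-1)..2**n - 1.
probs = np.abs(amps) ** 2
pi_val = probs[2 ** (n - 1):].sum()
print(pi_val)
```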
the learning process of the parameters is carried out by adopting a gradient descent method, namely:
[Formula FDA00028690286700000311]
where the key is to solve
[Formula FDA00028690286700000312]
and t denotes the t-th iteration and η denotes the learning rate; from the definition of the loss function one obtains:
[Formula FDA00028690286700000313]
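The update rule θ^(t+1) = θ^t - η ∂L/∂θ can be sketched with a finite-difference gradient on a toy differentiable stand-in for π(x; θ) (illustrative only; the sigmoid model and the cross-entropy form of the loss are assumptions, since the patent's loss is given as a formula image):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(32, 7))            # batch of M = 32 samples, 7 features
y = rng.integers(0, 2, size=32)         # binary labels
theta = rng.normal(size=7)              # parameters theta to learn

def pi_model(theta):
    """Toy differentiable stand-in for the circuit probability pi(x; theta)."""
    return 1.0 / (1.0 + np.exp(-X @ theta))

def loss(theta):
    """Cross-entropy over the batch (an assumed form of L(theta))."""
    p = np.clip(pi_model(theta), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

eta, eps = 0.2, 1e-6                    # learning rate and finite-difference step
l0 = loss(theta)
for t in range(60):                     # theta^(t+1) = theta^t - eta * dL/dtheta
    grad = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                     for e in np.eye(7)])
    theta = theta - eta * grad
print(loss(theta) < l0)
```

For the actual quantum circuit, the gradient of each angle would instead be obtained from the decomposition of the π(x; θ) derivative described in the back-propagation step of the claim.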
back propagation optimization:
according to the definition of the π(x; θ) function, the following is obtained:
[Formula FDA00028690286700000314]
[Formula FDA00028690286700000315]
where
[Formula FDA00028690286700000316]
denotes taking the partial derivative with respect to
[Formula FDA00028690286700000317]
and then taking the complex conjugate of the whole expression, with Re{ } denoting the real part of a complex number;
where
[Formula FDA0002869028670000041]
can be further decomposed into:
[Formula FDA0002869028670000042]
and L denotes the maximum number of layers in the network; the definitions and decomposition above determine the parameter learning and optimization of the network's back-propagation process.
8. The system of claim 5, wherein the measurement characterization module comprises:
the output is characterised by 4 qubits; 4 qubits can represent 2^4 = 16 quantum states, of which ten are taken to represent the digits 0 to 9, and after forward propagation the quantum state with the highest probability represents the corresponding output result.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
CN202011601932.3A 2020-12-29 2020-12-29 Quantum neural network method, system and medium for image recognition Pending CN112613571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011601932.3A CN112613571A (en) 2020-12-29 2020-12-29 Quantum neural network method, system and medium for image recognition

Publications (1)

Publication Number Publication Date
CN112613571A true CN112613571A (en) 2021-04-06

Family

ID=75249027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011601932.3A Pending CN112613571A (en) 2020-12-29 2020-12-29 Quantum neural network method, system and medium for image recognition

Country Status (1)

Country Link
CN (1) CN112613571A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033703A (en) * 2021-04-21 2021-06-25 北京百度网讯科技有限公司 Quantum neural network training method and device, electronic device and medium
CN114821147A (en) * 2021-04-29 2022-07-29 合肥本源量子计算科技有限责任公司 Display method and device for quantum image recognition
CN114821147B (en) * 2021-04-29 2023-08-08 本源量子计算科技(合肥)股份有限公司 Quantum image recognition display method and device
CN113255747A (en) * 2021-05-14 2021-08-13 山东英信计算机技术有限公司 Quantum multichannel convolutional neural classification method, system, terminal and storage medium
CN114821217B (en) * 2021-06-28 2023-08-08 本源量子计算科技(合肥)股份有限公司 Image recognition method and device based on quantum classical hybrid neural network
CN114821217A (en) * 2021-06-28 2022-07-29 合肥本源量子计算科技有限责任公司 Image identification method and device based on quantum classical hybrid neural network
CN113361664A (en) * 2021-08-10 2021-09-07 北京航空航天大学 Image recognition system and method based on quantum convolution neural network
CN114764620A (en) * 2021-12-31 2022-07-19 合肥本源量子计算科技有限责任公司 Quantum convolution manipulator
CN114764620B (en) * 2021-12-31 2024-04-09 本源量子计算科技(合肥)股份有限公司 Quantum convolution operator
CN114863167A (en) * 2022-04-22 2022-08-05 苏州浪潮智能科技有限公司 Method, system, equipment and medium for image recognition and classification
CN114863167B (en) * 2022-04-22 2024-02-02 苏州浪潮智能科技有限公司 Image recognition and classification method, system, equipment and medium
CN114819166A (en) * 2022-05-27 2022-07-29 北京大学 Evolution method and device of quantum system
CN115630690A (en) * 2022-12-21 2023-01-20 量子科技长三角产业创新中心 Sampling method and related device for quantum neural network structure optimization

Similar Documents

Publication Publication Date Title
CN112613571A (en) Quantum neural network method, system and medium for image recognition
Li et al. A quantum deep convolutional neural network for image recognition
Zhang et al. An overview on restricted Boltzmann machines
CN111191002B (en) Neural code searching method and device based on hierarchical embedding
CN109086802B (en) Image classification method based on eight-element convolution neural network
CN112464003B (en) Image classification method and related device
Di et al. Amplitude transformed quantum convolutional neural network
CN111008224A (en) Time sequence classification and retrieval method based on deep multitask representation learning
CN109740758B (en) Quantum computation-based nuclear method
Menaga et al. Deep learning: a recent computing platform for multimedia information retrieval
CN115795065A (en) Multimedia data cross-modal retrieval method and system based on weighted hash code
WO2022222854A1 (en) Data processing method and related device
Srinivas et al. A comprehensive survey of techniques, applications, and challenges in deep learning: A revolution in machine learning
CN115062727A (en) Graph node classification method and system based on multi-order hypergraph convolutional network
Altares-López et al. AutoQML: Automatic generation and training of robust quantum-inspired classifiers by using evolutionary algorithms on grayscale images
Inkeaw et al. Density based semi-automatic labeling on multi-feature representations for ground truth generation: Application to handwritten character recognition
Khan et al. Efficiently processing big data in real-time employing deep learning algorithms
ElAdel et al. Fast DCNN based on FWT, intelligent dropout and layer skipping for image retrieval
Xia et al. Efficient synthesis of compact deep neural networks
CN112735604B (en) Novel coronavirus classification method based on deep learning algorithm
CN114219092A (en) Data processing method and system
CN114511097A (en) Mutual learning method and system based on quantum circuit
CN115115920A (en) Data training method and device
CN114550849A (en) Method for solving chemical molecular property prediction based on quantum graph neural network
Chen et al. Defining and Extracting generalizable interaction primitives from DNNs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210406