CN112381233A - Data compression method and device, electronic equipment and storage medium - Google Patents

Data compression method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112381233A
CN112381233A (application CN202011316785.5A)
Authority
CN
China
Prior art keywords
quantum
matrix
quantum state
information matrix
data
Prior art date
Legal status
Pending
Application number
CN202011316785.5A
Other languages
Chinese (zh)
Inventor
王鑫
宋旨欣
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011316785.5A
Publication of CN112381233A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 10/00 — Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Mathematics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a data compression method relating to the technical fields of quantum singular value decomposition and quantum machine learning. The specific implementation scheme is as follows: acquiring a pre-prepared set of orthogonal quantum state vectors and an original information matrix of the data to be compressed, wherein the set of orthogonal quantum state vectors comprises a plurality of quantum state vectors that are mutually orthogonal; estimating a singular value sequence of the original information matrix based on the set of orthogonal quantum state vectors and the original information matrix using a quantum neural network, and calculating a loss of the quantum neural network based on the estimated singular value sequence; and, in a case where the loss satisfies a preset condition, calculating a compressed information matrix of the data to be compressed based on the quantum state matrices generated in the process of estimating the singular value sequence and on the estimated singular value sequence. The application also discloses a data compression apparatus, an electronic device and a storage medium.

Description

Data compression method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of quantum computing, in particular to quantum singular value decomposition and quantum machine learning technology. More specifically, the application provides a data compression method, a data compression device, an electronic device and a storage medium.
Background
Quantum computers are moving toward larger scale and practical use, and, as in classical information processing, efficient compression and storage of data is an extremely important issue.
At present, classical data can be compressed on classical computers, but there is still a large technical gap in realizing the compression of quantum state data on quantum computers.
Disclosure of Invention
A data compression method, apparatus, electronic device and storage medium are provided.
According to a first aspect, there is provided a data compression method comprising: acquiring a pre-prepared orthogonal quantum state vector group and an original information matrix of data to be compressed, wherein the orthogonal quantum state vector group comprises a plurality of quantum state vectors which are orthogonal to each other; estimating a singular value sequence of the original information matrix based on the orthogonal quantum state vector set and the original information matrix using a quantum neural network, and calculating a loss of the quantum neural network based on the estimated singular value sequence; and under the condition that the loss meets a preset condition, calculating a compression information matrix of the data to be compressed based on the quantum state matrix generated in the process of estimating the singular value sequence and the estimated singular value sequence.
According to a second aspect, there is provided a data compression apparatus comprising: an acquisition module for acquiring a pre-prepared set of orthogonal quantum state vectors and an original information matrix of data to be compressed, wherein the set of orthogonal quantum state vectors comprises a plurality of quantum state vectors that are mutually orthogonal; an estimation module for estimating a singular value sequence of the original information matrix based on the set of orthogonal quantum state vectors and the original information matrix using a quantum neural network, and for computing a loss of the quantum neural network based on the estimated singular value sequence; and a calculation module for calculating, in a case where the loss satisfies a preset condition, a compressed information matrix of the data to be compressed based on the quantum state matrices generated in the process of estimating the singular value sequence and on the estimated singular value sequence.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a data compression method provided in accordance with the present application.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to execute a data compression method provided according to the present application.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram of singular value decomposition of an information matrix of stored data on a classical computer according to one embodiment of the present application;
FIG. 2 is an exemplary system architecture to which the data compression methods and apparatus may be applied, according to one embodiment of the present application;
FIG. 3 is a flow diagram of a method of data compression according to an embodiment of the present application;
FIG. 4 is a flow diagram of a method of data compression according to another embodiment of the present application;
FIG. 5 is a flow diagram of a method of estimating a sequence of singular values of an original information matrix according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a Hadamard test model according to an embodiment of the present application;
FIG. 7 is a flow diagram of a method of calculating a compression information matrix for data to be compressed according to one embodiment of the present application;
FIG. 8 is a schematic diagram of a data compression system according to an embodiment of the present application;
FIG. 9 is a block diagram of a data compression apparatus according to an embodiment of the present application; and
FIG. 10 is a block diagram of an electronic device for a method of data compression according to one embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Singular Value Decomposition (SVD) has been verified on a classical computer to be an efficient data compression technique and to enable principal component analysis of data.
FIG. 1 is a schematic diagram of singular value decomposition of an information matrix of stored data on a classical computer according to one embodiment of the present application.
As shown in FIG. 1, the principle of singular value decomposition is to decompose the information matrix M_{m×n} of stored data into three sub-matrices U_{m×m}, D_{m×n} and V†_{n×n}, so that

M = U D V†,

where all r singular values of the matrix M are stored in decreasing order on the main diagonal of the matrix D (σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0), r denotes the rank of the matrix, U and V are unitary matrices, and the symbol † denotes the conjugate-transpose operation on a matrix. When the matrix M is reconstructed by matrix multiplication using only the first T (T ≤ r) singular values, the main information of the information matrix is extracted and the compression of redundant information is completed. Realizing this singular value decomposition principle from classical computers on near-term noisy intermediate-scale quantum (NISQ) devices would be an important technological innovation for quantum data compression and for the frontier field of quantum machine learning. Especially on near-term quantum devices, the relatively limited number of qubits (within 50-100) leads to hardware memory limitations, so data compression itself is a fundamental issue that needs attention. Realizing a feasible quantum singular value decomposition can also improve the feature extraction and quantum data compression capability on a quantum computer.
FIG. 2 is an exemplary system architecture 200 to which the data compression methods and apparatus may be applied, according to one embodiment of the present application. It should be noted that fig. 2 is only an example of a system architecture to which embodiments of the present application may be applied, intended to help those skilled in the art understand the technical content of the application; it does not mean that the embodiments cannot be applied to other devices, systems, environments or scenarios.
As shown in fig. 2, a system architecture 200 according to this embodiment may include a classical device 210 (which may also be referred to as a classical computer) and a quantum device 220 (which may also be referred to as a quantum computer). Qubits are the basic units storing quantum state vectors, and the quantum states on the qubits may be manipulated in the quantum device 220 so that the quantum states change. The quantum device 220 may include a quantum neural network 221, which may also be referred to as a parameterized quantum circuit model. Similar to a classical circuit, the parameterized quantum circuit model may be composed of a plurality of quantum logic gates operating on qubits. For example, a quantum logic gate may be a rotation gate acting on a single qubit, rotating the quantum state vector of that qubit by a certain angle, with the rotation angle serving as a parameter of the quantum circuit model. Thus, the quantum neural network 221 includes a plurality of parameters, which may form a parameter vector.
The parameters of the quantum neural network 221 determine the output of the circuit model; therefore, for raw quantum state data input to the quantum neural network 221, the output target quantum state data can be determined from the determined parameter vector. Since quantum state data cannot be copied (by the no-cloning theorem), it can be reproduced indirectly by saving the parameters of the quantum neural network 221. For example, when the quantum neural network 221 has a target parameter vector, acting on the original quantum state data yields the target quantum state data; therefore, by saving the target parameter vector, whenever the target quantum state data is needed, the original quantum state data is input to the quantum neural network 221 so that the network outputs the target quantum state data.
Illustratively, the target quantum state data may be compressed data of the original quantum state data. The quantum neural network 221, with its parameters adjusted, may act on the original information matrix A of the input original quantum state data and output a compressed information matrix A' of the compressed target quantum state data. A preset quantum state preparation program is then called according to the compressed information matrix A' to prepare the compressed target quantum state data. Compared with the original quantum state data, the compressed data retains the main information of the original data while compressing away redundant information such as noise, so the storage space can be effectively reduced. Moreover, because the main information of the original quantum state data is extracted, principal component analysis of the data can be realized; since principal component analysis is widely applied in recommendation systems, realizing the compression of quantum state data on a quantum computer is of significant importance.
The quantum device 220 may also be used to compress classical data. Illustratively, referring to fig. 2, an object B to be compressed in the classical device 210 is acquired, which may be a data object such as an image, an icon or a table. In the quantum device 220, the original information matrix of the object B is input into the quantum neural network 221, which outputs the compressed information matrix of the object B; the compressed information matrix can then be sent to the classical device, so that the compressed data object B' can be stored there with reduced storage space. Data compression on quantum computers therefore has potential applications in many fields, such as data compression and image processing.
It should be noted that most present quantum algorithms mainly process Hermitian matrices and do not handle non-Hermitian matrices well. The information matrices of classical data such as images and tables are mostly non-Hermitian, and according to the embodiments of the application, non-Hermitian matrices can be efficiently compressed and their principal components extracted, realizing the compression of quantum or classical data.
FIG. 3 is a flow diagram of a method of data compression according to one embodiment of the present application.
As shown in fig. 3, the data compression method 300 may include operations S310 to S330.
In operation S310, a pre-prepared orthogonal quantum state vector set and an original information matrix of data to be compressed are obtained, the orthogonal quantum state vector set including a plurality of quantum state vectors orthogonal to each other.
According to the embodiment of the application, in the classical data field, the data to be compressed may comprise images, graphs, tables and the like, and the data to be compressed can be represented as an information matrix. For example, an information matrix of an image is generated using the pixel value or luminance value of each pixel in the image as an element of the information matrix. For another example, an information matrix of a table is generated using the data in the table as elements of the matrix. In the quantum field, the information matrix may be the state of a physical system represented in matrix form. The information matrix may be an N × N square matrix, where N = 2^n and n is an integer greater than or equal to 1. If the information matrix is not a square matrix of dimension 2^n × 2^n, it may be converted into one by padding rows or columns of the matrix with zeros. For example, if the dimension of the information matrix M is 8 × 7, a column vector of all zeros may be appended as the last column, yielding an information matrix M of dimension 8 × 8.
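The zero-padding step described above can be sketched as follows; the helper name `pad_to_power_of_two` is ours, not from the application:

```python
import numpy as np

def pad_to_power_of_two(M):
    """Pad an information matrix with zero rows/columns up to the next 2^n x 2^n square."""
    N = 1 << (max(M.shape) - 1).bit_length()  # smallest power of two >= both dimensions
    padded = np.zeros((N, N), dtype=M.dtype)
    padded[:M.shape[0], :M.shape[1]] = M
    return padded

# Example from the text: an 8 x 7 matrix gains one all-zero last column to become 8 x 8
M = np.ones((8, 7))
print(pad_to_power_of_two(M).shape)  # -> (8, 8)
```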
According to embodiments of the application, the rank of the matrix may represent the degree of redundancy of the information contained in the matrix. For example, for an information matrix M of dimension 8 × 8, the rank of M is at most 8; if two columns of data in M are identical, the matrix contains the same information twice, i.e. redundant information, and the rank of the matrix decreases by 1. Therefore, if T is the desired degree to which the data is compressed, the value of T should be set to less than the rank of the matrix in order to achieve data compression. For example, calling the information matrix of the data to be compressed the original information matrix M, for an M of dimension 8 × 8 with rank 8, the compression degree T may be set to 4.
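The relation between duplicated columns and rank described above can be checked numerically (matrix values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.random((8, 8))
rank_before = np.linalg.matrix_rank(M)  # a random continuous matrix is full rank (8)

# Duplicating a column introduces redundant information: the rank drops by 1
M[:, 1] = M[:, 0]
rank_after = np.linalg.matrix_rank(M)
print(rank_before, rank_after)
```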
According to the embodiment of the application, a quantum state can be represented in vector form or in matrix form. The set of orthogonal quantum state vectors is a set of mutually orthogonal quantum state vectors: it includes a plurality of quantum state vectors, every two of which are orthogonal to each other. Mathematically, the set of orthogonal quantum state vectors can represent a set of orthonormal bases. For example, the set of orthogonal quantum state vectors can be represented as {|ψ_j⟩}_{j=1}^{T}, where T represents the preset degree of data compression, T is less than the rank of the original information matrix, and T is an integer. |ψ_j⟩ represents the j-th quantum state vector in the set, with 1 ≤ j ≤ T; that is, the set contains T quantum state vectors in total.
Illustratively, the quantum state vectors may be taken as computational basis states, e.g. |ψ_1⟩ = (1, 0, ..., 0)^T, |ψ_2⟩ = (0, 1, 0, ..., 0)^T, and so on.
It should be noted that quantum states can also be represented in matrix form, in which case a set of orthogonal quantum states can be represented as the matrices |ψ_1⟩⟨ψ_1|, |ψ_2⟩⟨ψ_2|, and so on.
According to an embodiment of the application, the dimension of the quantum state vector is related to the dimension of the original information matrix M, and if the original information matrix M is an N × N matrix, the quantum state vector is an N-dimensional vector. For example, the dimension of the original information matrix M is 8 × 8, and the dimension of the quantum state vector is 8 × 1. The number of quantum state vectors in the quantum state vector group is related to a predetermined degree of compression, for example, the predetermined degree of compression T is 4, and the quantum state vector group includes 4 quantum state vectors, that is, the quantum state vector group includes 4 quantum state vectors with dimensions of 8 × 1.
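Classically, such a set of mutually orthogonal state vectors can be sketched with computational basis vectors; the dimension N and compression degree T below are the illustrative values from the text:

```python
import numpy as np

N, T = 8, 4  # matrix dimension 8 x 8, preset compression degree T = 4

# The first T columns of the identity form T mutually orthogonal 8 x 1 state vectors
psi = [np.eye(N)[:, j] for j in range(T)]

# Pairwise inner products <psi_i|psi_j> give the identity matrix: an orthonormal set
gram = np.array([[psi[i] @ psi[j] for j in range(T)] for i in range(T)])
print(np.allclose(gram, np.eye(T)))  # -> True
```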
In operation S320, a singular value sequence of the original information matrix is estimated based on the orthogonal quantum state vector group and the original information matrix using the quantum neural network, and a loss of the quantum neural network is calculated based on the estimated singular value sequence.
According to embodiments of the application, the quantum neural network may comprise a parameterized quantum circuit model, which may be composed of a plurality of single-qubit rotation gates and controlled-NOT gates; the parameters of the parameterized quantum circuit model constitute the parameters (or parameter set) of the quantum neural network. Illustratively, the quantum neural network includes parameterized quantum circuits U(α) and V(β), each consisting of a plurality of single-qubit rotation gates and controlled-NOT gates. Taking U(α) as an example: U(α) comprises a plurality of rotation gates operating on single qubits and controlled-NOT gates operating on pairs of qubits. Each rotation gate acts on a qubit so that the quantum state vector of that qubit is rotated by a certain angle, and the rotation angle is a parameter of the rotation gate. The parameters α_1, α_2, ... of the rotation gates in U(α) constitute the parameter vector α. The controlled-NOT gate operates on two qubits; depending on the quantum state of the first (control) qubit, it either flips the quantum state of the second (target) qubit or leaves it unchanged. The controlled-NOT gate can, for example, be represented by the matrix

    CNOT = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 1],
            [0, 0, 1, 0]].

The quantum state matrix of U(α) can be obtained by reading the parameters of U(α). V(β) is similar to U(α) and will not be described again here. The parameters of the quantum neural network include the parameters of U(α) and the parameters of V(β).
According to the embodiment of the application, U(α) acts on the set of orthogonal quantum state vectors, and the quantum state matrix of U(α) can be read out; likewise, V(β) acts on the set of orthogonal quantum state vectors, and the quantum state matrix of V(β) can be read out. Illustratively, U(α) acting on |ψ_j⟩ gives |ψ_j'⟩, the j-th column vector of the read-out quantum state matrix of U(α); V(β) acting on |ψ_j⟩ gives |ψ_j''⟩, the j-th column vector of the read-out quantum state matrix of V(β). If the dimension of the original information matrix M to be compressed is 8 × 8, the dimension of |ψ_j⟩ is 8 × 1, and the dimensions of |ψ_j'⟩ and |ψ_j''⟩ are also 8 × 1. The conjugate transpose of |ψ_j'⟩ (dimension 1 × 8), multiplied by the original information matrix M (dimension 8 × 8), multiplied by |ψ_j''⟩ (dimension 8 × 1), yields a product that may be a complex number; the real part of this complex number is taken as the j-th estimated singular value of the original information matrix M. Finally, T estimated singular values are obtained, forming a singular value sequence that can be expressed as {m_1, ..., m_T}.
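A classical simulation of this estimation step might look like the following sketch; the random unitaries stand in for the trained circuits U(α) and V(β), and all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 8, 4
M = rng.random((N, N))  # illustrative original information matrix

def random_unitary(n):
    """A random n x n unitary (QR of a complex Gaussian), standing in for U(alpha)/V(beta)."""
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q

U_alpha, V_beta = random_unitary(N), random_unitary(N)

psi = np.eye(N)[:, :T]   # orthogonal input states |psi_j> as columns
psi_p = U_alpha @ psi    # |psi_j'>  = U(alpha)|psi_j>
psi_pp = V_beta @ psi    # |psi_j''> = V(beta)|psi_j>

# m_j = Re(<psi_j'| M |psi_j''>), the j-th estimated singular value
m = np.real(np.einsum('ij,ik,kj->j', psi_p.conj(), M, psi_pp))
print(m.shape)  # -> (4,)
```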
According to the embodiment of the application, the loss function of the quantum neural network can be calculated from the singular value sequence of the original information matrix M. The parameters of the quantum neural network are adjusted based on the loss function, and iterative training is carried out with the adjusted parameters until the loss function reaches the preset condition, which indicates that the quantum neural network outputs the optimal singular value sequence; the adjustment of the parameters then stops. Here, the loss function may be designed as a weighted sum of the singular values, and with maximization of the loss function as the objective, the loss function is optimized using gradient-based or other optimization methods; the last round of training ends when the loss function reaches its maximum, and the parameters of the quantum neural network after the last round of training are the optimal parameters.
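The weighted-sum loss described above can be sketched as follows; the singular value estimates and the decreasing weights are illustrative numbers, not from the application:

```python
import numpy as np

m = np.array([3.2, 2.1, 1.4, 0.6])  # estimated singular values {m_1, ..., m_T}
q = np.array([4.0, 3.0, 2.0, 1.0])  # pre-assigned decreasing weights, q_1 > ... > q_T > 0

# Loss of the quantum neural network: weighted sum of the estimated singular values;
# training adjusts the circuit parameters with the goal of maximizing this quantity
loss = float(q @ m)
print(loss)
```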
In operation S330, in a case where the loss satisfies the preset condition, a compressed information matrix of the data to be compressed is calculated based on the quantum state matrices generated in the process of estimating the singular value sequence and on the estimated singular value sequence.
According to the embodiment of the application, in the process of training the quantum neural network, U(α) acts on the set of orthogonal quantum state vectors, and the quantum state matrix of U(α) is read out and recorded as the first quantum state matrix; V(β) acts on the set of orthogonal quantum state vectors, and the quantum state matrix of V(β) is read out and recorded as the second quantum state matrix. Illustratively, the first quantum state matrix is the matrix composed of |ψ_1'⟩, |ψ_2'⟩, ..., |ψ_T'⟩, and the second quantum state matrix is the matrix composed of |ψ_1''⟩, |ψ_2''⟩, ..., |ψ_T''⟩. When the loss function satisfies the preset condition, the last round of training is finished; the parameters of the quantum neural network after the last round of training are the optimal parameters, and the singular value sequence output after the last round of training is the optimal singular value sequence. The vectors |ψ_1'⟩, |ψ_2'⟩, ..., |ψ_T'⟩ generated in the last round of training serve as the left singular vectors of the original information matrix M, and |ψ_1''⟩, |ψ_2''⟩, ..., |ψ_T''⟩ serve as the right singular vectors of M. A diagonal matrix of the original information matrix M can be generated based on the optimal singular value sequence; the values on its diagonal are the singular values of the sequence, in decreasing order. Based on the matrix composed of the left singular vectors, the matrix composed of the right singular vectors and the diagonal matrix, the information matrix of the compressed data can be obtained; it may be called the compressed information matrix M'.
Illustratively, denoting the matrix composed of the left singular vectors as U, the matrix composed of the right singular vectors as V, and the diagonal matrix generated from the optimal singular value sequence as D, the compressed information matrix of the data to be compressed is

M' = U D V†.
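The final reconstruction M' = U D V† can be sketched classically as follows; the trained singular vectors are replaced by random orthonormal columns, and the singular value sequence is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 8, 4

# Stand-ins for the matrices of left/right singular vectors (orthonormal columns)
U, _ = np.linalg.qr(rng.normal(size=(N, T)))
V, _ = np.linalg.qr(rng.normal(size=(N, T)))

# Diagonal matrix built from the optimal singular value sequence, in decreasing order
D = np.diag([3.2, 2.1, 1.4, 0.6])

# Compressed information matrix M' = U D V^dagger, a rank-T matrix
M_prime = U @ D @ V.conj().T
print(M_prime.shape, np.linalg.matrix_rank(M_prime))
```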
According to an embodiment of the application, a quantum neural network is used to estimate the singular values of the original information matrix based on the set of orthogonal quantum state vectors and the original information matrix; the loss of the quantum neural network is determined based on the estimated singular values; and, in a case where the loss satisfies a preset condition, the compressed information matrix is calculated based on the quantum state matrices and the singular value sequence generated in the process of estimating the singular values. Because the quantum neural network outputs the optimal singular value sequence when the loss satisfies the preset condition, calculating the compressed information matrix from that optimal sequence and from the quantum state matrices generated while producing it realizes the compression of quantum state data.
As those skilled in the art will appreciate, the embodiments of the application use the parameterized quantum circuits that near-term quantum devices can provide, and design a loss function based on the variational principle for singular values, thereby converting the quantum singular value decomposition problem into a quantum machine learning problem. Realizing a feasible quantum singular value decomposition can improve the feature extraction and quantum data compression capability on a quantum computer, efficiently compress non-Hermitian matrices and extract their principal components, and thus compress quantum or classical data.
It should be noted that calculating the compressed information matrix from the optimal singular value sequence and the quantum state matrices generated by the quantum neural network satisfying the preset condition can also realize the compression of classical data.
FIG. 4 is a flow diagram of a method of data compression according to another embodiment of the present application.
As shown in fig. 4, the data compression method 400 may include operations S410 to S440.
In operation S410, a pre-prepared set of orthogonal quantum state vectors, including a plurality of mutually orthogonal quantum state vectors, and an original information matrix of the data to be compressed are acquired.
According to the embodiment of the application, the data to be compressed can be images, graphs, tables and the like in the classical field, and can also be the state of a physical system represented in a matrix form in the quantum state field. The original information matrix M of the data to be compressed is an N × N square matrix, and N is 2nN is an integer of 1 or more, and the original information matrix M is a square matrix having dimensions of 8 × 8, for example. The data to be compressed may be compressed by setting a compression degree T in advance, where the compression degree T is smaller than the rank of the original information matrix, for example, T ═ 4.
The orthogonal quantum state vector group is a set of mutually orthogonal quantum state vectors, i.e., the group comprises a plurality of quantum state vectors, every two of which are mutually orthogonal. Mathematically, the orthogonal quantum state vector group can represent a set of orthonormal basis vectors. For example, the orthogonal quantum state vector group can be represented as
{|ψ_1⟩, |ψ_2⟩, …, |ψ_T⟩},
where T represents the preset degree of data compression and |ψ_j⟩ represents the j-th quantum state vector in the group, with 1 ≤ j ≤ T, i.e., the orthogonal quantum state vector group contains T quantum state vectors in total.
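The orthogonal group above can be sketched classically. A minimal sketch (the choice of computational-basis states as the group is an assumption for illustration; the patent only requires pairwise orthogonality):

```python
import numpy as np

n = 3
N = 2 ** n      # dimension of each quantum state vector, N = 2^n = 8
T = 4           # preset compression degree, T < rank(M)

# One concrete choice: the first T computational-basis states
# |0>, |1>, ..., |T-1> as the group {|psi_1>, ..., |psi_T>}.
psi = np.eye(N, dtype=complex)[:, :T]   # N x T; column j is |psi_j>

# every two distinct vectors in the group are mutually orthogonal,
# so their Gram matrix is the T x T identity
gram = psi.conj().T @ psi
```

Any unitary applied to these columns preserves their pairwise orthogonality, which is what allows U(α) and V(β) below to produce orthonormal candidate singular vectors.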
In operation S420, a singular value sequence of the original information matrix is estimated based on the orthogonal quantum state vector group and the original information matrix using the quantum neural network, and a loss of the quantum neural network is calculated based on the estimated singular value sequence.
According to embodiments of the application, the quantum neural network may include parameterized quantum circuit models U(α) and V(β). U(α) acts on the set of orthogonal quantum state vectors, from which the quantum state matrix of U(α) may be read out; V(β) likewise acts on the set of orthogonal quantum state vectors, from which the quantum state matrix of V(β) may be read out. Illustratively, U(α) acting on |ψ_j⟩ gives |ψ_j′⟩, the j-th column vector of the read-out quantum state matrix of U(α); V(β) acting on |ψ_j⟩ gives |ψ_j″⟩, the j-th column vector of the read-out quantum state matrix of V(β). If the dimension of the original information matrix M to be compressed is 8 × 8, the dimension of |ψ_j⟩ is 8 × 1, and the dimensions of |ψ_j′⟩ and |ψ_j″⟩ are also 8 × 1. The conjugate transpose of |ψ_j′⟩ (dimension 1 × 8) multiplied by the original information matrix M (dimension 8 × 8) and then by |ψ_j″⟩ (dimension 8 × 1) yields a product that may be a complex number; the real part of this complex number is taken as the j-th estimated singular value of the original information matrix M. Finally, T estimated singular values are obtained, forming a singular value sequence that may be expressed as {m_1, …, m_T}.
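The estimate m_j = Re(⟨ψ_j′| M |ψ_j″⟩) can be sketched classically. Below, random unitaries stand in for the circuits U(α) and V(β) (a hypothetical placeholder — a trained network would supply them); the dimension bookkeeping matches the 1 × 8, 8 × 8, 8 × 1 product described above:

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 8, 4
M = rng.normal(size=(N, N))     # original information matrix (8 x 8)

# Stand-ins for U(alpha) and V(beta): random unitaries via QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
V, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

psi = np.eye(N, dtype=complex)[:, :T]    # orthogonal inputs |psi_j>
psi_p = U @ psi                          # columns are |psi_j'>
psi_pp = V @ psi                         # columns are |psi_j''>

# m_j = Re( <psi_j'| M |psi_j''> ): (1 x 8) times (8 x 8) times (8 x 1)
m = np.array([(psi_p[:, j].conj() @ M @ psi_pp[:, j]).real for j in range(T)])
```

For unit vectors, each estimate is bounded in magnitude by the largest singular value of M, which is the quantity the training below pushes toward.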
According to an embodiment of the present application, a loss function of the quantum neural network may be calculated from the singular value sequence of the original information matrix M, where a weighted sum of the individual singular values may be used as the loss function. Specifically, based on weights assigned in advance to the individual singular values in the singular value sequence, their weighted sum is calculated as the loss of the quantum neural network. The pre-assigned weights decrease in sequence and may be denoted q = {q_1, …, q_T}, with q_1 > … > q_T > 0. The estimated singular value sequence {m_1, …, m_T} is weighted by the prepared weights, and the accumulated loss function is calculated as
L = q_1·m_1 + q_2·m_2 + … + q_T·m_T.
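The weighted-sum loss is a one-line computation; the numeric values below are made up for illustration:

```python
import numpy as np

# example estimated singular value sequence {m_1, ..., m_T} (made-up values)
m = np.array([3.2, 2.5, 1.1, 0.4])
# strictly decreasing positive weights q_1 > ... > q_T > 0
q = np.array([4.0, 3.0, 2.0, 1.0])

L = float(q @ m)    # L = sum_j q_j * m_j
```

Because the weights strictly decrease, maximizing L rewards placing the largest estimates first, which is what orders the recovered singular values.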
In operation S430, it is determined whether the loss of the quantum neural network satisfies a preset condition. If so, operation S440 is performed, and if not, operation S420 is returned to.
According to the embodiments of the application, with maximization of the loss function as the objective, whether the loss function satisfies the preset condition can be judged by checking whether the loss function has reached its maximum. During the iterative training in which the parameters of the quantum neural network are adjusted toward maximizing the loss function, a gradient-based method or another optimization method can be used to optimize the loss function. For example, if after the current round of training the difference between the loss function and that of the previous round is smaller than a certain threshold, such as 0.001, the loss function may be considered to have converged, that is, the loss function obtained in the current round of training is maximized and satisfies the preset condition, and operation S440 may then be performed. Otherwise, training continues until the loss function satisfies the preset condition.
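The convergence test of S430 can be sketched as a threshold check on successive losses. The 0.001 threshold is the example value given above; the loss trajectory here is synthetic, for illustration only:

```python
def converged(loss_now, loss_prev, threshold=0.001):
    """Preset condition from S430: successive-loss change below threshold."""
    return abs(loss_now - loss_prev) < threshold

# synthetic monotone trajectory approaching its maximum value 1.0:
# losses[i] = 1 - 0.5**(i+1), so the step-to-step change halves each round
losses = [1.0 - 0.5 ** k for k in range(1, 20)]

# index of the first round at which the preset condition is satisfied
stop = next(k for k in range(1, len(losses))
            if converged(losses[k], losses[k - 1]))
```

The change at round k is 0.5^(k+1), which first drops below 0.001 at k = 9; an actual training loop would return to S420 for every earlier round.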
In operation S440, a compression information matrix of data to be compressed is calculated based on a quantum state matrix generated in the process of estimating a singular value sequence using a quantum neural network and the estimated singular value sequence.
According to an embodiment of the application, when the loss function reaches its maximum, the last round of training is finished; the parameters of the quantum neural network after the last round of training are the optimal parameters, and the singular value sequence output by the quantum neural network after the last round of training is the optimal singular value sequence. The |ψ_1′⟩, |ψ_2′⟩, …, |ψ_T′⟩ generated in the last round of training serve as left singular vectors of the original information matrix M, and |ψ_1″⟩, |ψ_2″⟩, …, |ψ_T″⟩ serve as right singular vectors of the original information matrix M. A diagonal matrix of the original information matrix M can be generated from the optimal singular value sequence; the values on the diagonal of this matrix are the singular values in the sequence, in decreasing order. From the matrix formed by the left singular vectors, the matrix formed by the right singular vectors, and the diagonal matrix generated from the optimal singular value sequence, the compressed information matrix M′ of the data to be compressed can be obtained. Illustratively, the matrix formed by the left singular vectors is denoted U and the matrix formed by the right singular vectors is denoted V, i.e.,
U = [|ψ_1′⟩, |ψ_2′⟩, …, |ψ_T′⟩], V = [|ψ_1″⟩, |ψ_2″⟩, …, |ψ_T″⟩].
If the diagonal matrix generated from the optimal singular value sequence is denoted D, the compressed information matrix of the data to be compressed is
M′ = U D V†,
where V† denotes the conjugate transpose of V.
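The construction M′ = U D V† can be checked classically with an exact SVD standing in for the trained quantum neural network (an assumption for illustration — a network satisfying the preset condition would output the same top-T singular triplets):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 8, 4
M = rng.normal(size=(N, N))

# Exact classical SVD as a stand-in for the trained network:
# keep only the top-T singular triplets (singular values sorted descending).
U_full, s, Vh_full = np.linalg.svd(M)
U = U_full[:, :T]      # left singular vectors (columns |psi_j'>)
D = np.diag(s[:T])     # diagonal matrix from the optimal singular values
Vh = Vh_full[:T, :]    # rows are conjugate-transposed right singular vectors

M_prime = U @ D @ Vh   # compressed information matrix M' = U D V^dagger
```

M′ has rank T, and by the Eckart–Young theorem it is the best rank-T approximation of M: the spectral-norm error equals the (T+1)-th singular value.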
According to the embodiments of the application, iterative training is carried out with maximization of the loss function as the objective for adjusting the parameters of the quantum neural network; in the round of training in which the loss function reaches its maximum, the parameters of the quantum neural network are optimal, and the singular value sequence output by the quantum neural network is also optimal. The compressed information matrix is then calculated from the quantum state matrices generated in that loss-maximizing round of training and the diagonal matrix generated from the singular value sequence, thereby realizing the compression of the data.
FIG. 5 is a flow diagram of a method of estimating a sequence of singular values of an original information matrix according to one embodiment of the present application.
As shown in fig. 5, the method may include operations S521 through S523.
In operation S521, a first quantum state matrix is obtained based on the set of orthogonal quantum state vectors using a first parameterized quantum circuit model.
In operation S522, a second quantum state matrix is obtained based on the set of orthogonal quantum state vectors using a second parameterized quantum circuit model.
In operation S523, a singular value sequence of the original information matrix is estimated based on the first quantum state matrix, the original information matrix, and the second quantum state matrix.
According to an embodiment of the application, the first parameterized quantum circuit model may be U(α), which acts on |ψ_j⟩ to give |ψ_j′⟩, the j-th column vector of the read-out quantum state matrix of U(α). |ψ_1′⟩, |ψ_2′⟩, …, |ψ_T′⟩ may form a quantum state matrix, which may be referred to as the first quantum state matrix. If the dimension of the original information matrix M is 8 × 8, the dimension of the quantum state vector |ψ_j′⟩ is 8 × 1; with compression degree T = 4, the dimension of the first quantum state matrix is 8 × 4.
The second parameterized quantum circuit model may be V(β), which acts on |ψ_j⟩ to give |ψ_j″⟩, the j-th column vector of the read-out quantum state matrix of V(β). |ψ_1″⟩, |ψ_2″⟩, …, |ψ_T″⟩ may form a quantum state matrix, which may be referred to as the second quantum state matrix. If the dimension of the original information matrix M is 8 × 8, the dimension of the quantum state vector |ψ_j″⟩ is 8 × 1; with compression degree T = 4, the dimension of the second quantum state matrix is 8 × 4.
Illustratively, according to the matrix multiplication rule, the conjugate transpose matrix (dimension is 4 × 8) of the first quantum state matrix is multiplied by the original information matrix M (dimension is 8 × 8) and multiplied by the second quantum state matrix (dimension is 8 × 4), resulting in a diagonal matrix, and the elements on the diagonal of the diagonal matrix are singular values.
Illustratively, it can also be understood in terms of products of vectors and matrices. The conjugate transpose vector (dimension 1 × 8) of each column |ψ_j′⟩ of the first quantum state matrix is multiplied by the original information matrix M (dimension 8 × 8) and then by |ψ_j″⟩ (dimension 8 × 1); the product may be a complex number, whose real part is taken as the j-th estimated singular value. Finally, T estimated singular values are obtained, forming a singular value sequence that may be expressed as {m_1, …, m_T}; this sequence may generate a diagonal matrix.
According to the embodiments of the application, the singular value estimation process can be realized through a Hadamard test model.
FIG. 6 is a schematic diagram of a Hadamard test model according to one embodiment of the present application.
As shown in fig. 6, the Hadamard test model 600 includes a first operation submodel 610 and a second operation submodel 620.
According to an embodiment of the present application, the circuit form of the first operation submodel 610 is shown in FIG. 6. The first operation submodel operates on the input orthogonal quantum state vector group {|ψ_1⟩, …, |ψ_T⟩} as follows: U(α) acts on |ψ_j⟩ to obtain |ψ_j′⟩, whose conjugate transpose ⟨ψ_j′| is taken; V(β) acts on |ψ_j⟩ to obtain |ψ_j″⟩; the conjugate transpose of |ψ_j′⟩ is then multiplied by the original information matrix M and by |ψ_j″⟩. It can be understood that, since both U(α) and V(β) act on {|ψ_1⟩, …, |ψ_T⟩}, the input can be considered to contain two identical orthogonal quantum state vector groups. The second operation submodel 620 takes the real part of the output result ⟨ψ_j′|M|ψ_j″⟩ to obtain the estimated singular value, expressed as m_j = Re⟨ψ_j′|M|ψ_j″⟩.
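The real-part readout performed by the second operation submodel can be illustrated with a minimal classical simulation of a standard Hadamard test, which estimates Re⟨ψ|W|ψ⟩ for a unitary W. This is a simplified sketch of the ancilla-based readout principle only; the patent's circuit of FIG. 6 applies the same idea to ⟨ψ_j′|M|ψ_j″⟩, which this sketch does not reproduce:

```python
import numpy as np

def hadamard_test_real(W, psi):
    """Classically simulate a Hadamard test estimating Re<psi|W|psi>:
    ancilla |0>, H on ancilla, controlled-W, H on ancilla; then
    P(ancilla = 0) - P(ancilla = 1) = Re<psi|W|psi>."""
    d = len(psi)
    I = np.eye(d)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.kron(np.array([1.0, 0.0]), psi).astype(complex)  # |0>|psi>
    state = np.kron(H, I) @ state                               # H on ancilla
    cW = np.block([[I, np.zeros((d, d))],
                   [np.zeros((d, d)), W]])                      # |1><1| (x) W
    state = cW @ state
    state = np.kron(H, I) @ state                               # H on ancilla
    p0 = np.linalg.norm(state[:d]) ** 2                         # P(ancilla = 0)
    return 2 * p0 - 1

# example: W = Pauli-X and |psi> = (|0> + |1>)/sqrt(2), so Re<psi|X|psi> = 1
X = np.array([[0.0, 1.0], [1.0, 0.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
```

Since P(0) = (1 + Re⟨ψ|W|ψ⟩)/2 after the second Hadamard, the difference of the two outcome probabilities recovers exactly the real part that becomes the singular value estimate.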
Fig. 7 is a flowchart of a method of calculating a compression information matrix of data to be compressed according to one embodiment of the present application.
As shown in fig. 7, the method may include operations S731 to S733.
In operation S731, a conjugate transpose of a second quantum state matrix is determined.
In operation S732, a third quantum state matrix is generated based on the estimated sequence of singular values.
In operation S733, the product of the first quantum state matrix, the third quantum state matrix, and the conjugate transpose of the second quantum state matrix is calculated as the compressed information matrix of the data to be compressed.
According to the embodiments of the application, in the process of quantum neural network training, U(α) acts on the group of orthogonal quantum state vectors {|ψ_1⟩, …, |ψ_T⟩}, and the quantum state matrix of U(α) is read out and recorded as the first quantum state matrix, composed of |ψ_1′⟩, |ψ_2′⟩, …, |ψ_T′⟩. V(β) acts on the same group of orthogonal quantum state vectors, and the quantum state matrix of V(β) is read out and recorded as the second quantum state matrix, composed of |ψ_1″⟩, |ψ_2″⟩, …, |ψ_T″⟩.
When the loss function satisfies the preset condition, the last round of training is finished; the parameters of the quantum neural network after the last round of training are the optimal parameters, and the singular value sequence output by the quantum neural network after the last round of training is the optimal singular value sequence. The |ψ_1′⟩, |ψ_2′⟩, …, |ψ_T′⟩ generated in the last round of training serve as the left singular vectors of the original information matrix M, and the first quantum state matrix they form is recorded as U. The conjugate transposes of |ψ_1″⟩, |ψ_2″⟩, …, |ψ_T″⟩ serve as the right singular vectors of the original information matrix M, and the conjugate transpose of the second quantum state matrix they form is recorded as V†. The diagonal matrix of the original information matrix M can be generated from the optimal singular value sequence and recorded as D; the compressed information matrix of the data to be compressed is then
M′ = U D V†.
FIG. 8 is a block diagram of a data compression system according to an embodiment of the present application.
As shown in fig. 8, data compression system 800 includes classical apparatus 810 and quantum apparatus 820, where classical apparatus 810 includes setting module 811, calculation module 812, and optimization module 813, and quantum apparatus 820 includes first parameterized quantum circuit model U(α) 821, second parameterized quantum circuit model V(β) 822, and Hadamard test model 823, where U(α) has a parameter vector α and V(β) has a parameter vector β. The first parameterized quantum circuit model U(α) 821, the second parameterized quantum circuit model V(β) 822, and the Hadamard test model 823 constitute a quantum neural network, and the parameters of the quantum neural network include α and β.
The steps of data compression based on the data compression system 800 are as follows:
Step 1, obtain the original information matrix M of the object to be compressed, where M is a square matrix of dimension 2^n × 2^n; if M is not such a square matrix, it may be converted into one by padding zeros into the rows or columns of the matrix. For example, M may be an 8 × 8 square matrix. The data to be compressed can be images, graphs, tables, and the like in the classical domain, or the state of a physical system represented in matrix form in the quantum domain.
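The zero-padding of step 1 can be sketched as follows; the helper name is hypothetical:

```python
import numpy as np

def pad_to_power_of_two_square(M):
    """Zero-pad a matrix so it becomes a 2^n x 2^n square matrix,
    as described in step 1 (the helper name is hypothetical)."""
    r, c = M.shape
    size = max(r, c)
    N = 1 << (size - 1).bit_length()   # smallest power of two >= size
    out = np.zeros((N, N), dtype=M.dtype)
    out[:r, :c] = M
    return out
```

Padding with zero rows or columns only appends zero singular values, so the top-T singular triplets of the padded matrix coincide with those of the original matrix.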
Step 2, on the classical device 810, the compression degree T of the object to be compressed is set by the setting module 811, and a set of decreasing weights q = {q_1, …, q_T} is prepared, with q_1 > … > q_T > 0, where T is smaller than the rank of the original information matrix M and T is an integer; for example, T may be 4. The set compression degree T and the weight sequence q are transmitted to the quantum device 820.
Step 3, prepare the orthogonal quantum state vector group {|ψ_1⟩, …, |ψ_T⟩} on the quantum device 820. The group includes T quantum state vectors, every two of which are mutually orthogonal. Taking an 8 × 8 square matrix M as an example, the dimension of each quantum state vector is 8 × 1.
Step 4, on the quantum device 820, the first parameterized quantum circuit model U(α) 821 acts on |ψ_j⟩ to give |ψ_j′⟩, the j-th column vector of the read-out quantum state matrix of U(α); |ψ_1′⟩, |ψ_2′⟩, …, |ψ_T′⟩ may form a quantum state matrix, which may be referred to as the first quantum state matrix. The second parameterized quantum circuit model V(β) 822 acts on |ψ_j⟩ to give |ψ_j″⟩, the j-th column vector of the read-out quantum state matrix of V(β); |ψ_1″⟩, |ψ_2″⟩, …, |ψ_T″⟩ may form a quantum state matrix, which may be referred to as the second quantum state matrix.
Step 5, input |ψ_j′⟩, the original information matrix M, and |ψ_j″⟩ into the Hadamard test model 823, which operates as follows: the conjugate transpose of |ψ_j′⟩ is multiplied by the original information matrix M and then by |ψ_j″⟩, and the real part of the j-th product result is taken as the j-th estimated singular value. Finally, T estimated singular values are obtained, forming a singular value sequence that may be expressed as {m_1, …, m_T}.
Step 6, the estimated singular value sequence {m_1, …, m_T} is sent to the classical device 810, where the calculation module 812 computes the weighted sum with the prepared weights q = {q_1, …, q_T}, giving the accumulated loss function
L = q_1·m_1 + q_2·m_2 + … + q_T·m_T.
Step 7, on the classical device 810, the loss function L is optimized by the optimization module 813 using gradient descent or another optimization method, with the goal of maximizing L. The quantum neural network is iteratively trained on the quantum device 820 with loss maximization as the objective; when the loss function reaches its maximum, the optimal singular value sequence is obtained.
Step 8, when the loss function reaches its maximum, the last round of training is finished, and the parameters of the quantum neural network after the last round of training are the optimal parameters, recorded as α* and β*. In the last round of training, the first parameterized quantum circuit model U(α) 821 acts on |ψ_j⟩ and outputs |ψ_j′⟩; the first quantum state matrix these vectors form is recorded as U(α*). The second parameterized quantum circuit model V(β) 822 acts on |ψ_j⟩ and outputs |ψ_j″⟩; the conjugate transpose of the second quantum state matrix these vectors form is recorded as V(β*)†. Each column of U(α*) is a left singular vector of the original information matrix M, and each row of V(β*)† is a right singular vector of the original information matrix M.
Step 9, based on the matrix composed of the left singular vectors (i.e., U(α*)), the diagonal matrix D generated from the optimal singular value sequence, and the matrix composed of the right singular vectors (i.e., V(β*)†), the information matrix is recovered, yielding the compressed information matrix M′.
Step 10, for the classical data domain, the compressed information matrix M′ is the information matrix of the compressed data, and the compression task is completed. For the quantum state data domain, the compressed quantum state data can be prepared by calling a preset quantum state preparation program according to the compressed information matrix M′.
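Steps 1 through 9 can be sketched end to end with a purely classical simulation in which the parameterized circuits U(α) and V(β) are replaced by classically parameterized orthogonal matrices (via a Cayley transform) and the optimizer is a finite-difference gradient ascent. All function names, the Cayley parameterization, and the optimizer choice are assumptions for illustration, not the patent's implementation; the point is that maximizing L = Σ_j q_j·m_j with decreasing weights drives the columns toward singular vectors:

```python
import numpy as np

def cayley_orthogonal(theta, n):
    """Map parameters to an orthogonal matrix via the Cayley transform
    Q = (I - A)(I + A)^{-1}, A skew-symmetric (stand-in for U(alpha))."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, 1)] = theta
    A -= A.T
    I = np.eye(n)
    return (I - A) @ np.linalg.inv(I + A)

def loss_and_estimates(params, M, q):
    n, T = M.shape[0], len(q)
    k = n * (n - 1) // 2
    U = cayley_orthogonal(params[:k], n)   # columns play the role of |psi_j'>
    V = cayley_orthogonal(params[k:], n)   # columns play the role of |psi_j''>
    m = np.array([U[:, j] @ M @ V[:, j] for j in range(T)])   # step 5
    return float(q @ m), m                 # step 6: L = sum_j q_j * m_j

rng = np.random.default_rng(0)
n, T = 4, 2
M = rng.normal(size=(n, n))                # step 1: original information matrix
q = np.array([2.0, 1.0])                   # step 2: decreasing weights
params = rng.normal(scale=0.1, size=n * (n - 1))

lr, eps, prev = 0.05, 1e-6, -np.inf
for _ in range(5000):                      # step 7: maximize L iteratively
    L, m = loss_and_estimates(params, M, q)
    if abs(L - prev) < 1e-12:              # preset condition: loss converged
        break
    prev = L
    grad = np.zeros_like(params)
    for i in range(len(params)):           # finite-difference gradient
        p = params.copy()
        p[i] += eps
        grad[i] = (loss_and_estimates(p, M, q)[0] - L) / eps
    params += lr * grad                    # gradient ascent step

# variational bound: L is at most sum_j q_j * sigma_j for any parameters
sigma = np.linalg.svd(M, compute_uv=False)[:T]
```

After convergence, the estimates m should approach the top-T singular values of M, and the loss should approach the bound Σ_j q_j·σ_j, which holds throughout training by the variational principle the patent invokes.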
FIG. 9 is a block diagram of a data compression apparatus according to an embodiment of the present application.
As shown in fig. 9, the data compression apparatus 900 may include an acquisition module 910, an estimation module 920, and a calculation module 930.
The obtaining module 910 is configured to obtain a pre-prepared orthogonal quantum state vector set and an original information matrix of data to be compressed, where the orthogonal quantum state vector set includes multiple quantum state vectors orthogonal to each other.
The estimation module 920 is configured to estimate a sequence of singular values of the original information matrix based on the set of orthogonal quantum state vectors and the original information matrix using the quantum neural network, and to compute a loss of the quantum neural network based on the estimated sequence of singular values.
The calculating module 930 is configured to calculate a compression information matrix of the data to be compressed based on the quantum state matrix generated in the process of estimating the singular value sequence and the estimated singular value sequence, when the loss satisfies a preset condition.
According to an embodiment of the present application, the data compression apparatus 900 further includes a determining module and an adjusting module.
The determining module is configured to determine whether the loss of the quantum neural network satisfies a preset condition after the estimating module 920 calculates the loss of the quantum neural network, execute the calculating module 930 if the loss satisfies the preset condition, and execute the adjusting module if the loss does not satisfy the preset condition.
The adjusting module is used for adjusting the parameters of the quantum neural network according to the loss of the quantum neural network, and returning to the execution estimating module 920.
According to an embodiment of the application, the estimation module 920 is configured to obtain a first quantum state matrix based on the set of orthogonal quantum state vectors using a first parameterized quantum circuit model; obtaining a second quantum state matrix based on the orthogonal quantum state vector set by using a second parameterized quantum circuit model; based on the first quantum state matrix, the original information matrix and the second quantum state matrix, a singular value sequence of the original information matrix is estimated.
According to an embodiment of the present application, the estimation module 920 is configured to calculate a product of a conjugate transpose vector of each column vector of the first quantum state matrix, the original information matrix, and each column vector of the second quantum state matrix; determining a real part of each product as an estimated singular value; based on each estimated singular value, a sequence of estimated singular values is obtained.
According to an embodiment of the present application, the calculating module 930 is configured to determine a conjugate transpose matrix of the second quantum state matrix; generating a third quantum state matrix based on the estimated sequence of singular values; and calculating the product of the first quantum state matrix, the third quantum state matrix and the conjugate transpose matrix of the second quantum state matrix to be used as a compression information matrix of the data to be compressed.
According to an embodiment of the present application, the estimation module 920 is configured to calculate a weighted sum of singular values in the singular value sequence as a loss of the quantum neural network according to weights pre-assigned to the singular values in the singular value sequence.
According to the embodiment of the application, the weights pre-allocated to the singular values in the singular value sequence are sequentially decreased.
According to an embodiment of the present application, the data compression apparatus 900 further includes a preparation module.
The preparation module is used for preparing the compressed data of the quantum state data according to the compressed information matrix.
According to an embodiment of the present application, the quantum state vectors in the orthogonal quantum state vector group are N-dimensional vectors, and the original information matrix is an N × N matrix, where N = 2^n and n is an integer greater than or equal to 1.
According to an embodiment of the application, the number of quantum state vectors in the set of orthogonal quantum state vectors is less than the rank of the original information matrix.
Fig. 10 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 10, the electronic apparatus 1000 includes: one or more processors 1001, memory 1002, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 10 illustrates an example with one processor 1001.
The memory 1002 is a non-transitory computer readable storage medium provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the data compression methods provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the data compression method provided herein.
The memory 1002, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the data compression method in the embodiment of the present application (for example, the obtaining module 910, the estimating module 920, and the calculating module 930 shown in fig. 9). The processor 1001 executes various functional applications of the server and data processing, i.e., implements the data compression method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 1002.
The memory 1002 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device 1000 of the data compression method, and the like. Further, the memory 1002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1002 may optionally include memory located remotely from the processor 1001, which may be connected to the electronic device 1000 for data compression methods via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device 1000 of the data compression method may further include: an input device 1003 and an output device 1004. The processor 1001, the memory 1002, the input device 1003, and the output device 1004 may be connected by a bus or other means, and the bus connection is exemplified in fig. 10.
The input device 1003 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus 1000 of the data compression method; example input devices include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output devices 1004 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the quantum neural network is used for estimating singular values of an original information matrix based on an orthogonal quantum state vector group and the original information matrix, the loss of the quantum neural network is determined based on the estimated singular values, and under the condition that the loss meets a preset condition, the compressed information matrix is calculated based on the quantum state matrix and a singular value sequence generated in the process of estimating the singular values by using the quantum neural network. Because the quantum neural network outputs the optimal singular value sequence under the condition that the loss meets the preset condition, the compression information matrix is calculated based on the quantum state matrix generated by the quantum neural network meeting the preset condition in the process of outputting the optimal singular value sequence and the optimal singular value sequence, and the compression of quantum state data can be realized.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; the present application is not limited in this regard, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. A method of data compression, comprising:
acquiring a pre-prepared orthogonal quantum state vector set and an original information matrix of data to be compressed, wherein the orthogonal quantum state vector set comprises a plurality of quantum state vectors which are orthogonal to each other;
estimating, using a quantum neural network, a sequence of singular values of the original information matrix based on the set of orthogonal quantum state vectors and the original information matrix, and calculating a loss of the quantum neural network based on the estimated sequence of singular values;
and in a case where the loss satisfies a preset condition, calculating a compression information matrix of the data to be compressed based on a quantum state matrix generated in estimating the sequence of singular values and the estimated sequence of singular values.
2. The method of claim 1, further comprising: in the case where the loss does not satisfy the preset condition, adjusting parameters of the quantum neural network according to the loss of the quantum neural network, and returning to the step of estimating the sequence of singular values of the original information matrix based on the orthogonal quantum state vector group and the original information matrix using the quantum neural network.
3. The method of claim 1, wherein the quantum neural network comprises a first parameterized quantum circuit model and a second parameterized quantum circuit model;
the estimating, using a quantum neural network, a sequence of singular values of the original information matrix based on the set of orthogonal quantum state vectors and the original information matrix comprises:
obtaining a first quantum state matrix based on the set of orthogonal quantum state vectors using the first parameterized quantum circuit model;
obtaining a second quantum state matrix based on the set of orthogonal quantum state vectors using the second parameterized quantum circuit model;
estimating a sequence of singular values of the original information matrix based on the first quantum state matrix, the original information matrix, and the second quantum state matrix.
4. The method of claim 3, wherein the estimating the sequence of singular values of the original information matrix based on the first quantum state matrix, the original information matrix, and the second quantum state matrix comprises:
calculating, for each column index, a product of a conjugate transpose of the corresponding column vector of the first quantum state matrix, the original information matrix, and the corresponding column vector of the second quantum state matrix;
determining a real part of each product as an estimated singular value; and
obtaining the sequence of estimated singular values from the estimated singular values.
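The estimation step of claim 4 can be sketched as follows. QR factorizations of random matrices supply mutually orthogonal unit columns in place of the states prepared by the two parameterized quantum circuits; all dimensions and names here are hypothetical stand-ins.

```python
import numpy as np

# Sketch of claim 4: random orthonormal columns stand in for the outputs of the
# first and second parameterized quantum circuits.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))                       # original information matrix
T = 3

U_hat, _ = np.linalg.qr(rng.standard_normal((8, T)))  # first quantum state matrix
V_hat, _ = np.linalg.qr(rng.standard_normal((8, T)))  # second quantum state matrix

# For each column j, the real part of  u_j^dagger  M  v_j  is estimate j.
estimates = np.array([np.real(U_hat[:, j].conj() @ M @ V_hat[:, j])
                      for j in range(T)])
```

Because the columns are unit vectors, each estimate is bounded in magnitude by the largest true singular value of the matrix; training the circuits would push the estimates toward the true leading singular values.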
5. The method of claim 3, wherein the quantum state matrices comprise the first quantum state matrix and the second quantum state matrix, and the computing a compression information matrix for the data to be compressed based on the quantum state matrices generated in estimating the sequence of singular values and the estimated sequence of singular values comprises:
determining a conjugate transpose matrix of the second quantum state matrix;
generating a third quantum state matrix based on the estimated sequence of singular values;
and calculating a product of the first quantum state matrix, the third quantum state matrix, and the conjugate transpose matrix of the second quantum state matrix as the compression information matrix of the data to be compressed.
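The reconstruction of claim 5 can be sketched with hypothetical values: the third quantum state matrix is the diagonal matrix built from the estimated singular value sequence, and the compression information matrix is the three-factor product.

```python
import numpy as np

# Sketch of claim 5 (all values hypothetical): identity-like columns stand in
# for the first and second quantum state matrices.
d = np.array([2.5, 1.7, 0.4])             # estimated singular value sequence
U = np.eye(8, 3)                          # first quantum state matrix (stand-in)
V = np.eye(8, 3)                          # second quantum state matrix (stand-in)

D = np.diag(d)                            # third quantum state matrix
M_c = U @ D @ V.conj().T                  # compression information matrix
```

The product has the shape of the original information matrix but rank at most equal to the number of retained singular values, which is what makes it a compressed representation.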
6. The method of claim 1, wherein the computing the loss of the quantum neural network based on the estimated sequence of singular values comprises:
calculating a weighted sum of the singular values in the singular value sequence according to a weight pre-assigned to each singular value, wherein the weighted sum is used as the loss of the quantum neural network.
7. The method of claim 6, wherein the weights pre-assigned to the singular values in the sequence of singular values decrease sequentially.
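The loss of claims 6 and 7 can be sketched with hypothetical numbers: a weighted sum of the estimated singular values, with pre-assigned weights that decrease from one position in the sequence to the next.

```python
import numpy as np

# Sketch of claims 6-7 (all numbers hypothetical).
estimates = np.array([2.5, 1.7, 0.4])   # estimated singular value sequence
weights = np.array([3.0, 2.0, 1.0])     # pre-assigned, sequentially decreasing

assert np.all(np.diff(weights) < 0)     # claim 7: weights strictly decrease
loss = float(weights @ estimates)       # claim 6: weighted sum as the loss
```

In the related variational SVD literature, decreasing weights serve to order the estimates, biasing the first circuit outputs toward the largest singular values; the patent text itself only specifies the weighted sum and the decreasing weights.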
8. The method of claim 1, wherein the data to be compressed comprises quantum state data, the method further comprising:
preparing compressed data of the quantum state data according to the compression information matrix.
9. The method of any one of claims 1 to 8, wherein the quantum state vectors in the set of orthogonal quantum state vectors are N-dimensional vectors and the original information matrix is an N x N matrix, where N is 2^n and n is an integer greater than or equal to 1.
10. The method of any one of claims 1 to 8, wherein the number of quantum state vectors in the set of orthogonal quantum state vectors is less than the rank of the original information matrix.
11. A data compression apparatus comprising:
the device comprises an acquisition module, a compression module and a compression module, wherein the acquisition module is used for acquiring a pre-prepared orthogonal quantum state vector set and an original information matrix of data to be compressed, and the orthogonal quantum state vector set comprises a plurality of quantum state vectors which are orthogonal to each other;
an estimation module to estimate a sequence of singular values of the original information matrix based on the set of orthogonal quantum state vectors and the original information matrix using a quantum neural network, and to compute a loss of the quantum neural network based on the estimated sequence of singular values;
and a calculation module to calculate, in a case where the loss satisfies a preset condition, a compression information matrix of the data to be compressed based on the quantum state matrices generated in estimating the sequence of singular values and the estimated sequence of singular values.
12. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 10.
13. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 10.
CN202011316785.5A 2020-11-20 2020-11-20 Data compression method and device, electronic equipment and storage medium Pending CN112381233A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011316785.5A CN112381233A (en) 2020-11-20 2020-11-20 Data compression method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112381233A true CN112381233A (en) 2021-02-19

Family

ID=74587272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011316785.5A Pending CN112381233A (en) 2020-11-20 2020-11-20 Data compression method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112381233A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379059A (en) * 2021-06-10 2021-09-10 北京百度网讯科技有限公司 Model training method for quantum data classification and quantum data classification method
CN113422609A (en) * 2021-06-02 2021-09-21 出门问问信息科技有限公司 Data compression method and device
CN113973090A (en) * 2021-10-18 2022-01-25 北谷电子有限公司 Apparatus and method for processing big data in communication network
CN113988303A (en) * 2021-10-21 2022-01-28 北京量子信息科学研究院 Quantum recommendation method, device and system based on parallel quantum intrinsic solver
CN115632660A (en) * 2022-12-22 2023-01-20 山东海量信息技术研究院 Data compression method, device, equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104078047A (en) * 2014-06-21 2014-10-01 西安邮电大学 Quantum compression method based on voice multiband excitation coding LSP parameter
CN109472364A (en) * 2018-10-17 2019-03-15 合肥本源量子计算科技有限责任公司 Processing method and processing device, storage medium and the electronic device of quantum program
US10426380B2 (en) * 2012-05-30 2019-10-01 Resmed Sensor Technologies Limited Method and apparatus for monitoring cardio-pulmonary health
CN110692067A (en) * 2017-06-02 2020-01-14 谷歌有限责任公司 Quantum neural network
WO2020047823A1 (en) * 2018-09-07 2020-03-12 Intel Corporation Convolution over sparse and quantization neural networks
CN111492427A (en) * 2017-12-21 2020-08-04 高通股份有限公司 Priority information for higher order ambisonic audio data
CN111563186A (en) * 2020-04-30 2020-08-21 北京百度网讯科技有限公司 Quantum data storage method, quantum data reading method, quantum data storage device, quantum data reading device and computing equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIN WANG等: "Variational Quantum Singular Value Decomposition", 《ARXIV》 *


Similar Documents

Publication Publication Date Title
CN112381233A (en) Data compression method and device, electronic equipment and storage medium
CN111563186B (en) Quantum data storage method, quantum data reading method, quantum data storage device, quantum data reading device and computing equipment
CN110738321B (en) Quantum signal processing method and device
CN111182254B (en) Video processing method, device, equipment and storage medium
Cesa-Bianchi et al. Worst-Case Analysis of Selective Sampling for Linear Classification.
Ding et al. An efficient algorithm for generalized linear bandit: Online stochastic gradient descent and thompson sampling
CN111598247B (en) Quantum Gibbs state generation method and device and electronic equipment
Wang et al. Large-scale affine matrix rank minimization with a novel nonconvex regularizer
CN114374440B (en) Quantum channel classical capacity estimation method and device, electronic equipment and medium
CN112148975A (en) Session recommendation method, device and equipment
CN113255922B (en) Quantum entanglement quantization method and device, electronic device and computer readable medium
WO2023174036A1 (en) Federated learning model training method, electronic device and storage medium
CN114239840A (en) Quantum channel noise coefficient estimation method and device, electronic device and medium
CN112288483A (en) Method and device for training model and method and device for generating information
CN115345309A (en) Method and device for determining system characteristic information, electronic equipment and medium
CN112529058A (en) Image generation model training method and device and image generation method and device
WO2020224150A1 (en) System and method for quantum circuit simulation
CN111311000B (en) User consumption behavior prediction model training method, device, equipment and storage medium
CN111709514A (en) Processing method and device of neural network model
CN115118559A (en) Sparse channel estimation method, device, equipment and readable storage medium
WO2021223747A1 (en) Video processing method and apparatus, electronic device, storage medium, and program product
Jiang et al. Quantum image sharpness estimation based on the Laplacian operator
Lin et al. Generalized non-convex non-smooth sparse and low rank minimization using proximal average
Han et al. Non-negativity and dependence constrained sparse coding for image classification
Wang et al. Linear programming using diagonal linear networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination