CN114358317B - Data classification method based on machine learning framework and related equipment - Google Patents


Info

Publication number: CN114358317B (granted publication of application CN114358317A)
Application number: CN202210282555.4A
Authority: CN (China)
Prior art keywords: quantum, logic gate, sub, parameter, data
Legal status: Active
Original language: Chinese (zh)
Inventors: 方圆 (Fang Yuan), 王汉超 (Wang Hanchao), 李蕾 (Li Lei)
Assignee: Origin Quantum Computing Technology Co Ltd
Application filed by Origin Quantum Computing Technology Co Ltd, with priority to CN202210282555.4A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a data classification method based on a machine learning framework, together with related equipment. The machine learning framework comprises a quantum module, and the method comprises the following steps: calling the quantum module to create a classification model comprising a quantum program, wherein the quantum program comprises at least two computing units, each computing unit comprises a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate is used for encoding input data into the quantum state of a qubit, and the second parameter-containing sub-logic gate is used for evolving the quantum state of the qubit to a target state; and inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result. This technical solution improves the classification accuracy of the data to be classified.

Description

Data classification method based on machine learning framework and related equipment
Technical Field
The invention belongs to the technical field of quantum computing, and particularly relates to a data classification method based on a machine learning framework and related equipment.
Background
Machine learning models are widely applied in artificial intelligence research because of their excellent performance. By training a machine learning model with labeled training data, a model that meets expectations can be obtained and then applied to specific tasks such as speech recognition and image recognition. A machine learning model does not require manually defined standards for a specific task; the corresponding working standard is established by training the model, so the model adapts well to different tasks. With the development of quantum computing, more and more machine learning models containing quantum programs have been proposed.
In the related art, improving the classification accuracy of data to be classified by using a machine learning model is of great importance.
Disclosure of Invention
The invention aims to provide a data classification method based on a machine learning framework and related equipment, aiming at improving the classification accuracy of data to be classified.
In order to achieve the above object, in a first aspect of embodiments of the present invention, there is provided a data classification method based on a machine learning framework, where the machine learning framework includes a quantum module, the method includes:
calling the quantum module to create a classification model comprising a quantum program, wherein the quantum program comprises at least two computing units, each computing unit comprises a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate is used for encoding input data to a quantum state of a qubit, and the second parameter-containing sub-logic gate is used for evolving the quantum state of the qubit to a target state;
and inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result.
Optionally, the invoking the quantum module to create a classification model including a quantum program includes:
calling the quantum module to create the first parameter-containing sub-logic gate, calling the quantum module to create the second parameter-containing sub-logic gate, and cascading the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate to obtain a computing unit;
creating a quantum program comprising at least two of the computational units;
and creating a classification model encapsulated with the quantum program.
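The three creation steps above can be sketched in a few lines of Python. The class and function names here are hypothetical illustrations of the described structure, not the framework's actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ParamGate:
    kind: str      # "RX", "RY" or "RZ"
    qubit: int
    param: float   # input-data angle (first gate) or trainable angle (second gate)

@dataclass
class ComputingUnit:
    first: ParamGate   # encodes input data into the qubit's quantum state
    second: ParamGate  # evolves the quantum state toward the target state

def create_computing_unit(x: float, theta: float, qubit: int = 0) -> ComputingUnit:
    # Step S211: create the two parameter-containing sub-logic gates and cascade them.
    return ComputingUnit(ParamGate("RX", qubit, x), ParamGate("RY", qubit, theta))

def create_quantum_program(units: List[ComputingUnit]) -> List[ParamGate]:
    # Step S212: a quantum program comprises at least two computing units,
    # arranged in a preset order on the same qubit(s).
    assert len(units) >= 2
    program: List[ParamGate] = []
    for u in units:
        program.extend([u.first, u.second])
    return program

# Step S213 would then encapsulate `program` into a classification model.
units = [create_computing_unit(0.5, 0.1), create_computing_unit(0.5, 0.2)]
program = create_quantum_program(units)
print(len(program))  # 4 gates: (RX, RY) for each of the 2 units
```

Note that the same input datum 0.5 is the parameter of every unit's first gate, while each second gate carries its own trainable parameter.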
Optionally, the quantum module comprises:
an angle encoding unit configured to create a parametric sub-logic gate;
the calling the quantum module to create the first parameter-containing sub-logic gate comprises:
calling the angle coding unit to create the first parameter-containing sub-logic gate which takes the input data as self parameters.
Optionally, the first parameter-containing sub-logic gate comprises at least one of an RX gate, an RY gate, and an RZ gate.
Optionally, the quantum module comprises:
an angle encoding unit configured to create a parametric sub-logic gate;
the calling the quantum module to create the second argument-containing sub-logic gate comprises:
and calling the angle coding unit to create the second parameter-containing sub-logic gate for updating the parameters of the angle coding unit when the classification model is trained.
Optionally, the second parameter-containing sub-logic gate comprises at least one of an RX gate, an RY gate, and an RZ gate.
Optionally, the creating a quantum program comprising at least two of the computational units comprises:
and arranging at least two computing units according to a preset sequence to obtain a quantum program acting on one or more quantum bits.
Optionally, the creating a quantum program comprising at least two of the computational units comprises:
obtaining a multi-bit quantum logic gate;
and arranging the multi-bit quantum logic gate and at least two computing units according to a preset sequence to obtain a quantum program acting on a plurality of quantum bits.
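As an illustration of this arrangement, the following sketch lays out two computing units and a multi-bit CNOT gate in a preset order. The tuple layout and the choice of CNOT as the multi-bit gate are assumptions for illustration, not taken from the patent:

```python
# Hypothetical op layout: (gate name, qubits acted on, parameter or None).
n_qubits = 2
x = 0.7              # the input datum, encoded by each unit's first gate
thetas = [0.1, 0.2]  # trainable parameters of the second gates

ops = []
for q in range(n_qubits):
    ops.append(("RX", (q,), x))          # first parameter-containing sub-logic gate
    ops.append(("RY", (q,), thetas[q]))  # second parameter-containing sub-logic gate
ops.append(("CNOT", (0, 1), None))       # multi-bit gate acting across both qubits

print([name for name, _, _ in ops])  # ['RX', 'RY', 'RX', 'RY', 'CNOT']
```

The resulting list is the "preset sequence": executing its entries in order yields a quantum program acting on a plurality of qubits.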
Optionally, the machine learning framework further comprises a classical module, the method further comprising:
calling the classical module to create a training layer of the classification model and acquiring training data;
inputting the training data into the classification model to obtain output data;
and inputting the output data into the training layer to update the parameters of the second parameter-containing sub-logic gate to obtain the trained classification model.
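The training loop described above can be simulated on a single qubit with plain numpy. This is a sketch, not the framework's actual training layer: one computing unit (RX encodes the input, RY evolves the state), a Pauli-Z expectation as the output, a squared-error loss, and a finite-difference gradient step that updates only the second gate's parameter theta:

```python
import numpy as np

def rx(a):  # RX rotation gate as a unitary matrix
    return np.array([[np.cos(a/2), -1j*np.sin(a/2)],
                     [-1j*np.sin(a/2), np.cos(a/2)]])

def ry(a):  # RY rotation gate as a unitary matrix
    return np.array([[np.cos(a/2), -np.sin(a/2)],
                     [np.sin(a/2),  np.cos(a/2)]])

def forward(x, theta):
    # One computing unit: RX(x) encodes the input into |0>, RY(theta) evolves it.
    state = ry(theta) @ rx(x) @ np.array([1.0, 0.0])
    return float(np.abs(state[0])**2 - np.abs(state[1])**2)  # <Z> expectation

def loss(theta, x, label):
    return (forward(x, theta) - label) ** 2

x, label = 0.4, -1.0        # one training sample and its target expectation
theta, lr, eps = 0.3, 0.5, 1e-6
loss_before = loss(theta, x, label)
for _ in range(50):
    # Central finite difference stands in for the training layer's gradient.
    grad = (loss(theta + eps, x, label) - loss(theta - eps, x, label)) / (2 * eps)
    theta -= lr * grad       # only the second gate's parameter is updated
loss_after = loss(theta, x, label)
print(loss_before, loss_after)
```

The input-encoding parameter x is never modified; training moves only theta, exactly as the update step for the second parameter-containing sub-logic gate requires.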
In a second aspect of the embodiments of the present invention, there is provided a data classification apparatus based on a machine learning framework, where the machine learning framework includes a quantum module, and the apparatus includes:
the first creating module is used for calling the quantum module to create a classification model comprising a quantum program, the quantum program comprises at least two computing units, each computing unit comprises a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate is used for encoding input data to a quantum state of a quantum bit, and the second parameter-containing sub-logic gate is used for evolving the quantum state of the quantum bit to a target state;
and the first input module is used for inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result.
Optionally, the first creating module is further configured to:
calling the quantum module to create the first parameter-containing sub-logic gate, calling the quantum module to create the second parameter-containing sub-logic gate, and cascading the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate to obtain a computing unit;
creating a quantum program comprising at least two of the computational units;
and creating a classification model encapsulated with the quantum program.
Optionally, the quantum module comprises:
an angle encoding unit configured to create a parametric sub-logic gate;
the first creation module is further to:
calling the angle coding unit to create the first parameter-containing sub-logic gate which takes the input data as self parameters.
Optionally, the first parameter-containing sub-logic gate comprises at least one of an RX gate, an RY gate, and an RZ gate.
Optionally, the quantum module comprises:
an angle encoding unit configured to create a parametric sub-logic gate;
the first creating module is further configured to:
and calling the angle coding unit to create the second parameter-containing sub-logic gate for updating the parameters of the angle coding unit when the classification model is trained.
Optionally, the second parameter-containing sub-logic gate comprises at least one of an RX gate, an RY gate, and an RZ gate.
Optionally, the first creating module is further configured to:
and arranging at least two computing units according to a preset sequence to obtain a quantum program acting on one or more quantum bits.
Optionally, the first creating module is further configured to:
obtaining a multi-bit quantum logic gate;
and arranging the multi-bit quantum logic gate and at least two computing units according to a preset sequence to obtain a quantum program acting on a plurality of quantum bits.
Optionally, the machine learning framework further comprises a classical module, the apparatus further comprising:
the second creating module is used for calling the classical module to create a training layer of the classification model and acquiring training data;
the second input module is used for inputting the training data into the classification model to obtain output data;
and the updating module is used for inputting the output data into the training layer so as to update the parameters of the second parameter-containing sub-logic gate to obtain the trained classification model.
In a third aspect of embodiments of the present invention, a storage medium is provided, in which a computer program is stored, where the computer program is configured to perform the steps of the method according to any one of the above first aspects when the computer program runs.
In a fourth aspect of embodiments of the present invention, there is provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the steps of the method according to any one of the first aspect.
Based on the above technical solution, a classification model comprising at least two computing units is created by calling the quantum module, and the same data to be classified is input as input data into the first parameter-containing sub-logic gate of each computing unit. When the classification model runs, the same data to be classified therefore undergoes multiple rounds of processing by the quantum program, which improves the classification accuracy of the data to be classified.
Drawings
Fig. 1 is a block diagram illustrating a hardware structure of a computer terminal of a data classification method based on a machine learning framework according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method for machine learning framework-based data classification in accordance with an exemplary embodiment.
Fig. 3 is a flowchart illustrating a method of data classification based on a machine learning framework including step S21 according to an exemplary embodiment.
FIG. 4 is a quantum wire diagram illustrating a quantum procedure in a classification model according to an exemplary embodiment.
Fig. 5 is another quantum wire diagram illustrating a quantum process in a classification model according to an exemplary embodiment.
Fig. 6 is another quantum wire diagram illustrating a quantum process in a classification model according to an exemplary embodiment.
Fig. 7 is a flowchart illustrating a data classification method based on a machine learning framework according to an exemplary embodiment, including step S212.
Fig. 8 is a flowchart illustrating a data classification method based on a machine learning framework according to an exemplary embodiment, including step S22.
FIG. 9 is another flow diagram illustrating a method for machine learning framework-based data classification in accordance with an exemplary embodiment.
Fig. 10 is another quantum wire diagram illustrating a quantum process in a classification model according to an exemplary embodiment.
FIG. 11 is a classification accuracy graph of a classification model shown in accordance with an exemplary embodiment.
Fig. 12 is a block diagram illustrating a data classification apparatus based on a machine learning framework according to an example embodiment.
Detailed Description
The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
The embodiments of the invention first provide a data classification method based on a machine learning framework. The method can be applied to electronic equipment such as a computer terminal, specifically an ordinary computer, a quantum computer, and the like.
This will be described in detail below, taking execution on a computer terminal as an example. Fig. 1 is a block diagram illustrating the hardware structure of a computer terminal for a data classification method based on a machine learning framework according to an exemplary embodiment. As shown in fig. 1, the computer terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing the machine learning framework-based data classification method, and optionally may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the computer terminal. For example, the computer terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the data classification method based on the machine learning framework in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 can further include memory located remotely from the processor 102, which can be connected to a computer terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
It should be noted that a true quantum computer is a hybrid structure comprising two major components: one part is a classical computer, which is responsible for classical computation and control; the other part is the quantum device, which is responsible for running quantum programs to realize quantum computation. A quantum program is a sequence of instructions, written in a quantum language such as the Qrun language, that can run on a quantum computer; it provides support for quantum logic gate operations and ultimately realizes quantum computation. In particular, a quantum program is a sequence of instructions that operate quantum logic gates in a time sequence.
In practical applications, due to the limited development of quantum device hardware, quantum computation simulation is usually required to verify quantum algorithms, quantum applications, and the like. The quantum computing simulation is a process of realizing the simulation operation of a quantum program corresponding to a specific problem by means of a virtual architecture (namely a quantum virtual machine) built by resources of a common computer. In general, it is necessary to build quantum programs for a particular problem. The quantum program referred in the embodiment of the invention is a program written in a classical language for representing quantum bits and evolution thereof, wherein the quantum bits, quantum logic gates and the like related to quantum computation are all represented by corresponding classical codes.
A quantum circuit, an embodiment of a quantum program and also called a quantum logic circuit, is the most common general quantum computation model. It represents a circuit that operates on qubits under an abstract concept; the circuit includes the qubits, the wires (timelines), and various quantum logic gates, and the final result usually needs to be read out through a quantum measurement operation.
Unlike conventional circuits, which are connected by metal wires that carry voltage or current signals, the wires in a quantum circuit can be viewed as connected by time: the state of a qubit evolves naturally over time, following the instructions of the Hamiltonian, until it encounters a logic gate and is operated upon.
The quantum programs in this application correspond to the total quantum circuit, where the total number of qubits in the total quantum circuit is the same as the total number of qubits of the quantum program. It can be understood that a quantum program may consist of the quantum circuit, measurement operations on the qubits in the quantum circuit, registers to hold the measurement results, and control-flow nodes (jump instructions), and a quantum circuit may contain tens, hundreds, or even thousands of quantum logic gate operations. The execution of a quantum program is the process of executing all of its quantum logic gates in a certain time sequence. It should be noted that the timing is the order in which the individual quantum logic gates are executed.
It should be noted that in classical computation the most basic unit is the bit, and the most basic control element is the logic gate; the purpose of a control circuit is achieved through combinations of logic gates. Similarly, qubits are manipulated by quantum logic gates. Quantum states can be evolved using quantum logic gates, which are the basis of quantum circuits. Quantum logic gates include single-bit quantum logic gates, such as the Hadamard gate (H gate), the Pauli-X gate (X gate), the Pauli-Y gate (Y gate), the Pauli-Z gate (Z gate), the RX gate (RX rotation gate), the RY gate (RY rotation gate), and the RZ gate (RZ rotation gate); and multi-bit quantum logic gates, such as the CNOT gate, the CR gate, the iSWAP gate, and the Toffoli gate. Quantum logic gates are typically represented by unitary matrices, which are not only a matrix form but also an operation and transformation. The action of a quantum logic gate on a quantum state is computed by multiplying the unitary matrix by the vector corresponding to the quantum state right vector. For example, the quantum state right vector |0> corresponds to the column vector [1, 0]ᵀ, and the quantum state right vector |1> corresponds to the column vector [0, 1]ᵀ.
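This matrix-vector picture can be checked in a few lines of plain numpy (an illustration independent of the patent's framework):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])      # |0> as a column vector
ket1 = np.array([0.0, 1.0])      # |1> as a column vector

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli-X gate as a unitary matrix

# Applying a gate = multiplying its unitary matrix by the state vector.
assert np.allclose(X @ ket0, ket1)       # X flips |0> to |1>
assert np.allclose(X @ ket1, ket0)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
superpos = H @ ket0                       # equal superposition of |0> and |1>
print(np.abs(superpos) ** 2)              # measurement probabilities

# Unitarity: U^dagger U = I, so the evolution preserves the state's norm.
assert np.allclose(H.conj().T @ H, np.eye(2))
```

The printed probabilities are 0.5 for each basis state, the expected result of a Hadamard gate on |0>.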
Fig. 2 is a flowchart illustrating a method of data classification based on a machine learning framework including quantum modules, as shown in fig. 2, according to an example embodiment, the method including:
S21: calling the quantum module to create a classification model comprising a quantum program, wherein the quantum program comprises at least two computing units, each computing unit comprises a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate is used for encoding input data into the quantum state of a qubit, and the second parameter-containing sub-logic gate is used for evolving the quantum state of the qubit to a target state.
S22: inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result.
Specifically, the machine learning framework integrates a number of function sets for creating and training machine learning models, and the functions in these sets can be conveniently called through defined interfaces to realize the relevant operations on a machine learning model. The quantum module included in the machine learning framework can be configured to create the quantum computation layer of a machine learning model. The quantum computation layer is a program module containing a quantum program and can be used to realize the quantum computation corresponding to that quantum program; it is obtained by encapsulating the quantum program according to a certain standard, which makes it convenient to use when creating and training the machine learning model. The quantum program is a program for realizing quantum computation: the quantum module can be called to create quantum logic gates acting on qubits in a specific order to obtain the quantum program, and the quantum program is then encapsulated to obtain the quantum computation layer.
In one possible implementation, the machine learning framework may include:
a data structure module configured to create tensor data for input to a machine learning model and perform an operation on the tensor data;
a quantum module configured to create a quantum computing layer for creating a machine learning model;
a classical module configured to create a classical computation layer for creating a machine learning model, an abstract class layer for encapsulating the quantum computation layer and the classical computation layer, a training layer for training to optimize the machine learning model.
Specifically, the data structure module defines the data structure of tensor data, and input data can be converted into tensor data by calling the data structure module, for input to the machine learning model for forward computation. Of course, the data structure module may also be configured to perform operations on tensor data; for example, it may define mathematical and logical operations between tensor data, and it can then be called to create a classical computation layer of the machine learning model based on the operational relationships between tensor data. For example, a fully connected layer of a classical neural network defines the relationship between its input data x and its output data y by the function y = w * x + b, where w and b are parameters; the fully connected layer can be constructed by converting the data x, parameter w, and parameter b into tensor data and calling the data structure module to perform the operations corresponding to this function on the tensor data.
In one possible implementation, the data structure module may be configured to arrange the input data in a preset data structure to create tensor data for input to the machine learning model, and create tensor data for input to the machine learning model that is arranged in the preset data structure and numerically determined. For example, new tensor data may be created by arranging the acquired data 1,2,3 in a preset list data structure [1,2,3], or tensor data in which all the elements arranged in the preset list data structure [1,1,1] are 1 may be created directly.
In addition to the data values arranged in the preset data structure, tensor data may include information about the other tensor data from which its data values are computed, and the gradient functions of the tensor data with respect to those other tensor data. The information about the other tensor data may include their variables, data value storage addresses, data values, and so on, as long as it indicates that the nodes corresponding to the other tensor data are predecessor nodes of the node corresponding to the computed tensor data. Taking the functional relationship y = w * x + b as an example: the tensor data y includes the data values corresponding to y, such as [1, 2, 3]; it also includes information about the tensor data w, x, and b, and the gradient functions of y with respect to each of w, x, and b. In one possible implementation, that information may include the data value storage addresses of w, x, and b, and the tensor data y includes the gradient function of y with respect to w (namely x), the gradient function of y with respect to x (namely w), and the gradient function of y with respect to b (namely 1). Further, when training the machine learning model, this information and these gradient functions can be obtained directly from the tensor data y, and the gradient values of y with respect to w, x, and b can be calculated from the corresponding data values and gradient functions.
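The record-keeping described above, where each tensor stores its predecessor tensors and the gradient functions toward them, can be sketched with a minimal (hypothetical, scalar-valued) Tensor class; this is an illustration of the idea, not the patent's actual data structure:

```python
class Tensor:
    """Scalar tensor that records predecessor nodes and gradient functions."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (predecessor tensor, gradient function)
        self.grad = 0.0

    def __mul__(self, other):
        # d(self*other)/d(self) = other.value, d(self*other)/d(other) = self.value
        return Tensor(self.value * other.value,
                      [(self, lambda: other.value), (other, lambda: self.value)])

    def __add__(self, other):
        # d(self+other)/d(self) = d(self+other)/d(other) = 1
        return Tensor(self.value + other.value,
                      [(self, lambda: 1.0), (other, lambda: 1.0)])

    def backward(self, upstream=1.0):
        # Chain rule: push the upstream gradient to every predecessor node.
        self.grad += upstream
        for parent, grad_fn in self.parents:
            parent.backward(upstream * grad_fn())

w, x, b = Tensor(2.0), Tensor(3.0), Tensor(1.0)
y = w * x + b        # y stores its value and its links back to w, x, b
y.backward()
print(y.value, w.grad, x.grad, b.grad)
```

Here y.value is 7.0, and the gradients recovered from the stored gradient functions are exactly those named in the text: dy/dw = x = 3, dy/dx = w = 2, dy/db = 1.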
Specifically, a classical computation layer of the machine learning model can be created by calling the classical module. The classical computation layer is the classical computation part of the machine learning model and can be obtained by encapsulating a created classical computation program, through the classical module, according to a certain standard, which makes it convenient to use when training the machine learning model. After the quantum computation layer and the classical computation layer are created, they can be encapsulated by the classical module to create an abstract class layer that conforms to a certain standard. The abstract class layer is realized through the class mechanism of a programming language, and a machine learning model conforming to a certain standard can be created by encapsulating the quantum computation layer and the classical computation layer; for example, the created abstract class layer defines the way the machine learning model is run forward. Of course, only the quantum computation layer or only the classical computation layer may be encapsulated. This makes it convenient, when training the machine learning model, to run the model forward to obtain the computation result used for the loss function, and it also yields the order of gradient computation when the machine learning model is computed in reverse. The classical module may also be used to create the training layer of the machine learning model, to train the machine learning model.
In step S21, the classification model is a machine learning model; that is, its own parameter values can be updated through training to optimize its performance on the processing task. The classification model includes a quantum program, meaning that part or all of the classification model is realized through quantum computation. Referring to the description of the machine learning framework above, the quantum module may be called to create quantum logic gates acting on qubits in a specific order to obtain the quantum program; the quantum program is encapsulated to obtain the quantum computation layer, and the quantum computation layer is then encapsulated to obtain the classification model.
Different from other classification models, the classification model comprises at least two computing units, each computing unit comprises at least one first parameter-containing sub-logic gate and at least one second parameter-containing sub-logic gate, and the two are cascaded; that is, the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate act on the same qubit sequentially, in a preset order. In other embodiments, each computing unit may also include, for example, 2 first parameter-containing sub-logic gates and 3 second parameter-containing sub-logic gates; when such a computing unit acts on a single qubit, the 2 first parameter-containing sub-logic gates act on the qubit first, and the remaining 3 second parameter-containing sub-logic gates act on it afterwards. Of course, the computing unit may also act on a plurality of qubits; the invention is not particularly limited in this respect.
The first parameter-containing sub-logic gate can encode input data into the quantum state of a qubit; for example, the input data may be used as the parameter of the first parameter-containing sub-logic gate. When the first parameter-containing sub-logic gate acts on the qubit, the qubit evolves from its initial state to a specific quantum state; when the second parameter-containing sub-logic gate then acts on the qubit, the quantum state of the qubit evolves from that specific quantum state to the target state. The target state is a quantum state containing the classification result information, or intermediate information used to obtain the classification result information, and the classification result can be extracted by processing the target state. In general, the parameters of the second parameter-containing sub-logic gate are determined by training.
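As a sketch of the encoding step, the following assumes the first parameter-containing sub-logic gate is an RX gate whose rotation angle is the input datum (a plain numpy simulation, not the framework's API). Different input values evolve |0> into different quantum states:

```python
import numpy as np

def rx(a):
    # RX rotation gate: here playing the role of the first
    # parameter-containing sub-logic gate, with the input datum as its angle.
    return np.array([[np.cos(a/2), -1j*np.sin(a/2)],
                     [-1j*np.sin(a/2), np.cos(a/2)]])

ket0 = np.array([1.0, 0.0])  # the qubit's initial state |0>

probs = []
for x in (0.0, np.pi/2, np.pi):
    state = rx(x) @ ket0
    probs.append(round(abs(state[1]) ** 2, 3))  # P(measure |1>) = sin^2(x/2)

print(probs)  # [0.0, 0.5, 1.0]
```

Since the probability of measuring |1> is sin²(x/2), distinct inputs are mapped to distinct, distinguishable quantum states, which is what makes the angle encoding usable for classification.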
Optionally, referring to fig. 3, in step S21, invoking the quantum module to create a classification model including a quantum program includes:
S211, calling the quantum module to create the first parameter-containing sub-logic gate, calling the quantum module to create the second parameter-containing sub-logic gate, and cascading the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate to obtain a computing unit.
S212, creating a quantum program comprising at least two computing units.
And S213, creating a classification model encapsulated with the quantum program.
In step S211, the quantum module may be called through a corresponding interface to create a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate, and the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate are cascaded in the foregoing manner to obtain a computing unit.
Optionally, the quantum module may include:
an angle encoding unit configured to create a parametric sub-logic gate.
Specifically, the angle encoding unit can be called through its corresponding interface, and a parameter value is input to create the parameter-containing sub-logic gate. When the parameter-containing sub-logic gate is created, the value of the parameter of the unitary matrix corresponding to that gate is set to the input parameter value, so that the corresponding parameter-containing sub-logic gate is determined.
Optionally, in step S211, invoking the quantum module to create the first argument-containing sub-logic gate includes:
calling the angle encoding unit to create the first parameter-containing sub-logic gate, which takes the input data as its own parameter.
Specifically, the angle encoding unit may be called via a corresponding interface to create a first parameter-containing sub-logic gate, and the input data may be used as the parameter of that gate. Optionally, the first parameter-containing sub-logic gate comprises at least one of an RX gate, a RY gate, and an RZ gate. For example, a first parameter-containing sub-logic gate may be created via the interface RX(qubits[0], np.pi/3); this gate is an RX gate acting on the qubit qubits[0] corresponding to number 0, and its parameter is pi/3.
Optionally, in step S211, invoking the quantum module to create the second argument-containing sub-logic gate includes:
and calling the angle encoding unit to create the second parameter-containing sub-logic gate, whose parameters are updated when the classification model is trained.
Specifically, the angle encoding unit can be called through a corresponding interface to create a second parameter-containing sub-logic gate; the initial value of its parameter is given as a random value or a specific value, and the parameter can be continuously optimized during training of the classification model until the requirement is met. Optionally, the second parameter-containing sub-logic gate comprises at least one of an RX gate, a RY gate, and an RZ gate. For example, a second parameter-containing sub-logic gate can be created via the interface RY(qubits[1], np.pi/2); this gate is a RY gate acting on the qubit qubits[1] corresponding to number 1, and its parameter is pi/2. Of course, in other possible embodiments, the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate may also be other parameter-containing quantum logic gates, and the invention is not specifically limited in this respect.
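A single computing unit can be sketched as the matrix product of the two cascaded gates. This is our illustrative assumption of one possible unit (RX for the data-encoding first gate, RY for the trainable second gate), not a definitive implementation:

```python
import numpy as np

# One computing unit: a first parameter-containing gate RX (parameter =
# input data x) cascaded with a second parameter-containing gate RY
# (parameter = trainable value w) on the same qubit. Gate choice is ours.
def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def computing_unit(x, w):
    # cascade: the data-encoding gate acts first, the trainable gate second
    return ry(w) @ rx(x)

U = computing_unit(np.pi / 3, 0.7)
```

Because both factor gates are unitary, the computing unit as a whole is unitary, which preserves the qubit's normalization between units.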
After the computing units are obtained, the method proceeds to step S212, where at least two computing units may be arranged according to a preset sequence to create the corresponding quantum program. It should be noted that different computing units may be the same or different, as long as each includes the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate, and the invention is not limited in this respect.
Optionally, in step S212, creating a quantum program including at least two of the computing units includes:
and arranging at least two computing units according to a preset sequence to obtain a quantum program acting on one or more quantum bits.
Specifically, at least two of the computing units may be arranged in a preset order to obtain a quantum program acting on a single qubit, or a quantum program acting on multiple qubits, where multiple qubits refers to 2 or more qubits. For example, referring to FIG. 4, N computing units (denoted by the unit symbols shown in FIG. 4) are arranged in the order shown by the quantum wire in FIG. 4, and the quantum program corresponding to that quantum wire, acting on a single qubit, is obtained. Referring to FIG. 5, 2N computing units (denoted by the unit symbols shown in FIG. 5) are arranged in the order shown in FIG. 5, and the quantum program corresponding to that quantum wire, acting on 2 qubits, is obtained.
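The single-qubit arrangement can be sketched as repeated application of computing units to one qubit state. The RX/RY unit structure below is our illustrative assumption:

```python
import numpy as np

# Sketch of step S212 for a single qubit: N computing units are arranged in
# a preset order; the quantum program applies them in sequence to |0>.
def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def quantum_program(x, weights):
    # each computing unit receives the same input data x and its own weight
    state = np.array([1, 0], dtype=complex)
    for w in weights:
        state = ry(w) @ rx(x) @ state       # one computing unit per weight
    return state

out = quantum_program(np.pi / 3, [0.1, 0.2, 0.3, 0.4])   # N = 4 units
```

Repeating the data-encoding gate in every unit is what lets the same input act on the state multiple times, as the method describes.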
Alternatively, referring to fig. 7, in step S212, the creating a quantum program including at least two of the computing units includes:
and S2121, acquiring the multi-bit quantum logic gate.
And S2122, arranging the multi-bit quantum logic gate and the at least two computing units according to a preset sequence to obtain a quantum program acting on a plurality of quantum bits.
Specifically, in step S2121, a multi-bit quantum logic gate is a quantum logic gate that acts on 2 or more qubits simultaneously, such as a CNOT gate, a CR gate, an iSWAP gate, or a CZ gate (controlled-Z gate). The corresponding multi-bit quantum logic gate can be called through the corresponding interface. Then, in step S2122, the multi-bit quantum logic gate and the at least two computing units are arranged according to a preset sequence to obtain the corresponding quantum program. Referring to FIG. 6, the multi-bit quantum logic gate is a CZ gate; after the CZ gate is obtained, the CZ gate and 2N computing units (denoted by the unit symbols shown in FIG. 6) are arranged in the order shown in FIG. 6, and the quantum program corresponding to the quantum wire of FIG. 6 is obtained.
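Interleaving a multi-bit gate with single-qubit units can be sketched on 2 qubits with the Kronecker product. The CZ gate is the 4x4 diagonal unitary diag(1, 1, 1, -1); the single-qubit gate choices here are illustrative assumptions:

```python
import numpy as np

# Sketch of step S2122: a layer of single-qubit computing-unit gates on
# qubits 0 and 1, followed by an entangling CZ (controlled-Z) gate.
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

layer = np.kron(ry(0.3), ry(0.5))        # one gate per qubit in this layer
state = CZ @ layer @ np.array([1, 0, 0, 0], dtype=complex)   # start in |00>
```

The CZ gate flips the sign of the |11⟩ amplitude, entangling the qubits so that the computing units on different qubits no longer act independently.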
After the quantum program is obtained, the process proceeds to step S213, and a classification model may be created by encapsulating the quantum program. Specifically, the quantum program may be encapsulated together with a classical program, or encapsulated separately. For example, the quantum program may be passed in as a parameter through a corresponding interface to create a quantum computing layer including the quantum program. Meanwhile, other programs may also be encapsulated in the quantum computing layer, for example, a program for computing the gradient of the quantum program's output with respect to its parameters, so that the relevant gradients can be computed when the classification model is trained and the update of the quantum program's parameters can be completed. The quantum computing layers may then be encapsulated: each quantum computing layer included in the classification model, together with the forward operation sequence of the layers, is encapsulated into the classification model to facilitate subsequent forward operation of the classification model.
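The encapsulation idea can be sketched in a few lines. All class and method names below are our own illustrations, not the framework's actual API:

```python
# The quantum program is wrapped into a "quantum computing layer" object,
# and the layers plus their forward operation order are encapsulated into
# the classification model (illustrative sketch only).
class QuantumComputingLayer:
    def __init__(self, quantum_program, parameters):
        self.quantum_program = quantum_program   # callable: (x, params) -> out
        self.parameters = parameters             # second-gate parameters

    def forward(self, x):
        return self.quantum_program(x, self.parameters)

class ClassificationModel:
    def __init__(self, layers):
        self.layers = layers                     # forward operation sequence

    def forward(self, x):
        out = x
        for layer in self.layers:                # run layers in defined order
            out = layer.forward(out)
        return out
```

Storing the layer order inside the model is what allows the later forward operation (step S222) to run without the caller knowing the internal structure.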
In step S22, the data to be classified is input to the classification model, the classification model is operated in the forward direction to obtain an output result of the classification model, and then the output result is subjected to correlation operation to obtain a classification result.
Alternatively, referring to fig. 8, in step S22, inputting data to be classified as the input data to the classification model, and obtaining an output result, the method includes:
S221, inputting data to be classified as the input data to the first parameter-containing sub-logic gate of the classification model.
S222, the classification model is operated in the forward direction, and an output result output by the classification model is obtained.
In step S221, the data to be classified is transmitted into the classification model through the calling interface of the classification model; the data to be classified is then input into the first parameter-containing sub-logic gate as input data and determined as the parameter of the unitary matrix corresponding to the first parameter-containing sub-logic gate, thereby determining the classification model. It should be noted that the data to be classified input to each first parameter-containing sub-logic gate may be the same.
After the data to be classified is input, in step S222 the classification model is operated in the forward direction: the different quantum computing layers of the classification model are operated according to the sequence defined in the classification model, and the operation of a quantum computing layer includes the operation of the quantum program within it. For the quantum program, the computing units, i.e., the first parameter-containing sub-logic gates and the second parameter-containing sub-logic gates, act in turn on the corresponding qubits according to the sequence predefined by the quantum program, and finally the measurement gates measure the qubits to obtain the output result output by the classification model.
For the data to be classified, the data may be arranged according to a preset data structure to obtain tensor data. For example, if the obtained data to be classified are 1, 2 and 3, they may be converted into the vector structure [1,2,3] as tensor data. The tensor data is then input into the first parameter-containing sub-logic gate of the classification model in the manner described above.
Optionally, in step S22, determining a classification result based on the output result includes:
and carrying out weighted summation on the output result to obtain a classification result.
Specifically, the output result may include different quantum states and probabilities of occurrence of the corresponding quantum states, and the classification result of the data to be classified may be obtained by performing weighted summation on the probabilities, for example, different probabilities may be multiplied by corresponding weights respectively to obtain different products, and then the products are added to obtain the classification result.
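The weighted summation can be sketched directly. The weight values below are arbitrary illustrations, not values prescribed by the method:

```python
# The output result maps each measured quantum state to its probability;
# each probability is multiplied by its weight and the products are added.
output_result = {"0": 0.72, "1": 0.28}   # state -> measured probability
weights = {"0": 0.0, "1": 1.0}           # illustrative weight per state

classification_score = sum(p * weights[s] for s, p in output_result.items())
label = 1 if classification_score > 0.5 else 0   # threshold the weighted sum
print(label)   # -> 0
```

With these weights, the score is simply the probability of measuring |1⟩, and thresholding it yields the class label.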
Based on the above technical scheme, a classification model including at least two computing units is created by calling the quantum module, and the same data to be classified is input as input data to the first parameter-containing sub-logic gate of each computing unit. When the classification model is operated, the same data to be classified is therefore processed multiple times by the quantum program, which improves the classification accuracy of the data to be classified.
Fig. 9 is another flow diagram illustrating a method of data classification based on a machine learning framework including a quantum module and a classical module, as shown in fig. 9, the method including:
and S91, calling the quantum module to create a classification model comprising a quantum program, wherein the quantum program comprises at least two computing units, each computing unit comprises a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate is used for encoding input data to a quantum state of a qubit, and the second parameter-containing sub-logic gate is used for evolving the quantum state of the qubit to a target state.
And S92, calling the classical module to create a training layer of the classification model and acquiring training data.
And S93, inputting the training data into the classification model to obtain output data.
And S94, inputting the output data into the training layer to update the parameters of the second parameter-containing sub-logic gate to obtain the trained classification model.
S95, inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result.
In step S91, see step S21, and step S95 see step S22.
After the classification model is created, step S92 is executed to call the classical module to create a training layer (the classical module is described above). The training layer can include a loss function layer and an optimizer layer. The loss function layer includes the loss function of the classification model, such as a mean square error loss function or a cross entropy loss function; the optimizer layer includes a method for updating the parameters of the classification model, such as a gradient descent algorithm, for example Stochastic Gradient Descent (SGD), the Adaptive Gradient algorithm (AdaGrad), or Adaptive Moment Estimation (Adam). The loss function layer and the optimizer layer can be called through corresponding interfaces. The training data is data input into the classification model to train it, and may be, for example, picture data to be classified.
In step S93, the training data may be input into the first parameter-containing sub-logic gate as input data and determined as the parameter of the unitary matrix corresponding to the first parameter-containing sub-logic gate, thereby determining the classification model; the classification model is then operated in the forward direction to perform an operation on the training data, and the output data of the classification model is obtained.
In step S94, the output data may be input into the loss function layer in the training layer to calculate the loss function value of the classification model according to the output data. The loss function value is then input into the optimizer layer, the parameters of the second parameter-containing sub-logic gate in the classification model are updated according to the loss function value and the corresponding gradient descent algorithm, and the value of the loss function after the parameter update is recalculated. When this value is smaller than a threshold, training is stopped, and the classification model with the updated parameters is used as the trained classification model for performing classification calculation on the data to be classified in step S95. If the value of the loss function is greater than or equal to the threshold, training of the classification model may continue until the corresponding loss function value is less than the threshold. Of course, in other possible embodiments, the training data may be input into the classification model for multiple rounds of training, and training is stopped when the number of training iterations reaches a preset number.
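The training loop of steps S93 and S94 can be sketched on a toy scale. This is our illustration only: a single RX→RY computing unit, a mean square error loss, and a finite-difference gradient standing in for the framework's own gradient program; the sample, label, and hyperparameters are arbitrary:

```python
import numpy as np

def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def forward(x, w):
    # forward operation: encode x, apply trainable gate, measure P(|1>)
    state = ry(w) @ rx(x) @ np.array([1, 0], dtype=complex)
    return np.abs(state[1]) ** 2

def loss(w, x, y):
    return (forward(x, w) - y) ** 2        # mean square error loss

x, y = 0.4, 1.0                            # one training sample and its label
w, lr, eps = 0.1, 0.5, 1e-5                # initial parameter, step, finite diff
history = []
for _ in range(100):
    grad = (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)
    w -= lr * grad                         # optimizer step on the second gate
    history.append(loss(w, x, y))
```

Only the second gate's parameter `w` is updated; the data-encoding parameter `x` stays fixed, matching the method's division of roles between the two gates.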
For example, a quantum program corresponding to the quantum wire shown in fig. 10 is created, and a classification model including the quantum program is created. The classification model includes 4 computing units, namely computing unit 101, computing unit 102, computing unit 103 and computing unit 104. The first parameter-containing sub-logic gates in each computing unit include a first RZ gate and a first RY gate that act in turn on qubit 105; the data to be classified received by each first RZ gate is the same, and the data to be classified received by each first RY gate is the same. The second parameter-containing sub-logic gates in each computing unit include a second RZ gate, a second RY gate and a third RZ gate that act in turn on qubit 105. Finally, a measurement gate is provided to measure qubit 105 to obtain the output result or output data of the classification model. The classification model is used to classify randomly generated two-dimensional coordinate data; the two classes of data are separated by a circle of radius 1, where data inside the circle are labeled 1 and data outside the circle are labeled 0. 500 training data and 500 test data are obtained, and the classification accuracy of the classification model on the training data and the test data is shown in fig. 11, where curve 111 represents the accuracy on the training data and curve 112 represents the accuracy on the test data. It can be seen that as the number of training iterations increases, the classification accuracy of the classification model gradually increases and finally reaches a high level.
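The experiment's dataset generation can be sketched as described: random two-dimensional coordinates, labeled 1 inside the circle of radius 1 and 0 outside. The sampling range and seed are our assumptions:

```python
import numpy as np

# Randomly generated 2D coordinate data for the circle-classification task.
rng = np.random.default_rng(0)
points = rng.uniform(-1.5, 1.5, size=(500, 2))                 # 500 samples
labels = (points[:, 0] ** 2 + points[:, 1] ** 2 < 1).astype(int)
```

Drawing training and test sets independently in this way gives the two disjoint 500-sample sets used to plot curves 111 and 112.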
Fig. 12 is a block diagram illustrating an apparatus for data classification based on a machine learning framework including quantum modules according to an exemplary embodiment, and as shown in fig. 12, the apparatus 120 includes:
a first creating module 121, configured to invoke the quantum module to create a classification model including a quantum program, where the quantum program includes at least two computing units, each computing unit includes a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate that are cascaded, the first parameter-containing sub-logic gate is configured to encode input data into a quantum state of a qubit, and the second parameter-containing sub-logic gate is configured to evolve the quantum state of the qubit into a target state;
the first input module 122 is configured to input data to be classified into the classification model as the input data, obtain an output result, and determine a classification result based on the output result.
Optionally, the first creating module 121 is further configured to:
calling the quantum module to create the first parameter-containing sub-logic gate, calling the quantum module to create the second parameter-containing sub-logic gate, and cascading the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate to obtain a computing unit;
creating a quantum program comprising at least two of the computational units;
and creating a classification model encapsulated with the quantum program.
Optionally, the quantum module comprises:
an angle encoding unit configured to create a parametric sub-logic gate;
the first creating module 121 is further configured to:
calling the angle coding unit to create the first parameter-containing sub-logic gate which takes the input data as self parameters.
Optionally, the first parameter-containing sub-logic gate comprises at least one of an RX gate, a RY gate, and an RZ gate.
Optionally, the quantum module comprises:
an angle encoding unit configured to create a parametric sub-logic gate;
optionally, the first creating module 121 is further configured to:
and calling the angle coding unit to create the second parameter-containing sub-logic gate, whose parameters are updated when the classification model is trained.
Optionally, the second parameter-containing sub-logic gate comprises at least one of an RX gate, a RY gate, and an RZ gate.
Optionally, the first creating module 121 is further configured to:
and arranging at least two computing units according to a preset sequence to obtain a quantum program acting on one or more quantum bits.
Optionally, the first creating module 121 is further configured to:
obtaining a multi-bit quantum logic gate;
and arranging the multi-bit quantum logic gate and at least two computing units according to a preset sequence to obtain a quantum program acting on a plurality of quantum bits.
Optionally, the machine learning framework further comprises a classical module, the apparatus 120 further comprises:
the second creating module is used for calling the classical module to create a training layer of the classification model and acquiring training data;
the second input module is used for inputting the training data into the classification model to obtain output data;
and the updating module is used for inputting the output data into the training layer so as to update the parameters of the second parameter-containing sub-logic gate to obtain the trained classification model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Yet another embodiment of the present invention further provides a storage medium, in which a computer program is stored, where the computer program is configured to execute the steps in the above-mentioned data classification method based on machine learning framework when running.
Specifically, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Yet another embodiment of the present invention further provides an electronic device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the steps in the above-mentioned data classification method based on machine learning framework.
Specifically, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Specifically, in this embodiment, the processor may be configured to execute the following steps by a computer program:
calling the quantum module to create a classification model comprising a quantum program, wherein the quantum program comprises at least two computing units, each computing unit comprises a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate is used for encoding input data to a quantum state of a qubit, and the second parameter-containing sub-logic gate is used for evolving the quantum state of the qubit to a target state;
and inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result.
The construction, features and functions of the present invention are described in detail in the embodiments illustrated in the drawings, which are only preferred embodiments of the present invention, but the present invention is not limited by the drawings, and all equivalent embodiments modified or changed according to the idea of the present invention should fall within the protection scope of the present invention without departing from the spirit of the present invention covered by the description and the drawings.

Claims (10)

1. A method of machine learning framework-based data classification, wherein the machine learning framework comprises a quantum module comprising an angle coding unit configured to create sub-logic gates comprising parameters, the method comprising:
invoking the quantum module to create a classification model comprising a quantum program, the quantum program comprising at least two computing units, each computing unit comprising a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate being used for encoding input data into a quantum state of a qubit, the second parameter-containing sub-logic gate being used for evolving the quantum state of the qubit to a target state, wherein the quantum program consists of a quantum wire, measurement operations for the qubits in the quantum wire, registers for storing measurement results, and control flow nodes, the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate are created by the angle coding unit, the input data are parameters of the first parameter-containing sub-logic gate, and the parameters of the second parameter-containing sub-logic gate are updated when the classification model is trained;
and inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result.
2. The method of claim 1, wherein said invoking the quantum module creates a classification model comprising a quantum program, comprising:
calling the quantum module to create the first parameter-containing sub-logic gate, calling the quantum module to create the second parameter-containing sub-logic gate, and cascading the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate to obtain a computing unit;
creating a quantum program comprising at least two of the computational units;
and creating a classification model encapsulated with the quantum program.
3. The method of claim 1, wherein the first parameter-containing sub-logic gate comprises at least one of an RX gate, a RY gate, and an RZ gate.
4. The method of claim 1, wherein the second parameter-containing sub-logic gate comprises at least one of an RX gate, a RY gate, and an RZ gate.
5. The method of claim 2, wherein the creating a quantum program comprising at least two of the computational units comprises:
and arranging at least two computing units according to a preset sequence to obtain a quantum program acting on one or more quantum bits.
6. The method of claim 2, wherein the creating a quantum program comprising at least two of the computational units comprises:
obtaining a multi-bit quantum logic gate;
and arranging the multi-bit quantum logic gate and at least two computing units according to a preset sequence to obtain a quantum program acting on a plurality of quantum bits.
7. The method of claim 1, wherein the machine learning framework further comprises a classical module, the method further comprising:
calling the classical module to create a training layer of the classification model and acquiring training data;
inputting the training data into the classification model to obtain output data;
and inputting the output data into the training layer to update the parameters of the second parameter-containing sub-logic gate to obtain the trained classification model.
8. An apparatus for machine learning framework-based data classification, the machine learning framework comprising a quantum module including an angle encoding unit configured to create sub-logic gates comprising parameters, the apparatus comprising:
a first creation module to invoke the quantum module to create a classification model comprising a quantum program, the quantum program comprising at least two computing units, each computing unit comprising a first parameter-containing sub-logic gate and a second parameter-containing sub-logic gate which are cascaded, the first parameter-containing sub-logic gate being used for encoding input data into a quantum state of a qubit, the second parameter-containing sub-logic gate being used for evolving the quantum state of the qubit to a target state, wherein the quantum program consists of a quantum wire, measurement operations for the qubits in the quantum wire, registers for storing measurement results, and control flow nodes, the first parameter-containing sub-logic gate and the second parameter-containing sub-logic gate are created by the angle coding unit, the input data are parameters of the first parameter-containing sub-logic gate, and the parameters of the second parameter-containing sub-logic gate are updated when the classification model is trained;
and the first input module is used for inputting the data to be classified into the classification model as the input data to obtain an output result, and determining the classification result based on the output result.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 7 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.
CN202210282555.4A 2022-03-22 2022-03-22 Data classification method based on machine learning framework and related equipment Active CN114358317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210282555.4A CN114358317B (en) 2022-03-22 2022-03-22 Data classification method based on machine learning framework and related equipment


Publications (2)

Publication Number Publication Date
CN114358317A CN114358317A (en) 2022-04-15
CN114358317B true CN114358317B (en) 2022-06-21

Family

ID=81094555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210282555.4A Active CN114358317B (en) 2022-03-22 2022-03-22 Data classification method based on machine learning framework and related equipment

Country Status (1)

Country Link
CN (1) CN114358317B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115293254A (en) * 2022-07-29 2022-11-04 合肥本源量子计算科技有限责任公司 Quantum multilayer perceptron-based classification method and related equipment
CN116258197B (en) * 2023-05-16 2023-09-08 之江实验室 Distributed training acceleration method and system based on parameter calculation and communication scheduling

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109416928A (en) * 2016-06-07 2019-03-01 伊路米纳有限公司 For carrying out the bioinformatics system, apparatus and method of second level and/or tertiary treatment
CN109478258A (en) * 2016-06-02 2019-03-15 谷歌有限责任公司 Use sub- logic control training quantum evolution
CN110692067A (en) * 2017-06-02 2020-01-14 谷歌有限责任公司 Quantum neural network
WO2020168158A1 (en) * 2019-02-15 2020-08-20 Rigetti & Co, Inc. Automated synthesizing of quantum programs

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190042970A1 (en) * 2018-09-27 2019-02-07 Xiang Zou Apparatus and method for a hybrid classical-quantum processor
US11321625B2 (en) * 2019-04-25 2022-05-03 International Business Machines Corporation Quantum circuit optimization using machine learning
US11387993B2 (en) * 2020-07-10 2022-07-12 Accenture Global Solutions Limited Quantum information interception prevention

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109478258A (en) * 2016-06-02 2019-03-15 Google LLC Training quantum evolutions using sublogical controls
CN109416928A (en) * 2016-06-07 2019-03-01 Illumina, Inc. Bioinformatics systems, apparatuses, and methods for performing secondary and/or tertiary processing
CN110692067A (en) * 2017-06-02 2020-01-14 Google LLC Quantum neural network
WO2020168158A1 (en) * 2019-02-15 2020-08-20 Rigetti & Co, Inc. Automated synthesizing of quantum programs

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Classical and Quantum Algorithms for Orthogonal Neural Networks; Iordanis Kerenidis et al.; arXiv; 2021-06-14; full text *
Quantum-classical hybrid neural networks and their application in fault diagnosis; Dou Tong et al.; Control Theory & Applications; 2021-11-30; Vol. 38, No. 11; full text *

Also Published As

Publication number Publication date
CN114358317A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN114358317B (en) Data classification method based on machine learning framework and related equipment
CN112016691B (en) Quantum circuit construction method and device
CN114358319B (en) Machine learning framework-based classification method and related device
CN114358318B (en) Machine learning framework-based classification method and related device
CN114358295B (en) Machine learning framework-based classification method and related device
CN114792378A (en) Quantum image identification method and device
CN114358216B (en) Quantum clustering method based on machine learning framework and related device
CN114819163B (en) Training method and device for quantum generation countermeasure network, medium and electronic device
CN115293254A (en) Quantum multilayer perceptron-based classification method and related equipment
CN114372539B (en) Machine learning framework-based classification method and related equipment
CN116011681A (en) Meteorological data prediction method and device, storage medium and electronic device
CN116403657A (en) Drug response prediction method and device, storage medium and electronic device
CN115809707B (en) Quantum comparison operation method, device, electronic device and basic arithmetic component
CN114764620A (en) Quantum convolution operator
CN114764619A (en) Convolution operation method and device based on quantum circuit
CN115775029A (en) Quantum circuit conversion method, device, medium, and electronic device
CN114372584B (en) Transfer learning method based on machine learning framework and related device
CN116432710B (en) Machine learning model construction method, machine learning framework and related equipment
CN114372583B (en) Quantum program optimization method based on machine learning framework and related equipment
CN116432764B (en) Machine learning framework
CN116431807B (en) Text classification method and device, storage medium and electronic device
CN116523059A (en) Data processing method, machine learning framework and related equipment
CN116306952A (en) Molecular property prediction method and device, storage medium and electronic device
CN116257668A (en) Data classification method and related equipment
CN116542337A (en) Data processing method, machine learning framework and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant