WO2023125858A1 - Data processing method, machine learning framework system and related device - Google Patents

Data processing method, machine learning framework system and related device

Info

Publication number
WO2023125858A1
WO2023125858A1 PCT/CN2022/143598 CN2022143598W
Authority
WO
WIPO (PCT)
Prior art keywords
quantum
layer
machine learning
computing
learning model
Prior art date
Application number
PCT/CN2022/143598
Other languages
English (en)
Chinese (zh)
Inventor
方圆
王伟
李蕾
窦猛汉
周照辉
王汉超
孔小飞
Original Assignee
本源量子计算科技(合肥)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202111680614.5A external-priority patent/CN116432764A/zh
Priority claimed from CN202111680572.5A external-priority patent/CN116432721A/zh
Priority claimed from CN202210083468.6A external-priority patent/CN116523059A/zh
Application filed by 本源量子计算科技(合肥)股份有限公司
Publication of WO2023125858A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N10/00Quantum computing, i.e. information processing based on quantum-mechanical phenomena

Definitions

  • the present disclosure generally relates to the field of quantum computing technology. More specifically, the present disclosure relates to a data processing method, a machine learning framework system, a data processing device, a storage medium, and an electronic device.
  • Machine learning models are widely used in artificial intelligence research due to their excellent performance. By training a machine learning model with labeled training data, a model that meets expectations can be obtained and then applied to specific tasks such as speech recognition and image recognition.
  • A machine learning model does not require its standards for a specific application task to be set manually; through training, it establishes the corresponding working standards by itself, giving it good adaptability to different application tasks.
  • With the development of quantum computing, more and more machine learning models include quantum programs.
  • This disclosure proposes a data processing method, a machine learning framework, and related equipment, aiming to reduce the difficulty of debugging machine learning models that include quantum programs and to improve development efficiency.
  • the present disclosure provides solutions in the following aspects.
  • The present disclosure provides a data processing method applied to an electronic device including a machine learning framework, the machine learning framework including a data structure module, a quantum module and a classical module, the method comprising: calling the data structure module to acquire input data and create tensor data including the input data; calling the quantum module and the classical module to create a machine learning model, the machine learning model including multiple computing layers and the forward propagation relationships among them; determining, from the multiple computing layers, the first computing layer to be executed corresponding to the tensor data; creating, based on the forward propagation relationship, a computation graph including the sub-computation graph corresponding to the first computing layer; and determining an output result of the machine learning model based on the computation graph.
  • Creating the computation graph including the sub-computation graph corresponding to the first computing layer based on the forward propagation relationship includes: determining whether there is, before the first computing layer, an unexecuted second computing layer associated with the first computing layer; if so, executing the second computing layer and determining the computation relationship between the output of the second computing layer and the output of the first computing layer; and adding, based on that computation relationship, the sub-computation graph corresponding to the first computing layer to the computation graph corresponding to the second computing layer to obtain a new computation graph.
  • The method further includes: if there is no unexecuted second computing layer associated with the first computing layer, creating the computation graph corresponding to the first computing layer.
  • Adding the sub-computation graph corresponding to the first computing layer to the computation graph corresponding to the second computing layer based on the computation relationship to obtain a new computation graph includes: adding the computation node corresponding to the output of the first computing layer to the computation graph corresponding to the second computing layer as a successor of the computation node corresponding to the output of the second computing layer; and adding the computation nodes corresponding to the dependent variables of the first computing layer to that graph as predecessors of the computation node corresponding to the output of the first computing layer, obtaining a new computation graph.
  • Determining the output result of the machine learning model based on the computation graph includes: executing the first computing layer based on the computation graph to obtain the output of the first computing layer, and determining the output result of the machine learning model based on that output.
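The define-by-run behavior described above can be illustrated with a minimal sketch (all names here are hypothetical stand-ins, not the framework's actual API): each computing layer is executed as soon as its node is added to the computation graph, so its output is available immediately, layer by layer.

```python
class Node:
    """A computation-graph node holding a layer's output and its predecessors."""
    def __init__(self, value, predecessors=()):
        self.value = value                    # output produced when the layer ran
        self.predecessors = list(predecessors)

def run_layer(fn, *inputs):
    """Execute a computing layer immediately and link its node into the graph."""
    out = fn(*(n.value for n in inputs))
    return Node(out, predecessors=inputs)

# Input data wrapped as tensor-like leaf node
x = Node(3.0)
# First computing layer: executed right away, sub-graph appended as successor
h = run_layer(lambda v: 2.0 * v, x)
# Second computing layer chains onto the first via the forward relationship
y = run_layer(lambda v: v + 1.0, h)

print(y.value)  # 7.0 — available without building the whole graph first
```

Because each node is executed on creation, intermediate results such as `h.value` can be inspected for debugging before later layers are even defined.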
  • The method further includes: calling the classical module to create a training layer of the machine learning model; inputting the output result of the machine learning model into the training layer, so as to add the sub-computation graph corresponding to the training layer to the computation graph according to the relationship between the training layer and the machine learning model; and updating the parameters of the machine learning model based on the computation graph to obtain the trained machine learning model.
  • the training layer includes a loss function layer and an optimizer layer
  • The classical module includes: a loss function unit, configured to calculate a loss function of the machine learning model; and an optimizer unit, configured to update, when training the machine learning model, the parameters of the model based on the loss function so as to optimize the model. Calling the classical module to create the training layer of the machine learning model includes: calling the loss function unit to create the loss function layer, and calling the optimizer unit to create the optimizer layer.
  • Inputting the output result of the machine learning model into the training layer, so as to add the sub-computation graph corresponding to the training layer to the computation graph based on the relationship between the training layer and the machine learning model, includes: inputting the output result of the machine learning model into the loss function layer to calculate the value of the loss function of the machine learning model, and adding the computation node corresponding to the value of the loss function to the computation graph as a successor of the computation node corresponding to the output result of the model.
  • Updating the parameters of the machine learning model based on the computation graph to obtain the trained machine learning model includes: when it is determined that the value of the loss function does not satisfy the preset condition, inputting the value of the loss function into the optimizer layer so as to update the parameters of the machine learning model based on the value of the loss function and the computation graph; determining the value of the loss function of the model after updating the parameters; and, when it is determined that the value of the loss function satisfies the preset condition, taking the model with the updated parameters as the trained machine learning model.
  • Updating the parameters of the machine learning model based on the value of the loss function and the computation graph includes: calculating, based on the value of the loss function and the computation graph, the gradient of the loss function relative to the parameters of the machine learning model, and updating the parameters based on the gradient and a gradient descent algorithm.
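A toy illustration of this update rule, using a made-up quadratic loss, a fixed learning rate, and a threshold on the loss value as the preset condition (none of these specifics come from the disclosure):

```python
def loss(w):
    # Hypothetical quadratic loss with minimum at w = 2.0
    return (w - 2.0) ** 2

def grad(w):
    # Analytic gradient of the toy loss
    return 2.0 * (w - 2.0)

w, lr, threshold = 10.0, 0.1, 1e-6
while loss(w) > threshold:   # preset condition on the loss value
    w -= lr * grad(w)        # gradient descent parameter update

print(round(w, 2))  # 2.0 — parameter converges near the loss minimum
```

Each iteration shrinks the distance to the optimum by a constant factor, so the loop terminates once the loss value satisfies the preset condition.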
  • Calculating the gradient of the loss function relative to the parameters of the machine learning model based on the value of the loss function and the computation graph includes: determining, based on the computation graph, the path from the computation node corresponding to the value of the loss function to the computation node corresponding to a parameter of the machine learning model; calculating, based on the value of the loss function, the intermediate gradient of each non-leaf computation node on the path relative to its predecessor node; and multiplying all the calculated intermediate gradients together to obtain the gradient of the loss function with respect to the parameter.
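The path rule above amounts to the chain rule. A minimal sketch, assuming a hypothetical path loss → y → h → w with three non-leaf nodes whose local (intermediate) gradients were recorded during the forward pass:

```python
import math

# Illustrative intermediate gradients along the path from the loss node
# back to the parameter node: d(loss)/dy, dy/dh, dh/dw
intermediate_grads = [2.0, 0.5, 3.0]

# Chain rule: multiply all intermediate gradients along the path
grad = math.prod(intermediate_grads)

print(grad)  # 3.0
```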
  • The present disclosure also provides a data processing device applied to an electronic device including a machine learning framework, the machine learning framework including a data structure module, a quantum module and a classical module, the device including: a first creation module, configured to call the data structure module to obtain input data and create tensor data including the input data, and to call the quantum module and the classical module to create a machine learning model, the machine learning model including multiple computing layers and the forward propagation relationships among them; a determining module, configured to determine, from the multiple computing layers, the first computing layer to be executed corresponding to the tensor data; a second creation module, configured to create, based on the forward propagation relationship, a computation graph including the computation nodes corresponding to the first computing layer; and an output module, configured to determine the output result of the machine learning model based on the computation graph.
  • The second creation module is further configured to: determine, based on the forward propagation relationship, whether there is an unexecuted second computing layer associated with the first computing layer before it; when there is, execute the second computing layer and determine the computation relationship between the output of the second computing layer and the output of the first computing layer; and add, based on that computation relationship, the sub-computation graph corresponding to the first computing layer to the computation graph corresponding to the second computing layer to obtain a new computation graph.
  • The device further includes: a third creation module, configured to create the computation graph corresponding to the first computing layer when there is no unexecuted second computing layer associated with the first computing layer.
  • The second creation module is further configured to: based on the computation relationship, add the computation node corresponding to the output of the first computing layer to the computation graph corresponding to the second computing layer as a successor of the computation node corresponding to the output of the second computing layer; and add the computation nodes corresponding to the dependent variables of the first computing layer to that graph as predecessors of the computation node corresponding to the output of the first computing layer, obtaining a new computation graph.
  • The output module is further configured to: execute the first computing layer based on the computation graph to obtain the output of the first computing layer, and determine the output result of the machine learning model based on that output.
  • The device further includes: a fourth creation module, configured to call the classical module to create the training layer of the machine learning model; an input module, configured to input the output result of the machine learning model into the training layer, so as to add the sub-computation graph corresponding to the training layer to the computation graph based on the relationship between the training layer and the machine learning model; and an update module, configured to update the parameters of the machine learning model based on the computation graph to obtain the trained machine learning model.
  • the training layer includes a loss function layer and an optimizer layer
  • The classical module includes: a loss function unit, configured to calculate a loss function of the machine learning model; and an optimizer unit, configured to update, when training the machine learning model, the parameters of the model based on the loss function so as to optimize the model. The fourth creation module is further configured to: call the loss function unit to create the loss function layer, and call the optimizer unit to create the optimizer layer.
  • The input module is further configured to: input the output result of the machine learning model into the loss function layer so as to calculate the value of the loss function of the machine learning model, and add the computation node corresponding to the value of the loss function to the computation graph as a successor of the computation node corresponding to the output result of the model.
  • The update module is further configured to: when it is determined that the value of the loss function does not satisfy the preset condition, input the value of the loss function into the optimizer layer so as to update the parameters of the machine learning model based on the value of the loss function and the computation graph; determine the value of the loss function of the model after updating the parameters; and, when it is determined that the value of the loss function satisfies the preset condition, take the model with the updated parameters as the trained machine learning model.
  • The update module is further configured to: calculate the gradient of the loss function relative to the parameters of the machine learning model based on the value of the loss function and the computation graph, and update the parameters of the model based on the gradient and a gradient descent algorithm.
  • The update module is further configured to: determine, based on the computation graph, the path from the computation node corresponding to the loss function to the computation node corresponding to a parameter of the machine learning model; calculate, based on the value of the loss function, the intermediate gradient of each non-leaf computation node on the path relative to its predecessor node; and multiply all the calculated intermediate gradients together to obtain the gradient of the loss function relative to the parameter.
  • The present disclosure also provides a machine learning framework, the framework including: a data structure module, configured to acquire input data and create tensor data including the input data; a quantum module, configured to create a machine learning model; and a classical module, configured to create a machine learning model, the machine learning model including multiple computing layers and the forward propagation relationships among them. The classical module is further configured to: determine, from the multiple computing layers, the first computing layer to be executed corresponding to the tensor data; create, based on the forward propagation relationship, a computation graph including the computation nodes corresponding to the first computing layer; and determine the output result of the machine learning model based on the computation graph.
  • The present disclosure also provides a storage medium in which a computer program is stored, the computer program being configured to execute, when run, the steps of any one of the methods described in the first aspect above.
  • The present disclosure also provides an electronic device including a memory and a processor, a computer program being stored in the memory, and the processor being configured to run the computer program to perform the steps of any one of the methods described in the first aspect above.
  • For a machine learning model created by calling the machine learning framework, among the multiple computing layers included in the model, the first computing layer to be executed is determined first; a computation graph containing the sub-computation graph corresponding to that layer is then created, and the output result of the model is determined from the computation graph. In other words, each computing layer is executed immediately after its computation graph is created, without first creating the computation graphs of all computing layers and only then executing them.
  • When debugging, the machine learning model can thus be run layer by layer and debugged according to the layer-by-layer results, which makes it easier to locate problems in the model, reduces the difficulty of debugging, and speeds up debugging.
  • This disclosure further proposes a data processing method, a machine learning framework and related equipment, aiming to improve the efficiency with which classical-quantum hybrid machine learning models process data, as well as the overall computing performance.
  • the present disclosure provides solutions in the following aspects.
  • The present disclosure provides a data processing method applied to an electronic device including a machine learning framework, the machine learning framework including a data structure module, a classical module and a quantum module, the method comprising: calling the quantum module to build a quantum computing layer, calling the classical module to build a classical computing layer, and calling the data structure module to build the forward propagation relationship between the classical computing layer and the quantum computing layer; calling the classical module to encapsulate the quantum computing layer, the classical computing layer and the forward propagation relationship to obtain a machine learning model, where the classical computing layer, the quantum computing layer, the forward propagation relationship and the machine learning model use the same data structure; and calling the machine learning model to perform data processing.
  • The classical module includes a classical neural network layer unit, and the classical neural network layer unit includes at least one of the following: a specified-model classical neural network layer subunit, configured to create a classical neural network layer of a specified model through an encapsulated classical neural network layer interface; and an activation layer subunit, configured to create an activation layer that applies a nonlinear transformation to the output of the classical neural network layer.
  • Calling the classical module to build a classical computing layer includes: calling the specified-model classical neural network layer subunit to build a classical neural network layer and using it as the classical computing layer; or calling the specified-model classical neural network layer subunit and the activation layer subunit to build a classical neural network layer and an activation layer, and using both as the classical computing layer.
  • The classical module further includes an abstract class submodule, and calling the classical module to encapsulate the quantum computing layer, the classical computing layer and the forward propagation relationship to obtain a machine learning model includes: calling the abstract class submodule to initialize and encapsulate the quantum computing layer and the classical computing layer based on an initialization function, obtaining the initialized and encapsulated quantum computing layer and classical computing layer; calling the abstract class submodule to encapsulate the forward propagation relationship based on a forward propagation function, obtaining the encapsulated forward propagation relationship; and calling the abstract class submodule to encapsulate, based on a module class, the initialized and encapsulated quantum computing layer and classical computing layer together with the encapsulated forward propagation relationship, obtaining the machine learning model.
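A hedged sketch of this encapsulation pattern (class and attribute names are illustrative stand-ins, not the abstract class submodule's real interface): an initialization function registers the computing layers, and a forward propagation function declares how data flows between them.

```python
class Module:
    """Illustrative stand-in for the module class used for encapsulation."""
    def __call__(self, x):
        return self.forward(x)

class HybridModel(Module):
    def __init__(self):
        # Initialization function: encapsulate the computing layers.
        # Simple callables stand in for real quantum/classical layers.
        self.quantum_layer = lambda x: x * 0.5
        self.classic_layer = lambda x: x + 1.0

    def forward(self, x):
        # Forward propagation function: classical layer feeds the quantum layer
        return self.quantum_layer(self.classic_layer(x))

model = HybridModel()
print(model(2.0))  # 1.5
```

The pattern mirrors how define-by-run frameworks typically separate layer construction (in the initializer) from the forward propagation relationship (in `forward`).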
  • The quantum module includes: a quantum logic gate submodule, configured to create quantum logic gates acting on qubits through an encapsulated quantum logic gate interface; a quantum measurement submodule, configured to create quantum measurement operations acting on qubits through an encapsulated quantum measurement interface; and a quantum computing layer submodule, configured to create the quantum computing layer of the machine learning model through an encapsulated quantum computing layer interface, or through the quantum logic gate submodule and the quantum measurement submodule. Calling the quantum module to build the quantum computing layer includes: calling the quantum computing layer submodule to build the quantum computing layer, or calling the quantum logic gate submodule and the quantum measurement submodule to build it.
  • The quantum computing layer includes at least one of the following: a general quantum computing layer, a compatible quantum computing layer, a noisy quantum computing layer, a quantum convolution layer, and a quantum fully connected layer. The quantum computing layer submodule includes at least one of the following: a general quantum program encapsulation unit, configured to create the general quantum computing layer through an encapsulated general quantum computing layer interface, which provides quantum programs created with the quantum computing programming library; a compatible quantum program encapsulation unit, configured to create the compatible quantum computing layer through an encapsulated compatible quantum computing layer interface, which provides quantum programs created with the quantum computing programming library included in the machine learning framework; and a noisy quantum program encapsulation unit, configured to create the noisy quantum computing layer through an encapsulated noisy quantum computing layer interface, which provides quantum programs that take the influence of noise into account, created based on the quantum computing programming library included in the machine learning framework.
  • The quantum logic gate submodule includes: a quantum state encoding logic gate unit, configured to create logic gates that encode tensor data created from input data into the quantum states of specified qubits in the quantum computing layer; and a quantum state evolution logic gate unit, configured to create logic gates that perform the evolution corresponding to a target operation on specified qubits in the quantum computing layer.
  • The quantum measurement submodule includes at least one of the following: an expectation measurement unit, configured to measure a specified qubit in the quantum computing layer based on a target observable to obtain the corresponding expected value; a probability measurement unit, configured to measure the specified qubit to obtain the probabilities of occurrence of the different ground states of its quantum state; and a count measurement unit, configured to measure the specified qubit to obtain the numbers of occurrences of the different ground states of its quantum state.
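The three measurement modes can be illustrated on a single simulated qubit, with no quantum SDK assumed (the RY state preparation and all names are purely illustrative): an expectation of the Z observable, ground-state probabilities, and sampled counts.

```python
import math
import random
from collections import Counter

theta = math.pi / 3
# Statevector after RY(theta)|0> = [cos(theta/2), sin(theta/2)]
amp0, amp1 = math.cos(theta / 2), math.sin(theta / 2)

# Probability measurement: occurrence probabilities of ground states |0>, |1>
p0, p1 = amp0 ** 2, amp1 ** 2

# Expectation measurement against the Z observable: <Z> = p0 - p1
exp_z = p0 - p1

# Count measurement: sampled occurrences of each ground state over 1000 shots
random.seed(0)
counts = Counter(random.choices("01", weights=[p0, p1], k=1000))

print(round(exp_z, 3))  # 0.5, i.e. cos(theta)
```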
  • The present disclosure also provides a data processing device applied to an electronic device including a machine learning framework, the machine learning framework including a data structure module, a classical module and a quantum module, the device including: a construction unit, configured to call the quantum module to build the quantum computing layer, call the classical module to build the classical computing layer, and call the data structure module to build the forward propagation relationship between the classical computing layer and the quantum computing layer; an encapsulation unit, configured to call the classical module to encapsulate the quantum computing layer, the classical computing layer and the forward propagation relationship to obtain a machine learning model, where the classical computing layer, the quantum computing layer, the forward propagation relationship and the machine learning model use the same data structure; and a processing unit, configured to call the machine learning model to perform data processing.
  • The classical module includes a classical neural network layer unit, and the classical neural network layer unit includes at least one of the following: a specified-model classical neural network layer subunit, configured to create a classical neural network layer of a specified model through an encapsulated classical neural network layer interface; and an activation layer subunit, configured to create an activation layer that applies a nonlinear transformation to the output of the classical neural network layer. When calling the classical module to build a classical computing layer, the construction unit is specifically configured to: call the specified-model classical neural network layer subunit to build a classical neural network layer and use it as the classical computing layer; or call the specified-model classical neural network layer subunit and the activation layer subunit to build a classical neural network layer and an activation layer, and use both as the classical computing layer.
  • The classical module further includes an abstract class submodule. When calling the classical module to encapsulate the quantum computing layer, the classical computing layer and the forward propagation relationship to obtain a machine learning model, the encapsulation unit is specifically configured to: call the abstract class submodule to initialize and encapsulate the quantum computing layer and the classical computing layer based on an initialization function, obtaining the initialized and encapsulated quantum computing layer and classical computing layer; call the abstract class submodule to encapsulate the forward propagation relationship based on a forward propagation function, obtaining the encapsulated forward propagation relationship; and call the abstract class submodule to encapsulate, based on a module class, the initialized and encapsulated quantum computing layer and classical computing layer together with the encapsulated forward propagation relationship, obtaining the machine learning model.
  • The quantum module includes: a quantum logic gate submodule, configured to create quantum logic gates acting on qubits through an encapsulated quantum logic gate interface; a quantum measurement submodule, configured to create quantum measurement operations acting on qubits through an encapsulated quantum measurement interface; and a quantum computing layer submodule, configured to create the quantum computing layer of the machine learning model through an encapsulated quantum computing layer interface, or through the quantum logic gate submodule and the quantum measurement submodule. When calling the quantum module to build the quantum computing layer, the construction unit is specifically configured to: call the quantum computing layer submodule to build the quantum computing layer, or call the quantum logic gate submodule and the quantum measurement submodule to build it.
  • The quantum computing layer includes at least one of the following: a general quantum computing layer, a compatible quantum computing layer, a noisy quantum computing layer, a quantum convolution layer, and a quantum fully connected layer. The quantum computing layer submodule includes at least one of the following: a general quantum program encapsulation unit, configured to create the general quantum computing layer through an encapsulated general quantum computing layer interface, which provides quantum programs created with the quantum computing programming library; a compatible quantum program encapsulation unit, configured to create the compatible quantum computing layer through an encapsulated compatible quantum computing layer interface, which provides quantum programs created with the quantum computing programming library included in the machine learning framework; and a noisy quantum program encapsulation unit, configured to create the noisy quantum computing layer through an encapsulated noisy quantum computing layer interface, which provides quantum programs that take the influence of noise into account, created based on the quantum computing programming library included in the machine learning framework.
  • The quantum logic gate submodule includes: a quantum state encoding logic gate unit, configured to create logic gates that encode tensor data created from input data into the quantum states of specified qubits in the quantum computing layer; and a quantum state evolution logic gate unit, configured to create logic gates that perform the evolution corresponding to a target operation on specified qubits in the quantum computing layer.
• the quantum measurement submodule includes at least one of the following: an expected measurement unit configured to measure a specified qubit in the quantum computing layer based on a target observable to obtain a corresponding expected value; a probability measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the probabilities of occurrence of the different ground states of the quantum state of the specified qubit; a number-of-times measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the number of occurrences of the different ground states of the quantum state of the specified qubit.
• the present disclosure also provides a machine learning framework, characterized in that the machine learning framework includes: a quantum module configured to build a quantum computing layer; a classical module configured to build a classical computing layer; a data structure module configured to construct the forward propagation relationship between the classical computing layer and the quantum computing layer; the forward propagation relationship is encapsulated to obtain a machine learning model, where the machine learning model is used for data processing, and the classical computing layer, the quantum computing layer, and the forward propagation relationship of the machine learning model use the same data structure.
• the classical module includes a classical neural network layer unit, and the classical neural network layer unit includes at least one of the following: a specified model classical neural network layer subunit configured to create a classical neural network layer of a specified model through the encapsulated classical neural network layer interface; an activation layer subunit configured to create an activation layer for nonlinear transformation of the output of the classical neural network layer; the classical computing layer includes the classical neural network layer, or the classical neural network layer and the activation layer.
• the classical module further includes an abstract class submodule configured to: initialize and encapsulate the quantum computing layer and the classical computing layer based on an initialization function to obtain the initialized and encapsulated quantum computing layer and classical computing layer; encapsulate the forward propagation relationship based on a forward propagation function to obtain the encapsulated forward propagation relationship; and encapsulate, based on a module class, the initialized and encapsulated quantum computing layer and classical computing layer together with the encapsulated forward propagation relationship to obtain a machine learning model.
• the quantum module includes a quantum logic gate submodule configured to create a quantum logic gate acting on a qubit through a packaged quantum logic gate interface; a quantum measurement submodule configured to create, through a packaged quantum measurement interface, quantum measurement operations acting on qubits; and a quantum computing layer submodule configured to create the quantum computing layer of the machine learning model through the encapsulated quantum computing layer interface, or through the quantum logic gate submodule and the quantum measurement submodule.
• the quantum computing layer includes at least one of the following: a general quantum computing layer, a compatible quantum computing layer, a noisy quantum computing layer, a quantum convolution layer, and a quantum fully connected layer; the quantum computing layer submodule includes at least one of the following: a general quantum program encapsulation unit configured to create the general quantum computing layer through the encapsulated general quantum computing layer interface, where the general quantum computing layer interface is used to provide quantum programs created based on a quantum computing programming library; a compatible quantum program encapsulation unit configured to create the compatible quantum computing layer through the encapsulated compatible quantum computing layer interface, where the compatible quantum computing layer interface is used to provide quantum programs created based on the quantum computing programming library included in the machine learning framework; a noise-containing quantum program encapsulation unit configured to create the noise-containing quantum computing layer through the encapsulated noise-containing quantum computing layer interface, where the noise-containing quantum computing layer interface is used to provide quantum programs that consider the influence of noise, created based on the quantum computing programming library included in the machine learning framework.
• the quantum logic gate submodule includes: a quantum state encoding logic gate unit configured to create a logic gate that encodes tensor data created based on input data into a quantum state of a specified qubit in the quantum computing layer; a quantum state evolution logic gate unit configured to create a logic gate that performs evolution corresponding to a target operation on a specified qubit in the quantum computing layer.
• the quantum measurement submodule includes at least one of the following: an expected measurement unit configured to measure a specified qubit in the quantum computing layer based on a target observable to obtain a corresponding expected value; a probability measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the probabilities of occurrence of the different ground states of the quantum state of the specified qubit; a number-of-times measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the number of occurrences of the different ground states of the quantum state of the specified qubit.
  • the present disclosure further provides a storage medium, in which a computer program is stored, wherein the computer program is configured to execute the method described in any one of the above items when running.
• the present disclosure also provides an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method described in any one of the above.
• the data processing method provided by the present disclosure is applied to an electronic device with a machine learning framework including a data structure module, a classical module and a quantum module. The classical computing layer and the quantum computing layer can be constructed through the same machine learning framework, without creating them through two separate machine learning frameworks, which reduces the cumbersome interaction between different machine learning frameworks. Secondly, the computing layers (classical computing layers or quantum computing layers) communicate with each other through tensors with the same structure, which improves the computing efficiency between the computing layers, thereby improving the efficiency with which the classical-quantum hybrid machine learning model processes data, as well as the overall computing performance.
  • this disclosure proposes a machine learning framework aimed at improving the development efficiency of quantum machine learning.
  • the present disclosure provides solutions in the following aspects.
• the present disclosure provides a machine learning framework, the framework comprising: a data structure module configured to create tensor data for input into a machine learning model and to perform operations on the tensor data; a quantum module configured to create a quantum computing layer for creating a machine learning model; a classical module configured to create a classical computing layer for creating a machine learning model, an abstract class layer for encapsulating the quantum computing layer and the classical computing layer, and a machine learning model training layer for training and optimizing the machine learning model.
• the data structure module includes: a tensor creation submodule configured to arrange input data according to a preset data structure to create tensor data for input into the machine learning model, and/or to create tensor data that is arranged in the preset data structure and has determined values for input into the machine learning model; an operation submodule configured to perform operations on the tensor data.
• the operation submodule includes at least one of the following: a mathematical operation unit configured to perform mathematical operations on the tensor data; a logic operation unit configured to perform logic operations on the tensor data; a data transformation unit configured to perform transformation operations on the tensor data to transform the data structure of the tensor data.
• the quantum module includes: a quantum logic gate submodule configured to create a quantum logic gate acting on a qubit through a packaged quantum logic gate interface; a quantum measurement submodule configured to create, through a packaged quantum measurement interface, quantum measurement operations acting on qubits; a quantum computing layer submodule configured to create the quantum computing layer of the machine learning model through the encapsulated quantum computing layer interface, or through the quantum logic gate submodule and the quantum measurement submodule.
• the quantum computing layer includes at least one of the following: a general quantum computing layer, a compatible quantum computing layer, a noisy quantum computing layer, a quantum convolution layer, and a quantum fully connected layer; the quantum computing layer submodule includes at least one of the following: a general quantum program encapsulation unit configured to create the general quantum computing layer through the encapsulated general quantum computing layer interface, where the general quantum computing layer interface is used to provide quantum programs created based on a quantum computing programming library; a compatible quantum program encapsulation unit configured to create the compatible quantum computing layer through the encapsulated compatible quantum computing layer interface, where the compatible quantum computing layer interface is used to provide quantum programs created based on the quantum computing programming library included in the machine learning framework; a noise-containing quantum program encapsulation unit configured to create the noise-containing quantum computing layer through the encapsulated noise-containing quantum computing layer interface, where the noise-containing quantum computing layer interface is used to provide quantum programs that consider the influence of noise, created based on the quantum computing programming library included in the machine learning framework.
• the quantum logic gate submodule includes: a quantum state encoding logic gate unit configured to create a logic gate that encodes tensor data created based on input data into a quantum state of a specified qubit in the quantum computing layer; a quantum state evolution logic gate unit configured to create a logic gate that performs evolution corresponding to a target operation on a specified qubit in the quantum computing layer.
• the quantum state encoding logic gate unit includes at least one of the following: a ground state encoding subunit configured to create a logic gate that encodes tensor data created based on input data into a first quantum state of a specified qubit in the quantum computing layer, where the ground state of the first quantum state is used to represent the tensor data; an amplitude encoding subunit configured to create a logic gate that encodes tensor data created based on input data into a second quantum state of a specified qubit in the quantum computing layer, where the amplitude of the ground state of the second quantum state is used to represent the tensor data; an angle encoding subunit configured to create a parameter-containing quantum logic gate that takes the tensor data created based on the input data as its parameter, for acting on a specified qubit in the quantum computing layer to obtain a third quantum state representing the tensor data; an instantaneous quantum polynomial (IQP) encoding subunit configured to create the logic gates of an IQP circuit that take the tensor data created based on the input data as parameters.
• the quantum state evolution logic gate unit includes at least one of the following: a basic quantum logic gate subunit configured to create a single-bit quantum logic gate or a multi-bit quantum logic gate acting on a specified qubit in the quantum computing layer; a common quantum logic gate subunit configured to create a common logic gate through an encapsulated common logic gate interface, where the common logic gate includes a combination of several quantum logic gates corresponding to the basic quantum logic gate subunit.
• the quantum measurement submodule includes at least one of the following: an expected measurement unit configured to measure a specified qubit in the quantum computing layer based on a target observable to obtain a corresponding expected value; a probability measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the probabilities of occurrence of the different ground states of the quantum state of the specified qubit; a number-of-times measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the number of occurrences of the different ground states of the quantum state of the specified qubit.
• the classical module includes: a classical computing layer submodule configured to create a classical computing layer for creating a machine learning model; an abstract class submodule configured to create an abstract class layer for encapsulating the quantum computing layer and the classical computing layer; and a machine learning model training layer submodule configured to create a machine learning model training layer for training and optimizing the machine learning model.
• the classical computing layer includes a classical neural network layer, and the classical computing layer submodule includes a classical neural network layer unit; the classical neural network layer unit includes at least one of the following: a specified model classical neural network layer subunit configured to create a classical neural network layer of a specified model through the encapsulated classical neural network layer interface; an activation layer subunit configured to create an activation layer for nonlinear transformation of the output of the classical neural network layer.
• the machine learning model training layer submodule includes: a loss function unit configured to calculate the loss function of the machine learning model; an optimizer unit configured to update the parameters of the machine learning model based on the loss function when training the machine learning model, so as to optimize the machine learning model.
• the quantum module and the classical module can be called to create the quantum computing layer and the classical computing layer of the machine learning model, respectively, and the abstract class layer can be created through the classical module to encapsulate the quantum computing layer and the classical computing layer, thereby forming a machine learning model that includes the quantum computing layer and the classical computing layer and is convenient to train.
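To make the angle encoding described above concrete, the following single-qubit sketch uses the input value as the rotation angle of an RY gate acting on |0⟩, so that the resulting quantum state represents the data. The function name `ry` and the chosen input value are illustrative only and are not interfaces of the disclosed framework.

```python
import numpy as np

# RY rotation gate as a 2x2 unitary matrix (standard definition)
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

x = 0.8  # classical input value, used as the gate parameter (angle encoding)
state = ry(x) @ np.array([1, 0], dtype=complex)  # apply RY(x) to |0>

# probabilities of the ground states |0> and |1>: [cos^2(x/2), sin^2(x/2)]
print(np.round(np.abs(state) ** 2, 4))  # [0.8484 0.1516]
```

The same pattern extends to multi-qubit circuits, where each input component parameterizes one rotation gate.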
  • Fig. 1 is a hardware structural block diagram of a computer terminal according to a data processing method shown in an exemplary embodiment
• Fig. 2 is a flowchart of a data processing method shown according to an exemplary embodiment
  • Fig. 3 is a block diagram of a machine learning framework system shown according to an exemplary embodiment
• Fig. 4 is a flowchart of step S23 included in a data processing method according to an exemplary embodiment
• Fig. 5 is a flowchart of step S233 included in a data processing method according to an exemplary embodiment
  • Fig. 6 is a calculation diagram of a machine learning model shown according to an exemplary embodiment
• Fig. 7 is a flowchart of step S24 included in a data processing method according to an exemplary embodiment
  • Fig. 8 is another flowchart of a data processing method according to an exemplary embodiment
  • Fig. 9 is another flowchart of a data processing method according to an exemplary embodiment
  • Fig. 10 is a block diagram of typical modules included in a data processing device according to an exemplary embodiment
  • Fig. 11 is a flowchart of step S95 included in a data processing method according to an exemplary embodiment
  • Fig. 12 is a flowchart of step S97 included in a data processing method according to an exemplary embodiment
  • Fig. 13 is a flowchart of step S971 included in a data processing method according to an exemplary embodiment
  • Fig. 14 is a flowchart of step S9711 included in a data processing method according to an exemplary embodiment
  • Fig. 15 is a schematic flowchart of creating a machine learning model according to an exemplary embodiment
  • Fig. 16 is a block diagram of a data structure module according to an exemplary embodiment
  • Fig. 17 is a block diagram of an operation submodule according to an exemplary embodiment
  • Fig. 18 is a block diagram of a quantum module according to an exemplary embodiment
  • Fig. 19 is a block diagram of a quantum logic gate sub-module shown according to an exemplary embodiment
  • Fig. 20 is a block diagram of a quantum state encoding logic gate unit shown according to an exemplary embodiment
  • Fig. 21 is a schematic diagram of a logic gate of an IQP circuit according to an exemplary embodiment
  • Fig. 22 is a block diagram of a quantum state evolution logic gate unit shown according to an exemplary embodiment
  • Fig. 23 is a block diagram of a quantum measurement sub-module shown according to an exemplary embodiment
  • Fig. 24 is a block diagram of a quantum computing layer sub-module shown according to an exemplary embodiment
  • Fig. 25 is a block diagram of a classic module according to an exemplary embodiment
  • Fig. 26 is a block diagram of a classical computing layer sub-module shown according to an exemplary embodiment
  • Fig. 27 is a block diagram of a classical neural network layer unit shown according to an exemplary embodiment
  • Fig. 28 is another block diagram of a machine learning framework system according to an exemplary embodiment
  • Fig. 29 is a block diagram of a data processing device according to an exemplary embodiment
  • Fig. 30 is another block diagram of a data processing device according to an exemplary embodiment.
  • Embodiments of the present disclosure firstly provide a data processing method, which can be applied to electronic devices, such as computer terminals, specifically, ordinary computers, quantum computers, and the like.
  • Fig. 1 is a block diagram showing a hardware structure of a computer terminal according to a data processing method according to an exemplary embodiment.
  • the computer terminal may include one or more (only one is shown in Figure 1) processors 102 (the processors 102 may include but not limited to processing devices such as microprocessor MCU or programmable logic device FPGA, etc.) and a memory 104 for storing data processing methods based on quantum circuits.
  • the above-mentioned computer terminal may also include a transmission device 106 and an input and output device 108 for communication functions.
  • the structure shown in FIG. 1 is only for illustration, and it does not limit the structure of the above computer terminal.
  • the computer terminal may also include more or fewer components than shown in FIG. 1 , or have a different configuration than that shown in FIG. 1 .
• the memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the data processing method in the embodiments of the present application; the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, thereby realizing the above-mentioned method.
  • the memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory that is remotely located relative to the processor 102, and these remote memories may be connected to a computer terminal through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 106 is used to receive or transmit data via a network.
  • the specific example of the above-mentioned network may include a wireless network provided by the communication provider of the computer terminal.
  • the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet in a wireless manner.
  • a real quantum computer has a hybrid structure, which consists of two parts: one is a classical computer, which is responsible for performing classical calculation and control; the other is a quantum device, which is responsible for running quantum programs and realizing quantum computing.
• a quantum program is a program, written in a quantum language defined on the basis of a classical language (such as QRunes), that characterizes qubits and their evolution. It is a sequence of instructions that can run on a quantum computer, supports operations on quantum logic gates, and ultimately realizes quantum computing. Specifically, a quantum program is a series of instruction sequences that operate quantum logic gates in a certain order.
• quantum circuits, also called quantum logic circuits, are the most commonly used general-purpose quantum computing model. They represent circuits that operate on qubits at an abstract level; their components include qubits, wires (timelines), and various quantum logic gates, and the results often need to be read out through quantum measurement operations.
• the wires of a quantum circuit can be regarded as connections through time, with the state of the qubits they transmit evolving with time.
  • a quantum program can be composed of quantum circuits, measurement operations for qubits in quantum circuits, registers for saving measurement results, and control flow nodes (jump instructions).
• a quantum circuit can contain tens, hundreds, or even thousands of quantum logic gate operations.
• the execution process of a quantum program is the process of executing all quantum logic gates in a certain time sequence; it should be noted that timing here refers to the time order in which the individual quantum logic gates are executed.
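The execution process described above can be illustrated with a small single-qubit simulation in which a quantum program is modeled as an ordered list of gate matrices applied to the state in time sequence. This is an illustrative sketch only, not the disclosed framework's representation of quantum programs.

```python
import numpy as np

# Two single-qubit gates as unitary matrices
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
X = np.array([[0, 1], [1, 0]], dtype=complex)                # Pauli-X gate

program = [H, X]                            # instruction sequence: H then X
state = np.array([1, 0], dtype=complex)     # initial state |0>
for gate in program:
    state = gate @ state                    # execute gates in time order

# measurement probabilities of |0> and |1> after the program runs
print(np.round(np.abs(state) ** 2, 3))      # [0.5 0.5]
```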
  • Quantum logic gates can be used to evolve quantum states. Quantum logic gates are the basis of quantum circuits.
• Quantum logic gates include single-bit quantum logic gates, such as the Hadamard gate (H gate), Pauli-X gate (X gate), Pauli-Y gate (Y gate), Pauli-Z gate (Z gate), RX gate (RX rotation gate), RY gate (RY rotation gate), and RZ gate (RZ rotation gate); and multi-bit quantum logic gates, such as the CNOT gate, CR gate, iSWAP gate, and Toffoli gate.
• Quantum logic gates are generally represented by unitary matrices; a unitary matrix is not merely a form of matrix but also a kind of operation and transformation. Generally, the action of a quantum logic gate on a quantum state is computed by left-multiplying the unitary matrix with the column vector corresponding to the quantum state.
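As a minimal illustration of this left-multiplication, the following sketch applies the Hadamard gate's unitary matrix to the state vector of |0⟩ and checks the unitarity condition U†U = I (illustrative only):

```python
import numpy as np

# Hadamard gate as a 2x2 unitary matrix
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# |0> as a state vector
state = np.array([1, 0], dtype=complex)

# action of the gate = left-multiply the unitary matrix by the state vector
new_state = H @ state
print(np.round(new_state.real, 4))  # [0.7071 0.7071], the |+> state

# a unitary matrix satisfies U_dagger @ U = I
assert np.allclose(H.conj().T @ H, np.eye(2))
```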
  • Fig. 2 is a flow chart showing a data processing method according to an exemplary embodiment.
• the present embodiment provides a data processing method, which can be applied to an electronic device including a machine learning framework system 30 as shown in FIG. 3, the machine learning framework system 30 including a data structure module 31, a quantum module 32 and a classical module 33, the method comprising:
• S21: call the data structure module to obtain input data and create tensor data including the input data, and call the quantum module and the classical module to create a machine learning model, where the machine learning model includes multiple computing layers and the forward propagation relationships among the multiple computing layers.
  • the machine learning framework system 30 integrates numerous function sets for creating and training machine learning models, and these functions can be conveniently called through the defined interfaces to implement related operations on the machine learning models.
  • the machine learning framework system 30 may include:
  • a data structure module 31 configured to obtain input data and create tensor data including the input data
  • a quantum module 32 configured to create a machine learning model
  • the classic module 33 is configured to create a machine learning model, the machine learning model includes a plurality of calculation layers and a forward propagation relationship between a plurality of the calculation layers;
• the classic module 33 is further configured to determine, from the plurality of calculation layers, the first calculation layer to be executed corresponding to the tensor data; create, based on the forward propagation relationship, a computation graph including the computing node corresponding to the first calculation layer; and determine the output result of the machine learning model based on the computation graph.
  • the data structure module 31 defines the data structure of the tensor data. By calling the data structure module 31, the input data can be converted into tensor data for input into the machine learning model for forward calculation.
  • the data structure module 31 can also be configured to perform operations on tensor data.
• the data structure module 31 can also define mathematical operations and logical operations between tensor data, and the data structure module 31 can then be called to create the classical calculation layer of the machine learning model based on the operational relationships between the tensor data.
• the data structure module 31 can be used to arrange the input data according to a preset data structure to create tensor data for input into the machine learning model, and/or to create tensor data that is arranged in the preset data structure and has determined values for input into the machine learning model.
  • the input data may be arranged according to a preset data structure to obtain tensor data, and the input data may be stored as a part of the tensor data.
• for example, if the acquired input data is 1, 2, 3, the input data can be converted into the vector structure [1, 2, 3] as part of the tensor data.
  • the input data may be the data used to train the machine learning model, or the data to be predicted and classified.
• in addition to data values, the tensor data may also include information about the tensor data from which its data value is calculated, and the gradient functions of the tensor data relative to the tensor data from which its data value is calculated. The information about the tensor data from which the data value is calculated may include the variables of that tensor data, the storage addresses of its data values, and the data values themselves, as long as the information indicates that the node corresponding to that tensor data is a predecessor node of the node corresponding to the tensor data containing the data value.
• for example, the tensor data y includes the data value corresponding to y, such as [1,2,3], and also includes information about the tensor data w, x, and b from which y is calculated, as well as the gradient functions of y relative to w, x, and b, respectively.
  • the information may include the data value storage addresses of w, x, and b
• for y = w*x + b, the tensor data y includes the gradient function of y relative to w (namely x), the gradient function of y relative to x (namely w), and the gradient function of y relative to b (namely 1).
• the gradient values of y relative to w, x, and b are calculated through back propagation: specifically, the data value of y is obtained directly from the tensor data y, together with the data values of w, x, and b and the corresponding gradient functions, and the gradient values of y relative to w, x, and b are then calculated from these data values and the corresponding gradient functions.
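The back-propagation mechanism described here can be sketched with a minimal, hypothetical `Tensor` class that records predecessor tensors and gradient functions, matching the example where the gradient functions of y = w*x + b relative to w, x, and b are x, w, and 1. The class and function names are illustrative only and are not the framework's actual interfaces.

```python
# Minimal autograd sketch (hypothetical, not the framework's actual API).
class Tensor:
    def __init__(self, value, parents=None):
        self.value = value
        # each parent entry: (predecessor tensor, gradient function w.r.t. it)
        self.parents = parents or []

def mul(a, b):
    # d(a*b)/da = b, d(a*b)/db = a
    return Tensor(a.value * b.value, [(a, lambda: b.value), (b, lambda: a.value)])

def add(a, b):
    # d(a+b)/da = 1, d(a+b)/db = 1
    return Tensor(a.value + b.value, [(a, lambda: 1.0), (b, lambda: 1.0)])

w, x, b = Tensor(2.0), Tensor(3.0), Tensor(1.0)
y = add(mul(w, x), b)  # y = w*x + b = 7.0

def backward(t, grads, upstream=1.0):
    # back propagation: multiply the upstream gradient by each local gradient
    # function and accumulate into the predecessor's gradient
    for parent, grad_fn in t.parents:
        g = upstream * grad_fn()
        grads[parent] = grads.get(parent, 0.0) + g
        backward(parent, grads, g)

grads = {}
backward(y, grads)
print(y.value, grads[w], grads[x], grads[b])  # 7.0 3.0 2.0 1.0
```

The gradient of y relative to w evaluates to the data value of x (3.0), relative to x to the data value of w (2.0), and relative to b to 1.0, as in the example above.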
  • the quantum computing layer of the machine learning model can be created by calling the quantum module 32.
  • the quantum computing layer is a program module containing a quantum program, which can be used to realize the quantum computing of the corresponding quantum program.
• the quantum program is packaged according to certain standards, making the quantum computing layer easy to use when creating and training machine learning models; the part of the machine learning model realized by quantum computing can be understood as the corresponding quantum computing layer.
  • the quantum program is a program for implementing quantum computing.
  • the quantum program can be obtained by calling the quantum module 32 to create quantum logic gates that act on the qubits in a specific order, and the quantum program can be packaged to obtain the quantum computing layer.
  • the classic calculation layer of the machine learning model can be created by calling the classic module 33.
• the classic calculation layer is the classical calculation part of the machine learning model, which can be created by the classic module 33; the classical computing programs are packaged according to certain standards, making the classical computing layer easy to use when training machine learning models.
• after the quantum computing layer and the classical computing layer are created, they can be encapsulated through the classic module 33 to create an abstract class layer that meets certain standards.
• the abstract class layer is realized through the class mechanism of the programming language; by encapsulating the quantum computing layer and the classical computing layer in this way, machine learning models that meet certain standards can be created.
• the created abstract class layer defines how the machine learning model performs forward computation, which facilitates forward computation when training the machine learning model.
  • the calculation result used to calculate the loss function can be obtained, and the sequence relationship of gradient calculation during reverse calculation can also be obtained.
  • the classic module 33 can also be used to create a training layer of the machine learning model to train the machine learning model.
  • the classical module 33 can also be called to determine, from the plurality of computing layers, the first computing layer to be executed corresponding to the tensor data; based on the forward propagation relationship, a computation graph including the computing nodes corresponding to the first computing layer can be created; the output result of the machine learning model is then determined based on the computation graph, completing the forward operation of the machine learning model.
  • the specific operation process please refer to the subsequent description of relevant steps in the data processing method.
  • the quantum module 32 can be called to create a quantum computing layer
  • the classical module 33 can be called to create a classical computing layer
  • the classical module 33 can be used to encapsulate the quantum computing layer and the classical computing layer to obtain a machine learning model that mixes quantum computing and classical computing.
  • the data structure module 31 is invoked to create tensor data containing the input data for input into the machine learning model.
  • the created machine learning model has multiple computing layers. For example, there can be multiple quantum computing layers, or multiple classical computing layers.
  • in step S22, according to the calculation relational expressions of the multiple computing layers in the machine learning model, the computing layer to be executed with the tensor data as a dependent variable is determined to be the first computing layer.
  • step S23 is executed to create a new computing graph, which may include sub-computing graphs corresponding to the first computing layer.
  • in step S23, referring to FIG. 4 , creating a computation graph including the sub-computation graph corresponding to the first computing layer based on the forward propagation relationship includes:
  • in step S231, the output of the second computing layer can be the input of the first computing layer, that is, its dependent variable, so the first computing layer can only be executed after the second computing layer has been executed. It can therefore be determined, according to the forward propagation relationship, whether there is an unexecuted second computing layer before the first computing layer.
  • in step S232, if there is an unexecuted second computing layer with the aforementioned association to the first computing layer (for example, the output of the second computing layer is the input of the first computing layer), the second computing layer is executed. Specifically, a sub-computation graph of the second computing layer can first be created, the sub-computation graph is added to the computation graph corresponding to the executed computing layers, and the second computing layer is then executed based on that computation graph to obtain its output. In addition, the calculation relationship between this output and the output of the first computing layer must be determined; for example, this output is a dependent variable of the output of the first computing layer.
  • a sub-computation graph corresponding to the first computing layer may be created, and then the sub-computation graph is added to a computing graph corresponding to the second computing layer to obtain a new computing graph.
  • in step S233, referring to FIG. 5 , adding the sub-computation graph corresponding to the first computing layer to the computation graph corresponding to the second computing layer based on the computation relationship, to obtain a new computation graph, includes:
  • in step S2331, the output of the first computing layer is obtained from the output of the second computing layer, so in the computation graph corresponding to the second computing layer, the computing node corresponding to the output of the first computing layer is added as the successor node of the computing node corresponding to the output of the second computing layer.
  • in the graph data structure, this predecessor-successor relationship can be represented, for example, by a linked list.
  • in step S2332, the output of the first computing layer is also obtained from dependent variables other than the output of the second computing layer, so in the aforementioned computation graph, the computing nodes corresponding to these dependent variables of the first computing layer can be added as predecessor nodes of the computing node corresponding to the output of the first computing layer, obtaining a new computation graph.
  • when creating the sub-computation graph 61, since c and d are the dependent variables of w, the computing node 611 corresponding to c and the computing node 613 corresponding to d are used as the predecessor nodes of the computing node 612 corresponding to w. The sub-computation graph 61 is thus created, the second computing layer is executed according to the sub-computation graph 61, and execution then proceeds to the first computing layer.
  • the computing node 614 corresponding to the output y of the first computing layer is taken as the successor node of the computing node 612 corresponding to the output w of the second computing layer and added to the computation graph corresponding to the second computing layer, that is, the sub-computation graph 61; then the computing node 615 corresponding to x is added to that computation graph as a predecessor node of computing node 614, obtaining a new computation graph.
  • the resulting computation graph consists of computing node 611, computing node 612, computing node 613, computing node 614, and computing node 615.
  • for the remaining computing layers, the same method can be used: first construct the sub-computation graph corresponding to the computing layer, then add the sub-computation graph to the computation graph of the executed computing layers, so that a new computation graph is obtained.
  • the corresponding calculation layer can be executed according to the new calculation graph to obtain the output of the calculation layer.
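The incremental graph construction described above can be sketched as follows. The `Node` class and the c/d/w/x/y example values are assumptions for illustration (mirroring computing nodes 611-615), not the framework's real data structures: each node links to its predecessors, a sub-graph is built per layer, appended to the existing graph, and the layer is executed immediately.

```python
# Minimal computation-graph node with predecessor links (illustrative only).
class Node:
    def __init__(self, name, op=None, preds=()):
        self.name, self.op, self.preds = name, op, list(preds)
        self.value = None

    def run(self):
        if self.op is None:          # input/parameter node: value set directly
            return self.value
        self.value = self.op(*[p.run() for p in self.preds])
        return self.value

# Second computing layer: w = c * d (the sub-computation graph built first)
c, d = Node("c"), Node("d")
c.value, d.value = 2.0, 5.0
w = Node("w", op=lambda a, b: a * b, preds=[c, d])
print(w.run())  # 10.0 — second layer executed from its sub-graph

# First computing layer: y = w * x, its node appended as successor of node w
x = Node("x"); x.value = 3.0
y = Node("y", op=lambda a, b: a * b, preds=[w, x])
print(y.run())  # 30.0 — the new graph now holds nodes for c, d, w, x, y
```

Because each layer runs as soon as its sub-graph is attached, intermediate outputs are available for layer-by-layer debugging, as the text describes.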
  • determining the output result of the machine learning model based on the calculation graph includes:
  • the calculation node of the calculation graph may include a formula for forward operation.
  • the output of the first computing layer is calculated. It should be noted that, as shown in the figure, the quantum program corresponding to the quantum circuit 6161 is stored in the computing node 616, and the effect of the quantum circuit 6161 on the qubit is equivalent to the unitary matrix U(x;θ).
  • in step S242, according to the output of the first computing layer, the sub-computation graphs of subsequent computing layers can be added in turn to the computation graph of the executed computing layers, and the corresponding computing layer is executed according to each newly obtained computation graph; after all the computing layers have been executed, the output result of the machine learning model is obtained.
  • the calculation result of the last calculation layer may be the output result of the machine learning model.
  • for a machine learning model created by calling the machine learning framework system, among the multiple computing layers included in the model, the first computing layer to be executed is determined first; a computation graph containing the corresponding sub-computation graph is then created, and the output result of the machine learning model is determined according to the computation graph. That is, each computing layer is executed immediately after its computation graph is created, without having to create the computation graphs of all computing layers before execution.
  • when debugging, the machine learning model can thus be run layer by layer and debugged according to the results of the layer-by-layer operation, which is convenient for locating problems in the machine learning model, reduces the difficulty of debugging, and speeds up debugging.
  • Fig. 8 is another flow chart of a data processing method according to an exemplary embodiment.
  • the method can be applied to an electronic device including the machine learning framework system 30 shown in Fig. 3 ; the machine learning framework system 30 comprises the data structure module 31, the quantum module 32 and the classical module 33, and the method comprises:
  • S81: call the data structure module to obtain input data and create tensor data including the input data; call the quantum module and the classical module to create a machine learning model, the machine learning model including multiple computing layers and the forward propagation relationships between the computing layers.
  • step S81 and step S82 can refer to step S21 and step S22 respectively
  • step S83 to step S85 can refer to step S231 to step S233 respectively
  • step S87 can refer to step S24.
  • in step S83, if it is determined that there is no unexecuted second computing layer associated with the first computing layer before the first computing layer, execution proceeds to step S86, and a computation graph corresponding to the first computing layer is created directly. Specifically, the computing node corresponding to the output of the first computing layer is taken as the successor node of the computing nodes corresponding to the dependent variables of the first computing layer, a corresponding computation graph is created, and the output result of the machine learning model can subsequently be determined according to the created computation graph.
  • Fig. 9 is another flow chart of a data processing method according to an exemplary embodiment.
  • the method can be applied to an electronic device including the machine learning framework system 30 shown in Fig. 3 ; the machine learning framework system 30 comprises the data structure module 31, the quantum module 32 and the classical module 33, and the method comprises:
  • S91: call the data structure module to obtain input data and create tensor data including the input data; call the quantum module and the classical module to create a machine learning model, the machine learning model including multiple computing layers and the forward propagation relationships between the computing layers.
  • step S91 to step S94 can refer to step S21 to step S24 respectively.
  • step S95 can be performed to create a training layer for training the machine learning model.
  • the training layer can also be created when the machine learning model is created, which is not specifically limited in the present disclosure.
  • the training layer includes a loss function layer and an optimizer layer
  • the classical module 33 includes:
  • a loss function unit 331 configured to calculate a loss function of the machine learning model
  • the optimizer unit 332 is configured to update the parameters of the machine learning model based on the loss function when training the machine learning model, so as to optimize the machine learning model.
  • step S95 calling the classic module to create the training layer of the machine learning model, including:
  • the loss function unit 331 is used to calculate the loss function of the machine learning model. For example, the square of the difference between the output result of the machine learning model and the label data can be calculated as the loss function, or the binary cross entropy (Binary Cross Entropy) of the output result and the label data can be calculated as the loss function.
  • the optimizer unit 332 can be used to update the parameters of the machine learning model using the gradient descent algorithm to optimize them according to the gradient of the loss function relative to the parameters of the machine learning model.
  • the gradient descent algorithm used by the optimizer can be any one of Stochastic Gradient Descent (SGD), the Adaptive Gradient Algorithm (Adagrad), and Adaptive Moment Estimation (Adam).
  • other algorithms can also be used to update the parameters of the machine learning model.
  • the present disclosure does not specifically limit which types of loss functions the loss function unit 331 can calculate and which method the optimizer unit 332 uses to update parameters.
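The two example loss functions mentioned above (squared error and binary cross entropy) can be sketched in plain Python; the function names and shapes here are assumptions for illustration, not the loss function unit's actual interface.

```python
import math

# Squared error: (output - label)^2, as described for the loss function unit.
def squared_error(output, label):
    return (output - label) ** 2

# Binary cross entropy for a predicted probability in (0, 1) and a 0/1 label.
def binary_cross_entropy(output, label):
    return -(label * math.log(output) + (1 - label) * math.log(1 - output))

print(squared_error(0.8, 1.0))         # ≈ 0.04
print(binary_cross_entropy(0.8, 1.0))  # -ln(0.8) ≈ 0.2231
```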
  • step S951 can be performed to call the loss function unit 331 to create the loss function layer.
  • the loss function layer is a packaged calculation module that defines the calculation method of the loss function; during training of the machine learning model, the loss function of the model can then be calculated according to the calculation method defined by the loss function layer.
  • execution can then proceed to step S952, in which the optimizer unit 332 is called to create the optimizer layer, so that after the prediction result is input to the loss function layer and the loss function is calculated, the parameters of the machine learning model are updated according to the loss function until suitable parameters are obtained, the machine learning model achieves the expected effect, and the optimization of the machine learning model is completed.
  • step S96 when the output result is input into the training layer, it means to start the training process of the machine learning model.
  • the aforementioned calculation method can be used to add the sub-computation graph corresponding to the training layer to the computation graph corresponding to the executed computing layers.
  • step S97 the parameters of the machine learning model are updated according to the calculation graph to obtain the trained machine learning model.
  • in step S96, the output result of the machine learning model is input into the training layer, so that, based on the relationship between the training layer and the machine learning model, the sub-computation graph corresponding to the training layer is added to the computation graph, including:
  • the value of the loss function is calculated based on the output of the machine learning model, and the computing node corresponding to the value of the loss function, that is, the output of the loss function layer, is added to the computation graph.
  • since the output result of the machine learning model corresponds to computing node 617, the computing node 618 corresponding to the value of the loss function Loss can be added to the computation graph as the successor node of computing node 617.
  • step S97 referring to FIG. 12 , the parameters of the machine learning model are updated based on the calculation graph to obtain the trained machine learning model, including:
  • in step S971, whether the value of the loss function satisfies the preset condition can be judged by comparing the value of the loss function with a preset threshold; for example, when it is determined that the value of the loss function is greater than or equal to the threshold, the value of the loss function is input into the optimizer layer.
  • other methods can also be used to determine that the value of the loss function does not meet the preset condition, as long as the value of the loss function can be used to judge that the current machine learning model does not meet expectations.
  • the value of the loss function is input into the optimizer layer; using the value of the loss function and the relationships between the computing nodes corresponding to each piece of data in the computation graph, the gradient of the loss function relative to the parameters of the machine learning model can be calculated based on the chain rule of derivation, and the parameters of the machine learning model are then updated based on the gradient descent algorithm.
  • in step S972, after the parameters of the machine learning model are updated, the value of the corresponding loss function is recalculated, and whether the value of the loss function satisfies the preset condition is judged again; if not, return to step S971 and continue to update the parameters of the machine learning model according to the value of the loss function; if so, proceed to step S973.
  • step S973 when it is determined that the value of the loss function satisfies the preset condition, for example, the value of the loss function is less than the threshold value, it means that the output result of the machine learning model has a small gap with the label data, and the machine learning model can achieve the expected application effect. Then, the machine learning model after updating the parameters is used as the machine learning model after training, and the updating of parameters is stopped.
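The loop of steps S971-S973 can be sketched as follows; all numbers, the toy model y = w*x, and the learning rate are illustrative assumptions, not values from the disclosure. Parameters are updated by gradient descent until the value of the loss function falls below the preset threshold.

```python
# Toy training loop: squared-error loss, plain gradient descent (illustrative).
def train(w, x, label, lr=0.1, threshold=1e-4, max_steps=1000):
    loss = float("inf")
    for _ in range(max_steps):
        output = w * x                      # forward operation
        loss = (output - label) ** 2        # value of the loss function
        if loss < threshold:                # preset condition satisfied (S973)
            break                           # stop updating parameters
        grad_w = 2 * (output - label) * x   # gradient of loss relative to w
        w -= lr * grad_w                    # optimizer layer: gradient descent
    return w, loss

w, loss = train(w=0.0, x=1.0, label=2.0)
print(w, loss)  # w converges toward 2.0; loss ends below the threshold
```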
  • step S971 updating parameters of the machine learning model based on the value of the loss function and the calculation graph includes:
  • step S9711 for example, partial derivatives of the loss function with respect to its parameters may be calculated to obtain gradients of the loss function with respect to the parameters.
  • in step S9712, the obtained gradient is substituted into the update formula of the gradient descent algorithm to update the parameters of the machine learning model.
  • the gradient reflects the fastest changing direction of the loss function.
  • the gradient descent algorithm can therefore change the parameters quickly, increasing the speed at which the value of the loss function changes, so as to quickly find parameters for which the value of the loss function meets the preset condition and obtain a machine learning model that meets the requirements.
  • step S9711 calculating the gradient of the loss function relative to the parameters of the machine learning model based on the value of the loss function and the calculation graph, including:
  • in step S97111, the loss function can be used as the starting point and the selected parameter as the end point, and the shortest path between the two is determined in the computation graph. Then, in step S97112, for each computing node on the path, the intermediate gradient of the computing node relative to its predecessor nodes is calculated, where nodes with predecessor nodes are marked as leaf nodes and nodes without predecessor nodes are marked as non-leaf nodes. Since non-leaf nodes have no predecessor nodes, no intermediate gradients can be calculated for them; non-leaf nodes are generally parameters, and as the end of the path there is no need to calculate gradients for them.
  • after the intermediate gradients are calculated, step S97113 is executed: all the intermediate gradients along the aforementioned path are multiplied together, and the gradient of the loss function relative to the parameter is obtained according to the chain rule of derivation.
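The chain-rule product of steps S97111-S97113 can be sketched for a two-node path; the example Loss = (y - t)² with y = w·x and all values are assumptions for illustration.

```python
# Path from the loss node to parameter w: Loss -> y -> w (illustrative values).
w, x, t = 3.0, 2.0, 5.0
y = w * x                  # forward: y = 6.0
loss = (y - t) ** 2        # forward: loss = 1.0

# Intermediate gradient of each node on the path relative to its predecessor:
d_loss_d_y = 2 * (y - t)   # dLoss/dy = 2.0
d_y_d_w = x                # dy/dw   = 2.0

# Step S97113: multiply all intermediate gradients along the path.
d_loss_d_w = d_loss_d_y * d_y_d_w
print(d_loss_d_w)          # 4.0
```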
  • Fig. 15 is a schematic flowchart of creating a machine learning model according to an exemplary embodiment.
  • step 1501: calling the quantum module to build a quantum computing layer, calling the classical module to build a classical computing layer, and calling the data structure module to build a forward propagation relationship between the classical computing layer and the quantum computing layer;
  • step 1502: calling the classical module to encapsulate the quantum computing layer, the classical computing layer and the forward propagation relationship, so as to create a machine learning model.
  • the classical computing layer, the quantum computing layer, the forward propagation relationship, and the machine learning model have the same data structure, and the created machine learning model can be used for data processing.
  • quantum computing is a new type of computing mode that follows the laws of quantum mechanics to control quantum information units for computing. With the help of quantum superposition and quantum entanglement, multiple states of information can be processed simultaneously.
  • the quantum computing layer is a program module containing quantum circuits, which can be used to realize quantum computing corresponding to quantum circuits. By encapsulating the quantum circuits according to certain standards, the quantum computing layer is easy to use when creating and training machine learning models. For the part of the machine learning model realized by quantum computing, it can be understood as the corresponding quantum computing layer.
  • classical computing is a traditional computing mode that follows the laws of classical physics to regulate classical information units for computing. It works through a binary system, that is, information is stored using 1 or 0.
  • the classical computing layer corresponds to the quantum computing layer, which can encapsulate the created classical computing program according to certain standards, making the classical computing layer easy to use when creating and training machine learning models.
  • forward propagation means using the output of the previous computing layer as the input of the next computing layer and calculating the output of the next computing layer, continuing in this way until there is no next computing layer; here a computing layer can be either the above-mentioned classical computing layer or the above-mentioned quantum computing layer.
  • the data structure refers to the set of data elements that have one or more relationships with each other and the relationship between the data elements in the set.
  • Commonly used data structures are: arrays, stacks, linked lists, queues, trees, Graphs, heaps, hash tables, etc.
  • the data structure can be a 0-dimensional scalar structure, such as a single number 0; a 1-dimensional vector structure, such as [1,2,3]; or a 2-dimensional matrix structure, such as [[1,2,3],[4,5,6]]; higher-dimensional data structures can be deduced in turn.
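The scalar, vector, and matrix structures just listed can be written as numpy arrays for illustration (an assumption: the framework's own tensor type may differ, but the dimensionality notion is the same).

```python
import numpy as np

scalar = np.array(0)                        # 0-dimensional scalar structure
vector = np.array([1, 2, 3])                # 1-dimensional vector structure
matrix = np.array([[1, 2, 3], [4, 5, 6]])   # 2-dimensional matrix structure

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
print(matrix.shape)                           # (2, 3)
```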
  • data corresponding to the same data structure may nevertheless be represented differently: for example, both may be arrays, but one representation is [[1,2,3],[4,5,6]] while another is ((1,2,3),(4,5,6)); in some representations [1,2,3] denotes a column, in others a row. Therefore, when data flows between different machine learning framework systems, it must first be converted into a data structure supported by the current machine learning framework system.
  • the classical module includes a classical neural network layer unit, and the classical neural network layer unit includes at least one of the following: a specified-model classical neural network layer subunit, configured to create a classical neural network layer of a specified model through an encapsulated classical neural network layer interface; and an activation layer subunit, configured to create an activation layer for nonlinearly transforming the output of the classical neural network layer;
  • said calling of the classical module to build a classical computing layer includes:
  • the classical module also includes an abstract class submodule, and the calling of the classical module encapsulates the quantum computing layer, the classical computing layer and the forward propagation relationship to obtain a machine learning model, including:
  • the abstract class submodule is called to encapsulate the initialized and encapsulated quantum computing layer and the classical computing layer, as well as the encapsulated forward propagation relationship based on the module class, to obtain a machine learning model.
  • the initialization function is __init__()
  • the forward propagation function is forward()
  • the module class is class Net(Module).
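The `__init__()` / `forward()` / `class Net(Module)` convention described above can be sketched in plain Python. The `Module` base class and the stand-in layers below are assumptions for illustration, not the framework's real API: layers assigned in `__init__()` are wired together by `forward()`, which expresses the forward propagation relationship.

```python
# Minimal Module base class (assumed, mirroring the convention in the text).
class Module:
    def __call__(self, x):
        return self.forward(x)

class Net(Module):
    def __init__(self):
        # assignment of the encapsulated layers; real layers would be the
        # classical computing layer and quantum computing layer
        self.classic_layer = lambda x: 2 * x + 1   # stand-in classical layer
        self.quantum_layer = lambda x: x * x       # stand-in quantum layer

    def forward(self, x):
        # forward propagation: classical layer output feeds the quantum layer
        h = self.classic_layer(x)
        return self.quantum_layer(h)

net = Net()
print(net(3))  # classical: 2*3+1 = 7; quantum stand-in: 7*7 = 49
```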
  • the created classical computing layer and quantum computing layer are as follows:
  • a, b, c, d, e, f, g, h are interface parameters.
  • after the quantum computing layer and the classical computing layer are initialized and encapsulated, the initialized and encapsulated quantum computing layer and classical computing layer are obtained, which can be as follows:
  • the classical computing module includes an assignment function, and the forward propagation relationship between the classical computing layer and the quantum computing layer can be constructed through the assignment function.
  • the forward propagation relationship constructed by the assignment function is as follows:
  • the quantum module includes: a quantum logic gate submodule configured to create, through an encapsulated quantum logic gate interface, a quantum logic gate acting on a qubit; a quantum measurement submodule configured to create quantum measurement operations that act on qubits; and a quantum computing layer submodule configured to create a quantum computing layer for machine learning models through the encapsulated quantum computing layer interface, or through the quantum logic gate submodule and the quantum measurement submodule;
  • said calling of the quantum module to build a quantum computing layer includes:
  • the quantum logic gate sub-module is used to create a quantum logic gate, and the quantum logic gate is used to act on the qubit to make the qubit perform a specific evolution to achieve a specific calculation.
  • the quantum state of the qubit can be represented by a vector containing two elements, such as
  • the quantum logic gate acts on the qubit, which is equivalent to multiplying the unitary matrix corresponding to the quantum logic gate with the current quantum state of the qubit.
  • the unitary matrix corresponding to the Pauli-X gate is [[0,1],[1,0]]; its action on the qubit is equivalent to multiplying this matrix with the current quantum state vector of the qubit, swapping the amplitudes of the two ground states.
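The matrix-vector picture above can be checked directly with numpy (pure simulation, independent of the framework): applying a gate multiplies its unitary matrix with the qubit's current state vector.

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])          # Pauli-X gate unitary matrix
ket0 = np.array([1, 0])         # |0> as a two-element vector
ket1 = X @ ket0                 # X|0> = |1>: amplitudes are swapped
print(ket1)                     # [0 1]

# A unitary matrix preserves the norm of the state: X^dagger X = I
assert np.allclose(X.conj().T @ X, np.eye(2))
```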
  • the quantum logic gate sub-module can call a program for creating a quantum logic gate corresponding to the interface through the quantum logic gate interface, and then run the program to create the quantum logic gate.
  • the machine learning framework system may also include a module for creating quantum logic gates; the module includes a target interface and a target program for creating quantum logic gates in response to calls from the target interface. When quantum computing is invoked, the target interface can be called first, and the target program is then called through the target interface to create a quantum logic gate.
  • the creation of the quantum logic gate can be expressed as creating a corresponding unitary matrix to be multiplied with the vector corresponding to the quantum state of the qubit, or as creating a signal for acting on the qubit so that the qubit performs the corresponding evolution.
  • the quantum logic gate sub-module can be configured to only run on a classical computer or a quantum device of a quantum computer, or have the ability to run on both a classical computer and a quantum device, which is not specifically limited in this disclosure.
  • the quantum logic gate sub-module includes:
  • a quantum state encoding logic gate unit configured to create a logic gate that encodes tensor data created based on input data into a quantum state of a specified qubit in the quantum computing layer
  • the quantum state evolution logic gate unit is configured to create a logic gate that performs evolution corresponding to a target operation on a specified qubit in the quantum computing layer.
  • the logic gate created by the quantum state encoding logic gate unit is used to encode tensor data into a quantum state, representing the mapping of classical data to quantum states; the logic gate created by the quantum state evolution logic gate unit is used to make the specified qubit perform an evolution corresponding to a target operation, characterizing the mapping of one quantum state to another.
  • the quantum measurement submodule includes at least one of the following:
  • the expected measurement unit is configured to measure a specified qubit in the quantum computing layer based on the target observable to obtain a corresponding expected value
  • a probability measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the probability of occurrence of different ground states of the quantum state of the specified qubit
  • the times measuring unit is configured to measure a specified qubit in the quantum computing layer to obtain the number of occurrences of different ground states of the specified qubit's quantum state.
  • the target observable can be a Pauli gate or a combination of Pauli gates, and of course it can also be another Hamiltonian.
  • the expected measurement unit creates, through the corresponding quantum measurement interface, a measurement operation to obtain the expected value of the specified qubit; for example, when the target observable is the Hamiltonian H, the observed expected value is ⟨ψ|H|ψ⟩.
  • the measurement operation created through the quantum measurement interface acts on the specified qubit to obtain the occurrence probabilities and occurrence counts of the different ground states of the resulting quantum state. For example, for a quantum state whose occurrence probabilities of |0> and |1> are 0.25 and 0.75, respectively, the actual measurement results may deviate from these probabilities, since measurement is probabilistic and performed with a finite number of shots.
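The three measurement operations described above (expected value, ground-state probabilities, occurrence counts) can be simulated with numpy; the state and observable below are assumptions chosen so the |0>/|1> probabilities are 0.25 and 0.75, matching the example in the text.

```python
import numpy as np

psi = np.array([0.5, np.sqrt(3) / 2])  # |psi> = 0.5|0> + (sqrt(3)/2)|1>
Z = np.array([[1, 0], [0, -1]])        # target observable: Pauli-Z

# Expected measurement: <psi|Z|psi> = 0.25*(+1) + 0.75*(-1) = -0.5
expectation = psi.conj() @ Z @ psi

# Probability measurement: |amplitude|^2 for each ground state
probs = np.abs(psi) ** 2               # [0.25, 0.75]
print(expectation, probs)

# Count measurement: finite sampling, so counts fluctuate around 250/750
rng = np.random.default_rng(0)
counts = np.bincount(rng.choice(2, size=1000, p=probs), minlength=2)
print(counts.sum())                    # 1000 shots split between |0> and |1>
```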
  • a quantum circuit, that is, a quantum program, usually includes an encoding part for converting classical data into a quantum state, an operation part for performing specific evolution on qubits to achieve quantum computing, and a measurement part for converting the quantum state into classical data; these can be realized by the above-mentioned quantum state encoding logic gate unit, quantum state evolution logic gate unit and quantum measurement submodule, respectively.
  • the interface encapsulates the quantum circuit to obtain the quantum computing layer. Its interface can be a function with specified input parameters.
  • for example, the interface QuantumLayer(pqctest,3,"cpu",4,1) can be used to create a quantum computing layer corresponding to the quantum circuit pqctest, with 3 parameters, 4 qubits and 1 classical bit, running on the CPU (central processing unit).
  • when the quantum computing layer runs on the CPU, quantum computing is simulated by classical computing.
  • the quantum computing layer can be a quantum computing layer containing a self-built quantum circuit or a quantum circuit of a specified model.
  • the quantum circuit corresponding to pqctest can be a quantum circuit of a specified model, such as a quantum convolution layer, or a self-built quantum circuit based on the quantum logic gate submodule and the quantum measurement submodule.
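A purely hypothetical sketch of what an encapsulating interface of the QuantumLayer(pqctest, 3, "cpu", 4, 1) form might do: wrap a user-supplied circuit function together with its parameter count, backend, qubit count and classical-bit count. Both `make_quantum_layer` and the stand-in `pqctest` below are invented for illustration and are not the framework's real implementation.

```python
# Hypothetical encapsulation: package a circuit function into a callable layer.
def make_quantum_layer(circuit_fn, n_params, backend, n_qubits, n_cbits):
    params = [0.0] * n_params              # trainable circuit parameters
    def layer(x):
        # run (or, when backend == "cpu", classically simulate) the circuit
        return circuit_fn(x, params, n_qubits, n_cbits)
    layer.backend = backend
    return layer

def pqctest(x, params, n_qubits, n_cbits):
    # stand-in "circuit": returns a number, as a measurement result would
    return sum(params) + x

qlayer = make_quantum_layer(pqctest, 3, "cpu", 4, 1)
print(qlayer(1.0))  # 1.0 with zero-initialized parameters
```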
  • the quantum computing layer includes at least one of the following: a general quantum computing layer, a compatible quantum computing layer, a noisy quantum computing layer, a quantum convolution layer, and a quantum fully connected layer;
  • the quantum computing layer submodule includes at least one of the following:
  • a universal quantum program encapsulation unit, configured to create the universal quantum computing layer through an encapsulated universal quantum computing layer interface, the universal quantum computing layer interface being used to encapsulate quantum programs created based on the quantum computing programming library of the machine learning framework system;
  • a compatible quantum program encapsulation unit, configured to create the compatible quantum computing layer through the encapsulated compatible quantum computing layer interface, the compatible quantum computing layer interface being used to encapsulate quantum programs created based on a quantum computing programming library that does not belong to the machine learning framework system;
  • a noisy quantum program encapsulation unit, configured to create the noisy quantum computing layer through the encapsulated noisy quantum computing layer interface, the noisy quantum computing layer interface being used to encapsulate noise-containing quantum programs created based on the machine learning framework system;
  • a quantum convolution layer creation unit configured to create the quantum convolution layer through an encapsulated quantum convolution layer interface
  • the quantum fully connected layer creation unit is configured to create the quantum fully connected layer through the encapsulated quantum fully connected layer interface.
  • the universal quantum program encapsulation unit calls the universal quantum computing layer interface to create a universal quantum computing layer.
  • after a quantum program is created based on the quantum computing programming library included in the machine learning framework system, the interface is called to encapsulate the quantum program and obtain the universal quantum computing layer, which makes it convenient, when creating and training a machine learning model running on this machine learning framework system, to call the quantum program and to calculate the gradients of the parameters of the quantum circuit in the machine learning model.
  • the interface QuantumLayer() can be defined to encapsulate the self-built quantum program, and the self-built quantum program can be passed as a parameter into the program corresponding to the interface.
  • the quantum program encapsulated by the interface needs to build a quantum circuit before each run.
  • the interface VQCLayer() can also be defined to encapsulate the quantum program, and the quantum program can be passed as a parameter into the program corresponding to the interface.
  • the quantum program encapsulated by this interface only needs to build the quantum circuit once when running; afterwards, only the parameter values in its quantum program need to be changed.
  • the parameters that need to be passed in to the interface can be defined according to the situation.
  • an interface QuantumLayerV2() with the same function as the above-mentioned QuantumLayer() can also be defined. Unlike QuantumLayer(), QuantumLayerV2() does not need the quantity of qubits required by the encapsulated quantum program to be passed in as a parameter; it can automatically read the quantum program to obtain the required quantity of qubits.
  • the compatible quantum program encapsulation unit calls the compatible quantum computing layer interface to create a compatible quantum computing layer: after a quantum program is created based on a third-party quantum computing programming library, this interface is called to encapsulate the quantum program and obtain a compatible quantum computing layer.
  • the interface Compatiblelayer() can be defined to encapsulate quantum programs created based on third-party quantum computing programming libraries.
  • the quantum computing programming library included in the machine learning framework system refers to a quantum computing programming library on which the machine learning framework system depends, such as Qpanda; a quantum computing programming library not included in the machine learning framework system refers to one different from the library the system depends on, for example Qiskit, which differs from Qpanda.
  • the quantum program considering the influence of noise is the quantum program with noise added by the encapsulation unit of the noise-containing quantum program.
  • the noisy quantum computing layer interface is used to receive quantum programs, created based on the quantum computing programming library included in the machine learning framework system, that simulate various noises in the operating environment of a real quantum computer, such as decoherence noise and bit flip noise.
  • the noisy quantum program encapsulation unit calls the noisy quantum computing layer interface to create a noisy quantum computing layer: through this interface, noise is added to a quantum program created based on the machine learning framework system's own quantum computing programming library, and the noise-added quantum program is encapsulated to obtain the noisy quantum computing layer. This makes it convenient to call the quantum program when creating and training a machine learning model running on this machine learning framework system, and when the quantum program runs on a virtual machine by means of classical computing, its running results on a real quantum computer are simulated.
  • the interface NoiseQuantumLayer() can be defined to add noise to the incoming quantum program and encapsulate it.
  • the quantum program and the type of noise can be passed as parameters to the program corresponding to the interface. It should be noted that both the types of noise and the way noise is added to the quantum program can be implemented using existing technologies.
  • the quantum convolution layer creation unit creates the quantum convolution layer through a defined quantum convolution layer interface such as QConv(), and the quantum fully connected layer creation unit creates the quantum fully connected layer through a defined quantum fully connected layer interface such as Qlinear().
  • the quantum convolutional layer and the quantum fully connected layer can be encapsulated with interfaces that are easy to call, making it convenient to call them when creating and training a machine learning model running on this machine learning framework system, without manually building a quantum fully connected layer or a quantum convolutional layer, thereby improving development efficiency.
  • the quantum convolutional layer and the quantum fully connected layer are quantum-computing-based counterparts of the convolutional layer and the fully connected layer of a classical neural network, respectively; their implementation can adopt existing technology and is not detailed in this disclosure.
  • the data processing method provided by this disclosure is applied to an electronic device running a machine learning framework system that includes the data structure module, the classical module and the quantum module. First, the classical computing layer and the quantum computing layer can be constructed through the same machine learning framework system, so there is no need to create them through two machine learning framework systems, reducing the cumbersome interaction between different machine learning framework systems. Secondly, the computing layers (classical computing layers or quantum computing layers) communicate through tensors with the same data structure, which improves the computing efficiency between the layers and thereby improves both the efficiency with which the classical-quantum hybrid machine learning model processes data and the overall computing performance.
  • the machine learning framework system of the embodiment of the present disclosure may include a data structure module, a quantum module, and a classical module, as shown in FIG. 3 above, for example.
  • the data structure module 31 is configured to obtain input data and create tensor data including the input data
  • the quantum module 32 is configured to create a machine learning model
  • the classical module 33 is configured to create a machine learning model, where the machine learning model includes a plurality of computing layers and a forward propagation relationship between the plurality of computing layers; to determine, from the plurality of computing layers, the first computing layer to be executed corresponding to the tensor data; to create, based on the forward propagation relationship, a computation graph including computation nodes corresponding to the first computing layer; and to determine an output result of the machine learning model based on the computation graph.
  • the above-mentioned quantum module of the embodiment of the present disclosure is further configured to create a quantum computing layer for creating the machine learning model
  • the classical module is also configured to create a classical computing layer for creating the machine learning model layer
  • the data structure module is also configured to construct the forward propagation relationship between the classical computing layer and the quantum computing layer
  • the classical module is also configured to encapsulate the computing layers and the forward propagation relationship to create the machine learning model, the abstract class layer that encapsulates the classical computing layer, and the machine learning model training layer for training and optimizing the machine learning model.
  • the data structure module defines the data structure of tensor data.
  • the input data can be converted into tensor data for input into the machine learning model for forward calculation.
  • the data structure module also defines operations between tensor data, such as mathematical operations and logic operations, etc., and then the data structure module can be called to create a classic computing layer of a machine learning model based on the operational relationship between tensor data, such as a classic neural network.
  • the data structure module performs the operations corresponding to this function on these tensor data, and a fully connected layer can thus be built.
  • the data structure module 31 includes:
  • the tensor creation sub-module 1610 is configured to arrange the input data according to a preset data structure to create tensor data for inputting into the machine learning model, and/or to create tensor data that is arranged in the preset data structure and has determined values for inputting into said machine learning model;
  • the operation operation sub-module 1620 is configured to perform operations on the tensor data.
  • the preset data structure can be a 0-dimensional scalar structure such as the single number 0, a 1-dimensional vector structure such as [1,2,3], or a 2-dimensional matrix structure such as [[1,2,3],[4,5,6]]; higher-dimensional data structures can be deduced in turn. After obtaining the input data, the tensor creation sub-module 1610 can arrange it according to the corresponding data structure. For example, if the obtained input data is 1, 2, 3 and the data structure of the tensor data is specified as a 1-dimensional vector structure through an input parameter, the input data is converted into [1,2,3]. Of course, data that already has a specific data structure can also be input directly and converted into tensor data; for example, if the input data is [1,2,3] with a vector structure, it can be converted into 2-dimensional tensor data with 1, 2, 3 as the diagonal elements.
  • the tensor creation sub-module 1610 can also create tensor data with determined values, for example tensor data whose values are all 1, all 0 or random numbers; the data structure of such tensor data can be the same as that of the input tensor data, or a default data structure can be used.
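As a concrete illustration of the tensor creation sub-module's behaviour described above, the following sketch uses NumPy arrays to stand in for the framework's tensor data; the NumPy representation is an assumption of this example, not the framework's actual implementation.

```python
import numpy as np

# Arrange input data 1, 2, 3 as a 1-dimensional vector structure.
vec = np.array([1, 2, 3])

# Input that already has a vector structure can be converted into
# 2-dimensional tensor data with 1, 2, 3 on the diagonal.
diag = np.diag([1, 2, 3])

# Tensor data with determined values: all ones, all zeros, or random
# numbers, reusing the data structure of the input tensor data.
ones = np.ones_like(vec)
zeros = np.zeros(vec.shape)
rand = np.random.rand(*vec.shape)
```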
  • the operation submodule 1620 includes at least one of the following:
  • a mathematical operation unit 1710 configured to perform mathematical operations on the tensor data
  • Logic operation unit 1720 configured to perform logic operations on the tensor data
  • the data transformation unit 1730 is configured to perform a transformation operation on the tensor data, so as to transform the data structure of the tensor data.
  • the mathematical operation unit 1710 can perform mathematical operations such as addition, subtraction, multiplication, and division on tensor data
  • the logical operation unit 1720 can perform logical operations such as AND, OR, and NOT on tensor data.
  • the data transformation unit 1730 can perform transformation operations such as transposition and reverse order on tensor data to change the data structure of the tensor data.
  • the arithmetic operation sub-module 1620 may include any one of the above three units, or may include all three units as shown in FIG. 17 .
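The three operation units above can be sketched with NumPy standing in for the framework's tensor operations; this is an illustrative assumption, not the actual implementation.

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([[6, 5, 4], [3, 2, 1]])

# Mathematical operation unit: element-wise add/subtract/multiply/divide.
added = a + b            # every element becomes 7

# Logic operation unit: AND / OR / NOT on boolean tensors.
mask = np.logical_and(a > 1, b > 1)

# Data transformation unit: transposition and reverse order change the
# data structure of the tensor data without changing its values.
transposed = a.T         # shape becomes (3, 2)
reversed_ = a[:, ::-1]   # columns in reverse order
```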
  • a quantum computing layer can be created by calling the quantum module 32.
  • the quantum computing layer is a program module containing quantum circuits, which can be used to realize the quantum computing corresponding to the quantum circuits. Through standard encapsulation, the quantum computing layer is easy to use when creating and training machine learning models. The part of the machine learning model realized by quantum computing can be understood as the corresponding quantum computing layer.
  • the quantum module 32 includes:
  • a quantum logic gate sub-module 1810 configured to create quantum logic gates that act on qubits through the encapsulated quantum logic gate interface
  • a quantum measurement sub-module 1820 configured to create quantum measurement operations acting on qubits through the encapsulated quantum measurement interface
  • the quantum computing layer sub-module 1830 is configured to create the quantum computing layer of the machine learning model through the encapsulated quantum computing layer interface, or to create the quantum computing layer of the machine learning model through the quantum logic gate sub-module 1810 and the quantum measurement sub-module 1820.
  • the quantum logic gate sub-module 1810 is used to create a quantum logic gate, and the quantum logic gate is used to act on the qubit so that the qubit performs a specific evolution to achieve a specific calculation; the quantum state of the qubit can be represented by a vector.
  • the quantum logic gate acting on the qubit is equivalent to multiplying the unitary matrix corresponding to the quantum logic gate with the vector of the current quantum state of the qubit.
  • for example, the unitary matrix corresponding to the Pauli X gate is [[0, 1], [1, 0]]; acting on the qubit is equivalent to performing this matrix-vector multiplication, which swaps the amplitudes of the |0> and |1> ground states.
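The matrix-vector view of gate application described above can be checked directly; the NumPy state-vector representation here is only an illustration.

```python
import numpy as np

# Applying a quantum logic gate is matrix-vector multiplication of the
# gate's unitary with the qubit's current state vector.
X = np.array([[0, 1],
              [1, 0]])          # Pauli X gate

ket0 = np.array([1, 0])          # |0>
ket1 = X @ ket0                  # X|0> gives |1>
```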
  • the quantum logic gate sub-module 1810 may invoke a program for creating a quantum logic gate corresponding to the interface through the quantum logic gate interface, and then run the program to create the quantum logic gate.
  • the machine learning framework system 30 may also include a built-in system containing a module for creating quantum logic gates, where the module includes a target interface and a target program that creates a quantum logic gate in response to a call of the target interface; when creating a quantum logic gate, the target interface in the built-in system can be called first, and then the target program is invoked through the target interface to create the quantum logic gate.
  • the creation of the quantum logic gate can be expressed as creating a corresponding unitary matrix for multiplying the vector corresponding to the quantum state of the qubit
  • the creation of the quantum logic gate can also be expressed as creating a signal for acting on the qubit, so that the qubit performs the corresponding evolution.
  • the quantum logic gate sub-module 1810 can be configured to run only on a classical computer or only on the quantum device of a quantum computer, or to be able to run on both, which is not specifically limited in this disclosure.
  • the quantum logic gate sub-module 1810 includes:
  • Quantum state encoding logic gate unit 1910 configured to create a logic gate that encodes tensor data created based on input data into the quantum state of a specified qubit in the quantum computing layer;
  • the quantum state evolution logic gate unit 1920 is configured to create a logic gate that performs evolution corresponding to a target operation on a specified qubit in the quantum computing layer.
  • the logic gate created by the quantum state encoding logic gate unit 1910 is used to encode tensor data into a quantum state, which represents the mapping of classical data to a quantum state; the logic gate created by the quantum state evolution logic gate unit 1920 is used to make a specified qubit perform an evolution corresponding to a target operation, which characterizes the mapping of one quantum state to another.
  • the quantum state encoding logic gate unit 1910 may include at least one of the following:
  • the ground state encoding subunit 2010 is configured to create a logic gate that encodes the tensor data created based on the input data into the first quantum state of the specified qubit in the quantum computing layer, and the ground state of the first quantum state is used to represent the tensor data;
  • the amplitude encoding subunit 2020 is configured to create a logic gate that encodes the tensor data created based on the input data into the second quantum state of the specified qubit in the quantum computing layer, the amplitudes of the ground states of the second quantum state being used to represent said tensor data;
  • the angle encoding subunit 2030 is configured to create a parameter-containing sub-logic gate with tensor data created based on the input data as a parameter, and the parameter-containing sub-logic gate is used to act on a specified qubit in the quantum computing layer, to obtain a third quantum state representing said tensor data;
  • an instantaneous quantum polynomial IQP encoding subunit 2040 configured to create the logic gates of an IQP circuit comprising specified qubits in the quantum computing layer, taking tensor data created based on the input data as parameters; the logic gates of the IQP circuit are used to act on the specified qubits to obtain a fourth quantum state representing said tensor data.
  • the quantum state encoding logic gate unit 1910 may include some of the above subunits, or may include all of the above subunits as shown in FIG. 20 .
  • the ground state is to any quantum state what a basis vector is to any vector. For example, for the quantum state a|0> + b|1>, |0> and |1> are ground states; for the quantum state c|00> + d|11>, |00> and |11> are ground states.
  • the logic gate created by the ground state encoding subunit 2010 is used to make the specified qubit evolve to the first quantum state, no matter what quantum state the specified qubit is in before the evolution; of course, the quantum state of the specified qubit before the evolution can also be fixed.
  • the ground state of the first quantum state can be used to represent the binary code of the tensor data. For example, for the tensor data 5, its binary code is 101; a logic gate is then created to act on the specified qubits so that they evolve to the first quantum state |101>.
  • the specified qubit can be the qubit specified by passing parameters to the ground state encoding subunit 2010, for example, there are 4 qubits respectively No. 0, No. 1, No. 2, and No. 3, and the parameter q[1] is passed in, specifying Qubit No. 1 is the designated qubit for encoding.
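A minimal sketch of the ground-state (basis) encoding described above, with the quantum state represented as a plain NumPy vector (an assumption of this example): the tensor data 5, binary 101, becomes the basis state with amplitude 1 at index 5.

```python
import numpy as np

# Ground-state (basis) encoding: the tensor data 5 has binary code 101,
# so three specified qubits are driven to the basis state |101>, i.e.
# the state vector with amplitude 1 at index 5 (illustrative sketch).
def basis_encode(value, num_qubits):
    state = np.zeros(2 ** num_qubits)
    state[value] = 1.0
    return state

state = basis_encode(5, 3)       # |101> on 3 qubits
```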
  • the logic gate created by the amplitude encoding subunit 2020 is used to act on the specified qubit so that it evolves to the second quantum state, no matter what quantum state the specified qubit is in before the evolution; of course, the quantum state of the specified qubit before the evolution can also be fixed.
  • the corresponding tensor data is represented by configuring the amplitudes of the ground states of the second quantum state. For example, for the tensor data [1,3], after normalization 1 corresponds to 0.25 and 3 corresponds to 0.75; a logic gate is then created to act on the specified qubit so that it evolves to the second quantum state whose ground states represent these normalized values.
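The [1,3] example above can be sketched as follows. Taking the square roots of the sum-normalized values to form unit-norm amplitudes is one possible convention assumed here (not stated in the source), chosen so that measuring the encoded state reproduces the probabilities 0.25 and 0.75.

```python
import numpy as np

# Amplitude encoding of tensor data [1, 3]: sum-normalization gives
# 0.25 and 0.75; square roots yield valid unit-norm amplitudes.
def amplitude_encode(data):
    probs = np.asarray(data, dtype=float)
    probs = probs / probs.sum()        # 1 -> 0.25, 3 -> 0.75
    return np.sqrt(probs)              # amplitudes of |0> and |1>

state = amplitude_encode([1, 3])
probabilities = state ** 2             # recovers [0.25, 0.75]
```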
  • the angle encoding subunit 2030 itself includes a parameter-containing sub-logic gate, and the tensor data is used directly as the parameter of the parameter-containing sub-logic gate, or as its parameter after a functional transformation of the tensor data; the parameter can represent the rotation angle of the quantum state on the Bloch sphere.
  • the parameter-containing quantum logic gate can be any one of the RX rotation gate, RY rotation gate and RZ rotation gate, and the parameter is then the content of the tensor data.
  • the parametric quantum logic gate acts on the specified qubit, so that the specified qubit evolves to the third quantum state, no matter what quantum state the specified qubit is before evolving.
  • the parameter-containing quantum logic gate can be an RZ rotation gate RZ(θ), whose corresponding unitary matrix is [[e^(-iθ/2), 0], [0, e^(iθ/2)]], where i is the imaginary unit and θ is the rotation angle parameter.
  • the third quantum state can be obtained by applying the unitary matrix to the designated qubit.
  • the tensor data x can also be substituted directly for the above θ to obtain the corresponding unitary matrix.
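A sketch of the RZ rotation gate under the standard convention assumed here (conventions differing by a global phase also exist): substituting the tensor data x for the angle θ yields the unitary, which is then applied to the specified qubit's state vector.

```python
import numpy as np

# RZ rotation gate with the tensor data x used as the angle theta.
def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

x = 0.5                                   # tensor data as rotation angle
state = rz(x) @ np.array([1, 0])          # act on |0>

# RZ changes only the phase, so measurement probabilities are unchanged.
probs = np.abs(state) ** 2
```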
  • the instantaneous quantum polynomial IQP encoding subunit 2040 adopts IQP encoding (instantaneous quantum polynomial style encoding): the IQP circuit is obtained by applying the created logic gates to the specified qubits, with the tensor data as the parameters of the IQP circuit; running this IQP circuit encodes the tensor data x into the fourth quantum state, where x is the tensor data, H is the above H gate, and n is the number of specified qubits.
  • R_ZZ represents the RZZ gate, and R_Z represents the RZ gate.
  • S represents the set of qubits acted on by the U_Z(x) logic gate.
  • Fig. 21 takes an IQP circuit containing 4 specified qubits as an example. First, the H gate and the RZ gate are applied in sequence to each specified qubit, and then the RZZ gate 2100 is applied to every two adjacent specified qubits; each RZZ gate 2100 includes a CNOT gate, an RZ gate and another CNOT gate acting in sequence on the specified qubits. It should be noted that in Fig. 21 the other CNOT–RZ–CNOT sequences also constitute RZZ gates, which are not marked for simplicity of illustration. The tensor data can be used as the parameters of the RZ gates next to the H gates in Fig. 21, and the parameters of the other RZ gates can be set according to the specific situation.
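The CNOT–RZ–CNOT structure of the RZZ gate 2100 described above can be verified numerically. The RZZ definition exp(-iθ/2·Z⊗Z) used here is the standard one and is an assumption about the convention in the figure.

```python
import numpy as np

# RZ gate under the standard convention.
def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

# CNOT with qubit 0 as control and qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

theta = 0.7
# RZZ(theta) = exp(-i*theta/2 * Z(x)Z) is diagonal; Z(x)Z has
# eigenvalues [1, -1, -1, 1] on the computational basis.
rzz = np.diag(np.exp(-1j * theta / 2 * np.array([1, -1, -1, 1])))

# The CNOT - RZ(on target) - CNOT sequence reproduces RZZ(theta).
decomposed = CNOT @ np.kron(np.eye(2), rz(theta)) @ CNOT
```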
  • through the above subunits, the quantum state encoding logic gate unit 1910 can quickly and conveniently encode classical data into the corresponding quantum state, improving the development efficiency of machine learning models.
  • the quantum state evolution logic gate unit 1920 includes at least one of the following:
  • the basic quantum logic gate subunit 2210 is configured to create single-bit quantum logic gates or multi-bit quantum logic gates acting on specified qubits in the quantum computing layer;
  • the common quantum logic gate subunit 2220 is configured to create a common logic gate through the packaged common logic gate interface, and the common logic gate includes a combination of quantum logic gates corresponding to the basic quantum logic gate unit.
  • the single-bit quantum logic gates may include at least one of the H gate, Pauli X gate, Pauli Y gate, Pauli Z gate, RX rotation gate, RY rotation gate and RZ rotation gate, and the multi-bit quantum logic gates may include at least one of the CNOT gate, CR gate and iSWAP gate acting on two qubits and the Toffoli gate acting on three qubits.
  • of course, other single-bit quantum logic gates or multi-bit quantum logic gates may also be included, which is not specifically limited in this application.
  • the commonly used quantum logic gate subunit 2220 encapsulates a combination of at least two single-bit quantum logic gates or at least two multi-bit quantum logic gates to obtain a common logic gate interface for calling the combination, so that commonly used gate combinations can be called quickly. For example, a common logic gate interface Rot() can be defined to call the combination of the RX rotation gate, RY rotation gate and RZ rotation gate. It should be noted that the commonly used quantum logic gate subunit 2220 may also encapsulate at least one single-bit quantum logic gate or at least one multi-bit quantum logic gate, which is not specifically limited in this disclosure.
  • the quantum state evolution logic gate unit 1920 may include any of the aforementioned subunits or both of the aforementioned two subunits as shown in FIG. 22 .
  • the quantum measurement submodule 1820 includes at least one of the following:
  • the expected measurement unit 2310 is configured to measure a specified qubit in the quantum computing layer based on the target observable to obtain a corresponding expected value
  • the probability measurement unit 2320 is configured to measure the specified qubit in the quantum computing layer to obtain the probability of occurrence of different ground states of the quantum state of the specified qubit;
  • the times measuring unit 2330 is configured to measure the specified qubit in the quantum computing layer to obtain the occurrence times of different ground states of the quantum state of the specified qubit.
  • the target observable quantity can be a Pauli gate or a combination of Pauli gates, and of course it can also be other Hamiltonian quantities.
  • the expected measurement unit 2310 creates, through the corresponding quantum measurement interface, the measurement operation used to obtain the expected value of the specified qubit.
  • for example, when the target observable quantity is the Hamiltonian H, the observed expected value is <ψ|H|ψ>, where |ψ> is the quantum state of the specified qubit.
  • the measurement operation created by the quantum measurement interface called by it acts on the specified qubit to obtain the occurrence probabilities and occurrence counts of the different ground states of the resulting quantum state. For example, for a quantum state whose ground states are |0> and |1>, the theoretical occurrence probabilities of |0> and |1> may be 0.25 and 0.75 respectively.
  • because measurement results are probabilistic, the actually measured occurrence probabilities of |0> and |1> may only approximate 0.25 and 0.75, and the actually measured occurrence counts approximate these probabilities multiplied by the total number of measurements.
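The three measurement styles above can be sketched for a state whose ground-state probabilities are 0.25 and 0.75; the concrete amplitudes, the shot count of 1000, and the Pauli Z observable are assumptions of this example.

```python
import numpy as np

state = np.array([0.5, np.sqrt(3) / 2])   # amplitudes of |0> and |1>

# Probability measurement: squared amplitudes give 0.25 and 0.75.
probs = np.abs(state) ** 2

# Count measurement: a finite number of shots, so observed counts only
# approximate the theoretical probabilities.
rng = np.random.default_rng(0)
shots = rng.choice([0, 1], size=1000, p=probs)
counts = np.bincount(shots, minlength=2)

# Expectation measurement with observable Pauli Z: <Z> = P(0) - P(1),
# close to -0.5 here.
expectation_z = probs[0] - probs[1]
```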
  • the quantum measurement sub-module 1820 may include any of the aforementioned units, or include all three units as shown in FIG. 23 .
  • a quantum circuit, that is, a quantum program, usually includes an encoding part for converting classical data into a quantum state, an operation part for performing specific evolution on qubits to achieve quantum computing, and a measurement part for converting the quantum state into classical data; these can be realized by the above-mentioned quantum state encoding logic gate unit 1910, quantum state evolution logic gate unit 1920 and quantum measurement sub-module 1820 respectively.
  • the different interfaces that have been encapsulated can encapsulate the quantum circuit to obtain the quantum computing layer.
  • the interface can be a function with specified input parameters.
  • for example, the interface QuantumLayer(pqctest,3,"cpu",4,1) can be used to create a quantum computing layer containing the quantum circuit corresponding to pqctest, with 3 parameters, 4 qubits and 1 classical bit, running on the CPU (central processing unit).
  • when the quantum computing layer runs on the CPU, quantum computing is simulated through classical computing.
  • the quantum computing layer can be a quantum computing layer that includes a self-built quantum circuit or a quantum circuit of a specified model.
  • the quantum circuit corresponding to pqctest can be a quantum circuit of a specified model, such as a quantum convolution layer, or a quantum circuit self-built based on the quantum logic gate sub-module 1810 and the quantum measurement sub-module 1820.
  • the quantum computing layer includes at least one of the following: a general quantum computing layer, a compatible quantum computing layer, a noisy quantum computing layer, a quantum convolution layer, and a quantum fully connected layer.
  • the quantum computing layer submodule 1830 includes at least one of the following:
  • the universal quantum program encapsulation unit 2410 is configured to create the universal quantum computing layer through the encapsulated universal quantum computing layer interface, which is used to receive quantum programs created based on the quantum computing programming library included in the machine learning framework system 30;
  • the compatible quantum program encapsulation unit 2420 is configured to create the compatible quantum computing layer through the encapsulated compatible quantum computing layer interface, which is used to receive quantum programs created based on a quantum computing programming library not included in the machine learning framework system 30;
  • the noisy quantum program encapsulation unit 2430 is configured to create the noisy quantum computing layer through the encapsulated noisy quantum computing layer interface, which is used to receive quantum programs, created based on the quantum computing programming library included in the machine learning framework system 30, that consider the influence of noise;
  • a quantum convolution layer creation unit 2440 configured to create the quantum convolution layer through a packaged quantum convolution layer interface
  • the quantum fully connected layer creating unit 2450 is configured to create the quantum fully connected layer through the encapsulated quantum fully connected layer interface.
  • the universal quantum program encapsulation unit 2410 calls the universal quantum computing layer interface to create a universal quantum computing layer: after a quantum program is created based on the quantum computing programming library included in the machine learning framework system 30, the interface is called to encapsulate the quantum program and obtain a universal quantum computing layer, which makes it convenient, when creating and training a machine learning model running on this machine learning framework system 30, to call the quantum program and to calculate the gradients of the parameters of the quantum circuit in the machine learning model.
  • the interface QuantumLayer() can be defined to encapsulate the self-built quantum program, and the self-built quantum program can be passed as a parameter into the program corresponding to the interface.
  • the quantum program encapsulated by the interface needs to build a quantum circuit before each run.
  • the interface VQCLayer() can also be defined to encapsulate the quantum program, and the quantum program can be passed as a parameter into the program corresponding to the interface; the quantum program encapsulated by this interface only needs to build the quantum circuit once when running, after which only the parameter values in its quantum program need to be changed.
  • the parameters that need to be passed in to the interface can be defined according to the situation.
  • an interface QuantumLayerV2() with the same function as the above-mentioned QuantumLayer() can also be defined. QuantumLayer() requires the quantity of qubits needed to run the encapsulated quantum program to be passed in as a parameter, while QuantumLayerV2() does not; it can automatically read the quantum program to obtain the required quantity of qubits.
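A hypothetical sketch of the difference between the two interfaces: a wrapper that, like QuantumLayerV2(), infers the number of qubits by reading the quantum program instead of requiring it as a parameter. The gate-list representation of a quantum program, the class name and its constructor are all assumptions for illustration, not the framework's actual API.

```python
class QuantumLayerV2Sketch:
    """Hypothetical layer that reads the qubit count from the program."""

    def __init__(self, quantum_program):
        # quantum_program: list of (gate_name, qubit_indices) tuples,
        # an assumed representation of the encapsulated quantum program.
        self.program = quantum_program
        used = {q for _, qubits in quantum_program for q in qubits}
        # Inferred from the program rather than passed in as a parameter.
        self.num_qubits = max(used) + 1

pqc = [("H", (0,)), ("CNOT", (0, 1)), ("RZ", (3,))]
layer = QuantumLayerV2Sketch(pqc)
```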
  • the compatible quantum program encapsulation unit 2420 calls the compatible quantum computing layer interface to create a compatible quantum computing layer: after a quantum program is created based on a third-party quantum computing programming library, this interface is called to encapsulate the quantum program and obtain a compatible quantum computing layer.
  • the interface Compatiblelayer() can be defined to encapsulate quantum programs created based on third-party quantum computing programming libraries.
  • the quantum computing programming library contained in the machine learning framework system 30 refers to the quantum computing programming library, such as Qpanda, on which the machine learning framework system 30 depends. A third-party quantum computing programming library refers to a quantum computing programming library different from the one the machine learning framework system 30 depends on, for example Qiskit as opposed to Qpanda.
  • a quantum program that considers the influence of noise is a quantum program to which noise has been added by the noise-containing quantum program packaging unit 2430. The noise-containing quantum computing layer interface is used to encapsulate quantum programs created based on the quantum computing programming library of the machine learning framework system 30, simulating the various noises in the operating environment of real quantum computers, such as decoherence noise and bit-flip noise.
  • the noise-containing quantum program encapsulation unit 2430 calls the noise-containing quantum computing layer interface to create a noise-containing quantum computing layer: the quantum program created based on the quantum computing programming library contained in the machine learning framework system 30 and the noise to be simulated are provided through the interface, the noise is added to the quantum program, and the noise-added quantum program is encapsulated to obtain a noise-containing quantum computing layer, which makes it easy to call the quantum program when creating and training a machine learning model running on the machine learning framework system 30.
  • when the quantum program runs on a virtual machine in the manner of classical computing, the added noise simulates its running results on a real quantum computer.
  • the interface NoiseQuantumLayer() can be defined to add noise to the incoming quantum program and encapsulate it.
  • the quantum program and the type of noise can be passed as parameters to the program corresponding to the interface. It should be noted that both the types of noise and the way noise is added to the quantum program can be implemented using existing technologies.
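For illustration, adding bit-flip noise to a classically simulated quantum program can be sketched as follows. The names with_bit_flip_noise and ideal_program are hypothetical, and a real implementation would insert noise channels into the circuit rather than flipping the returned measurement bits:

```python
import random


def with_bit_flip_noise(quantum_program, p, rng=None):
    """Hypothetical NoiseQuantumLayer()-style wrapper: returns a new program
    whose measured bits are flipped with probability p, emulating bit-flip
    noise in a classical simulation of the circuit."""
    rng = rng or random.Random(0)

    def noisy_program(params):
        bits = quantum_program(params)  # ideal measurement outcomes
        # Flip each classical bit independently with probability p.
        return [b ^ 1 if rng.random() < p else b for b in bits]

    return noisy_program


def ideal_program(params):
    # Toy fixed measurement result standing in for a simulated circuit.
    return [0, 1, 0]


noisy = with_bit_flip_noise(ideal_program, p=1.0)  # p=1: every bit flips
```

Decoherence noise would be modeled differently (acting on the simulated quantum state rather than the output bits), but the encapsulation pattern, wrapping an existing program and returning a noise-aware one, is the same.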
  • the quantum convolution layer creation unit 2440 creates a quantum convolution layer through a defined quantum convolution layer interface such as QConv(), and the quantum fully connected layer creation unit 2450 creates a quantum fully connected layer through a defined quantum fully connected layer interface such as Qlinear(). The quantum convolutional layer and the quantum fully connected layer can each be encapsulated behind an interface that is easy to call, so that when creating and training a machine learning model running on the machine learning framework system 30, it is convenient to call the quantum convolutional layer or the quantum fully connected layer without manually building one, which improves development efficiency.
  • the quantum convolutional layer and the quantum fully connected layer are the quantum-computing-based counterparts of the convolutional layer and the fully connected layer of a classical neural network, respectively; their implementation can adopt existing technology, and the specific implementation is not described in detail in this disclosure.
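A minimal sketch of what a Qlinear()-style quantum fully connected layer could compute, simulated entirely classically. The function qlinear_sketch and the specific angle-encoding scheme are assumptions for illustration, not the disclosed implementation:

```python
import numpy as np


def qlinear_sketch(x, theta):
    """Classically simulated quantum fully connected layer sketch.

    Each input feature x_i is angle-encoded on a qubit, a trainable rotation
    theta[j, i] is applied, and the Pauli-Z expectation cos(x_i + theta[j, i])
    is measured; output j averages these expectations over the inputs.
    """
    x = np.asarray(x, dtype=float)
    theta = np.asarray(theta, dtype=float)  # shape (out_features, in_features)
    return np.cos(x[None, :] + theta).mean(axis=1)


# Two input features mapped to three outputs; zero angles give expectation 1.
out = qlinear_sketch([0.0, 0.0], np.zeros((3, 2)))
```

The point of the sketch is only the shape contract: like a classical fully connected layer it maps an input vector to an output vector through trainable parameters, but each output is obtained from a measurement expectation rather than a weighted sum.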
  • the quantum computing layer sub-module 1830 may include any one or some of the aforementioned units, or include all of the units as shown in FIG. 24 .
  • the classical module 33 includes:
  • the classic computing layer sub-module 2510 is configured to create a classic computing layer for creating a machine learning model
  • the abstract class submodule 2520 is configured to create an abstract class layer for encapsulating the quantum computing layer and the classical computing layer;
  • the machine learning model training layer sub-module 2530 is configured to create a machine learning model training layer for training and optimizing the machine learning model.
  • the classical computing layer is the classical computing part in the machine learning model, which can be created through the classical computing layer sub-module 2510, which can encapsulate the created classic computing program according to certain standards, so that the classical computing layer is convenient Used when creating and training machine learning models.
  • the quantum computing layer and the classical computing layer can be encapsulated by the abstract class sub-module 2520 to create an abstract class layer that meets certain standards.
  • the abstract class layer is implemented by the method of a class in a programming language.
  • by encapsulating the quantum computing layer and the classical computing layer, a machine learning model that meets certain standards can be created, which can not only obtain the calculation result used to compute the loss function, but also obtain the sequential relationship used for gradient calculation during reverse computation.
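The role of the abstract class layer, recording the forward execution order so that the reverse order is available for gradient calculation, can be sketched as follows. ModuleSketch is a hypothetical name; a real module class would also manage parameters and sub-layer registration:

```python
class ModuleSketch:
    """Hypothetical abstract-class layer sketch: runs sub-layers in forward
    order while recording that order, so gradients can later be computed in
    the reverse of the recorded order during backpropagation."""

    def __init__(self):
        self.tape = []  # forward execution order of the computing layers

    def run(self, layers, x):
        for layer in layers:
            x = layer(x)          # execute the computing layer
            self.tape.append(layer)  # remember the order it ran in
        return x

    def backward_order(self):
        # Gradient calculation visits layers in reverse forward order.
        return list(reversed(self.tape))


double = lambda v: 2 * v
inc = lambda v: v + 1

m = ModuleSketch()
y = m.run([double, inc], 3)   # (3 * 2) + 1 = 7
order = m.backward_order()    # [inc, double]
```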
  • the machine learning model training layer sub-module 2530 is used to create a machine learning model training layer to train the machine learning model.
  • the classical computing layer includes a classical neural network layer, the classical computing layer submodule 2510 includes a classical neural network layer unit 2610, and the classical neural network layer unit 2610 includes at least one of the following:
  • the specified model classic neural network layer subunit 2710 is configured to create the classic neural network layer of the specified model through the packaged classic neural network layer interface;
  • the activation layer subunit 2720 is configured to create an activation layer for nonlinearly transforming the output of the classical neural network layer.
  • the classical computing layer can be a classical neural network layer including a classical neural network computing program; the classical computing layer submodule 2510 can then include a classical neural network layer unit 2610 for creating a classical neural network layer, and this unit can encapsulate the classical neural network computing program with a self-built model or a specified model.
  • when the classical neural network layer unit 2610 includes the specified-model classical neural network layer subunit 2710, it can create a classical neural network layer of a specified model, such as a classical convolutional layer, a pooling layer, a normalization layer, a random dropout layer, a fully connected layer, or an embedding layer.
  • when the classical neural network layer unit 2610 includes the activation layer subunit 2720, it can perform a nonlinear transformation on the output of the computing nodes included in the classical neural network layer by creating an activation layer.
  • the activation function applied to the output of a computing node may include at least one of a tanh activation function, a sigmoid activation function, and a softmax activation function; other activation functions may also be included, which is not specifically limited in the present disclosure.
  • the classical neural network layer unit 2610 may include any one of the aforementioned subunits, or may include the aforementioned two subunits at the same time.
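For reference, the three named activation functions can be written as follows (a standard NumPy sketch, not the framework's own code):

```python
import numpy as np


def tanh(x):
    """Hyperbolic tangent activation, output in (-1, 1)."""
    return np.tanh(x)


def sigmoid(x):
    """Logistic sigmoid activation, output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))


def softmax(x):
    """Softmax activation: nonnegative outputs summing to 1."""
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()
```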
  • the machine learning model training layer submodule may also include:
  • a loss function unit configured to calculate a loss function of the machine learning model
  • An optimizer unit configured to update parameters of the machine learning model based on the loss function when training the machine learning model, so as to optimize the machine learning model.
  • the loss function unit is used to calculate the loss function of the machine learning model.
  • the squared difference between the forward operation result of the machine learning model and the label data can be calculated as the loss function, and the Binary Cross Entropy between the forward operation result and the label data can also be calculated as the loss function.
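Both loss functions can be sketched as follows (a plain NumPy illustration; the framework's actual loss function unit may differ):

```python
import numpy as np


def mse_loss(pred, target):
    """Mean squared difference between forward output and label data."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return np.mean((pred - target) ** 2)


def binary_cross_entropy(pred, target, eps=1e-12):
    """Binary cross entropy between forward output (probabilities) and labels."""
    pred = np.clip(np.asarray(pred, float), eps, 1 - eps)  # avoid log(0)
    target = np.asarray(target, float)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
```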
  • the optimizer unit is used to update the parameters of the machine learning model by using the gradient descent algorithm according to the gradient of the loss function relative to the parameters of the machine learning model to optimize it.
  • the gradient descent algorithm used by the optimizer can be any one of Stochastic Gradient Descent (SGD), the Adaptive Gradient Algorithm (Adagrad), and Adaptive Moment Estimation (Adam).
  • other algorithms can also be used to update the parameters of the machine learning model.
  • the present disclosure does not specifically limit which types of loss functions the loss function unit can calculate and which method the optimizer unit uses to update the parameters.
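A minimal sketch of two of the named update rules, one plain SGD step and one Adagrad step (single-step NumPy illustration; the optimizer unit's actual implementation may differ):

```python
import numpy as np


def sgd_step(params, grads, lr=0.1):
    """One Stochastic Gradient Descent update: move against the gradient."""
    return params - lr * grads


def adagrad_step(params, grads, cache, lr=0.1, eps=1e-8):
    """One Adagrad update: per-parameter learning rate scaled by the
    accumulated squared gradients in `cache`."""
    cache = cache + grads ** 2
    return params - lr * grads / (np.sqrt(cache) + eps), cache
```

Adam additionally keeps an exponential moving average of the gradients and of their squares, but follows the same pattern of carrying optimizer state between steps, as Adagrad does with `cache` here.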
  • the quantum module and the classical module can be called to create, respectively, the quantum computing layer and the classical computing layer of the machine learning model, and the abstract class layer can be created through the classical module to encapsulate the quantum computing layer and the classical computing layer, forming a machine learning model that includes both layers and is convenient to train.
  • when the machine learning model needs to be trained, the classical module can be called to create the machine learning model training layer to train the formed machine learning model; the corresponding modules can then be called directly when creating and training the machine learning model, which reduces the workload required to create and train machine learning models and increases the efficiency of their development.
  • the machine learning framework system can create and train a pure quantum machine learning model, a pure classical machine learning model, or a hybrid machine learning model on different hardware (a quantum computer or a classical computer).
  • Fig. 28 is another block diagram of a machine learning framework system according to an exemplary embodiment, which integrates the above-mentioned modules, sub-modules, units, and sub-units; the above technical solutions can also be explained with reference to the machine learning framework system shown in Fig. 28.
  • FIG. 29 is a block diagram of a data processing device according to an exemplary embodiment, which can be applied to an electronic device including a machine learning framework system 30 as shown in FIG. 3 , and the machine learning framework system 30 includes a data structure module 31 , quantum module 32 and classical module 33, described device 2900 comprises:
  • the first creation module 2910 is used to call the data structure module to obtain input data and create tensor data including the input data, call the quantum module and the classical module to create a machine learning model, and the machine learning model includes A plurality of computing layers and a forward propagation relationship among the plurality of computing layers;
  • a determination module 2920 configured to determine the first calculation layer to be executed corresponding to the tensor data from the plurality of calculation layers
  • the second creation module 2930 is configured to create a calculation graph including calculation nodes corresponding to the first calculation layer based on the forward propagation relationship;
  • An output module 2940 configured to determine an output result of the machine learning model based on the calculation graph.
  • the second creating module 2930 is also used for:
  • the sub-computation graph corresponding to the first calculation layer is added to the calculation graph corresponding to the second calculation layer to obtain a new calculation graph.
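The incremental graph construction described above can be sketched as follows. ComputationGraphSketch is a hypothetical name, and each computing layer is reduced to a single node for illustration; in the disclosed system a layer contributes a whole sub-computation graph:

```python
class ComputationGraphSketch:
    """Hypothetical sketch of dynamic graph build-up: as each computing layer
    executes, its sub-graph (here a single node) is appended to the graph of
    the layers already run, yielding a new, larger computation graph."""

    def __init__(self):
        self.nodes = []
        self.edges = []  # (from_index, to_index) forward-propagation edges

    def add_layer(self, name):
        self.nodes.append(name)
        if len(self.nodes) > 1:
            # Connect the newly executed layer to the previous one, following
            # the forward propagation relationship.
            self.edges.append((len(self.nodes) - 2, len(self.nodes) - 1))


g = ComputationGraphSketch()
for layer in ["encoder", "quantum_layer", "dense"]:
    g.add_layer(layer)
```

Because the graph is assembled while the layers actually run, only the layers that executed appear in it, which is what makes a model containing a quantum program easier to debug step by step.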
  • the device 2900 further includes:
  • a third creating module configured to create the computation graph corresponding to the first computation layer when there is no unexecuted second computation layer associated with the first computation layer.
  • the second creating module 2930 is also used for:
  • the output module 2940 is also used for:
  • An output result of the machine learning model is determined based on the output of the first computing layer.
  • the device 2900 further includes:
  • the fourth creation module is used to call the classic module to create the training layer of the machine learning model
  • An input module configured to input the output result of the machine learning model into the training layer, so as to add the corresponding sub-computation graph of the training layer to the calculation graph based on the relationship between the training layer and the machine learning model;
  • An update module configured to update the parameters of the machine learning model based on the computation graph, to obtain the trained machine learning model.
  • the training layer includes a loss function layer and an optimizer layer
  • the classic module 33 includes:
  • a loss function unit 331 configured to calculate a loss function of the machine learning model
  • An optimizer unit 332 configured to update parameters of the machine learning model based on the loss function when training the machine learning model, so as to optimize the machine learning model
  • the fourth creation module is also used for:
  • the optimizer unit is invoked to create the optimizer layer.
  • the input module is also used for:
  • the update module is also used to:
  • the machine learning model after updating the parameters is used as the machine learning model after training.
  • the update module is also used for:
  • the update module is also used for:
  • the above-mentioned device 2900 of the present disclosure may further include a construction unit, an encapsulation unit, and a processing unit; see FIG. 30, which is another block diagram of a data processing device according to an exemplary embodiment.
  • the device includes:
  • a construction unit 3010 configured to call the quantum module to build a quantum computing layer, call the classical module to build a classical computing layer, and call the data structure module to build a forward link between the classical computing layer and the quantum computing layer communication relationship;
  • An encapsulation unit 3020 configured to call the classical module to encapsulate the quantum computing layer, the classical computing layer, and the forward propagation relationship to obtain a machine learning model, the classical computing layer, the quantum computing layer, and the forward propagation relationship together forming the data structure of the machine learning model;
  • the processing unit 3030 is configured to call the machine learning model for data processing.
  • the classical module includes a classical neural network layer unit, and the classical neural network layer unit includes at least one of the following: a specified-model classical neural network layer subunit configured to create a classical neural network layer of the specified model through an encapsulated classical neural network layer interface; and an activation layer subunit configured to create an activation layer for nonlinearly transforming the output of the classical neural network layer;
  • the construction unit 3010 is specifically used for:
  • the classical module further includes an abstract class submodule.
  • the packaging unit 3020 is specifically used for:
  • the abstract class submodule is called to encapsulate the initialized and encapsulated quantum computing layer and the classical computing layer, as well as the encapsulated forward propagation relationship based on the module class, to obtain a machine learning model.
  • the quantum module includes: a quantum logic gate sub-module configured to create a quantum logic gate acting on a qubit through a packaged quantum logic gate interface; a quantum measurement sub-module configured to create quantum measurement operations that act on qubits; and a quantum computing layer sub-module configured to create a quantum computing layer for machine learning models through the encapsulated quantum computing layer interface, or through the quantum logic gate sub-module and the quantum measurement sub-module;
  • the construction unit 3010 is specifically used for:
  • the quantum computing layer includes at least one of the following: a general quantum computing layer, a compatible quantum computing layer, a noisy quantum computing layer, a quantum convolution layer, and a quantum fully connected layer;
  • the quantum computing layer submodule includes at least one of the following:
  • a universal quantum program encapsulation unit configured to create the universal quantum computing layer through an encapsulated universal quantum computing layer interface, the universal quantum computing layer interface being used to encapsulate quantum programs created based on the quantum computing programming library of the machine learning framework system;
  • a compatible quantum program encapsulation unit configured to create the compatible quantum computing layer through the encapsulated compatible quantum computing layer interface, the compatible quantum computing layer interface being used to encapsulate quantum programs created based on a quantum computing programming library other than that of the machine learning framework system;
  • a noisy quantum program packaging unit configured to create the noisy quantum computing layer through the encapsulated noisy quantum computing layer interface, the noisy quantum computing layer interface being used to encapsulate, with noise added, quantum programs created based on the quantum computing programming library of the machine learning framework system;
  • a quantum convolution layer creation unit configured to create the quantum convolution layer through an encapsulated quantum convolution layer interface
  • the quantum fully connected layer creation unit is configured to create the quantum fully connected layer through the encapsulated quantum fully connected layer interface.
  • the quantum logic gate sub-module includes:
  • a quantum state encoding logic gate unit configured to create a logic gate that encodes tensor data created based on input data into a quantum state of a specified qubit in the quantum computing layer
  • the quantum state evolution logic gate unit is configured to create a logic gate that performs evolution corresponding to a target operation on a specified qubit in the quantum computing layer.
  • the quantum measurement submodule includes at least one of the following:
  • the expected measurement unit is configured to measure a specified qubit in the quantum computing layer based on the target observable to obtain a corresponding expected value
  • a probability measurement unit configured to measure the specified qubit in the quantum computing layer to obtain the probability of occurrence of different ground states of the quantum state of the specified qubit
  • the times measuring unit is configured to measure a specified qubit in the quantum computing layer to obtain the number of occurrences of different ground states of the specified qubit's quantum state.
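The three measurement operations, expectation, probability, and counts, can be sketched for a single qubit simulated as a state vector (a classical illustration with hypothetical function names, not the disclosed measurement submodule):

```python
import numpy as np


def probabilities(state):
    """Probability of occurrence of each computational ground state,
    obtained from the amplitudes of a simulated state vector."""
    state = np.asarray(state, complex)
    return np.abs(state) ** 2


def expectation_z(state):
    """Expectation value of Pauli-Z on a single qubit: P(|0>) - P(|1>)."""
    p = probabilities(state)
    return p[0] - p[1]


def counts(state, shots, seed=0):
    """Number of occurrences of each ground state over repeated measurement."""
    p = probabilities(state)
    rng = np.random.default_rng(seed)
    outcomes = rng.choice(len(p), size=shots, p=p / p.sum())
    return {i: int((outcomes == i).sum()) for i in range(len(p))}


plus = np.array([1, 1]) / np.sqrt(2)  # |+> state: equal superposition
```

On real hardware only counts are directly observable; probabilities and expectations are estimated from them, whereas a simulator can read them off the state vector as above.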
  • the data processing method provided by this disclosure is applied to an electronic device including a machine learning framework system with the data structure module, the classical module, and the quantum module. First, the classical computing layer and the quantum computing layer can be constructed through the same machine learning framework system, so there is no need to create them through two machine learning framework systems, which reduces the cumbersome interaction between different machine learning framework systems. Secondly, the computing layers (classical or quantum) communicate via tensors with the same data structure, which improves the computing efficiency between the computing layers, thereby improving the efficiency with which the classical-quantum hybrid machine learning model processes data as well as its overall computing performance.
  • the foregoing electronic device may include a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the data processing method described in the above-mentioned accompanying drawings 1-15.
  • An embodiment of the present disclosure also provides a storage medium, in which a computer program is stored, wherein the computer program is configured to execute the data processing method described in the above-mentioned figures 1 to 15 when running.
  • the term “if” may be interpreted as “when” or “once” or “in response to determining” or “in response to detecting” depending on the context.
  • the phrase "if determined" or "if [the described condition or event] is detected" may be construed, depending on the context, to mean "once determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Provided are a data processing method, a machine learning framework system and a related device, the method comprising: calling a data structure module to obtain input data and create tensor data comprising the input data, and calling a quantum module and a classical module to create a machine learning model, the machine learning model comprising a plurality of computing layers and a forward propagation relationship among the plurality of computing layers; determining, from the plurality of computing layers, a first computing layer to be executed that corresponds to the tensor data; creating, based on the forward propagation relationship, a computation graph comprising a sub-computation graph corresponding to the first computing layer; and determining an output result of the machine learning model based on the computation graph. Using the present solution reduces the difficulty of debugging a machine learning model containing a quantum program and improves development efficiency.
PCT/CN2022/143598 2021-12-30 2022-12-29 Procédé de traitement de données, système de cadre d'apprentissage automatique et dispositif associé WO2023125858A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202111680614.5 2021-12-30
CN202111680572.5 2021-12-30
CN202111680614.5A CN116432764A (zh) 2021-12-30 2021-12-30 Machine learning framework
CN202111680572.5A CN116432721A (zh) 2021-12-30 2021-12-30 Data processing method, machine learning framework and related device
CN202210083468.6A CN116523059A (zh) 2022-01-24 2022-01-24 Data processing method, machine learning framework and related device
CN202210083468.6 2022-01-24

Publications (1)

Publication Number Publication Date
WO2023125858A1 true WO2023125858A1 (fr) 2023-07-06

Family

ID=86998058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/143598 WO2023125858A1 (fr) 2021-12-30 2022-12-29 Procédé de traitement de données, système de cadre d'apprentissage automatique et dispositif associé

Country Status (1)

Country Link
WO (1) WO2023125858A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118014094A (zh) * 2024-04-09 2024-05-10 国开启科量子技术(安徽)有限公司 确定函数分类的量子计算方法、量子线路、设备及介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200272515A1 (en) * 2020-05-08 2020-08-27 Intel Corporation Techniques to generate execution schedules from neural network computation graphs
CN112270403A (zh) * 2020-11-10 2021-01-26 北京百度网讯科技有限公司 构建深度学习的网络模型的方法、装置、设备和存储介质
CN112529206A (zh) * 2019-09-18 2021-03-19 华为技术有限公司 一种模型运行方法和系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529206A (zh) * 2019-09-18 2021-03-19 华为技术有限公司 一种模型运行方法和系统
US20200272515A1 (en) * 2020-05-08 2020-08-27 Intel Corporation Techniques to generate execution schedules from neural network computation graphs
CN112270403A (zh) * 2020-11-10 2021-01-26 北京百度网讯科技有限公司 构建深度学习的网络模型的方法、装置、设备和存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118014094A (zh) * 2024-04-09 2024-05-10 国开启科量子技术(安徽)有限公司 确定函数分类的量子计算方法、量子线路、设备及介质

Similar Documents

Publication Publication Date Title
CN113850389B (zh) Method and device for constructing a quantum circuit
CN114358319B (zh) Classification method based on a machine learning framework and related device
Stornaiuolo et al. On how to efficiently implement deep learning algorithms on pynq platform
WO2023125858A1 (fr) Data processing method, machine learning framework system and related device
CN112232513A (zh) Method and device for preparing a quantum state
CN114358295B (zh) Binary classification method based on a machine learning framework and related device
CN114358317B (zh) Data classification method based on a machine learning framework and related equipment
CN114050975B (zh) Heterogeneous multi-node interconnection topology generation method and storage medium
CN114358216B (zh) Quantum clustering method based on a machine learning framework and related device
CN114358318B (zh) Classification method based on a machine learning framework and related device
US20240160977A1 (en) Quantum circuit compilation method, device, compilation framework and quantum operating system
CN115293254A (zh) Classification method based on a quantum multilayer perceptron and related equipment
WO2023143121A1 (fr) Data processing method and corresponding related device
WO2023179379A1 (fr) Simulation method and system for a nonlinear delay circuit system, and medium
CN116403019A (zh) Quantum recognition method and device for remote sensing images, storage medium and electronic device
CN114819163A (zh) Training method and device for a quantum generative adversarial network, medium and electronic device
CN117709415A (zh) Optimization method and device for a quantum neural network model
CN115544307A (zh) Feature extraction and representation method and system for directed graph data based on an incidence matrix
WO2021146977A1 (fr) Neural architecture search method and apparatus
CN115983392A (zh) Method and device for determining quantum program mapping relationships, medium and electronic device
CN116523059A (zh) Data processing method, machine learning framework and related device
CN116432764A (zh) Machine learning framework
CN114372584B (zh) Transfer learning method based on a machine learning framework and related device
WO2024066808A1 (fr) Quantum circuit generation method and apparatus, storage medium and electronic device
WO2022143789A1 (fr) Quantum preprocessing method and apparatus, storage medium and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22915107

Country of ref document: EP

Kind code of ref document: A1