CN117574285A - Index generating system and related device thereof - Google Patents
- Publication number
- CN117574285A (application number CN202311559013.8A)
- Authority
- CN
- China
- Prior art keywords
- quantum
- data
- vector machine
- gate
- index
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
Abstract
The embodiments of the present specification provide an index generation system and a related device thereof. The index generation system is configured to implement the following steps: generating, based on first data generated in a first time period, primary index values of second data for a target index in a second time period by using a plurality of primary index generation models respectively; wherein the data categories of the first data and the second data are the same; the plurality of primary index generation models include a plurality of vector machine models constructed by means of a support vector machine; at least some of the vector machine models differ in the kernel functions used in training; the output value of the kernel function of at least one vector machine model is obtained by operating a specified quantum circuit; and the end time of the second time period is later than the end time of the first time period. The system can be rapidly deployed online to generate the target index value for the user.
Description
Technical Field
The embodiments in the present specification relate to the field of machine learning, and more particularly, to an index generating system and a related apparatus thereof.
Background
With the advent of the cloud computing and big data era, users often need to use the historical data of a certain data category to simulate how data of that category changes over different time periods. In some cases, the user may simulate and calculate the data of that data category for a future time period, so as to obtain the target index value of the data category for a target index in that future time period and thereby guide social production and management. For example, the data of the data category may be meteorological data. By simulating how the meteorological data changes in a future time period, the target index value of the meteorological data for the target index in that time period can be obtained. The target index may be an index such as air quality.
In the prior art, a user can generate, through a machine learning model, the target index value of data of a specified data category for a target index in a specified time period. However, in order to improve the accuracy of the target index value generated by the machine learning model, the machine learning model needs to be trained using a large number of training samples.
Therefore, the prior art has the problem that a machine learning model for generating the target index value of a target index takes a long time from training to online deployment.
Disclosure of Invention
The embodiments in the present disclosure provide an index generating system and related devices, and provide a system capable of being deployed online quickly to generate a target index value for a user.
One embodiment of the present specification provides an index generation system including a memory storing a computer program and a processor configured to implement the following steps when executing the computer program: generating, based on first data generated in a first time period, primary index values of second data for a target index in a second time period by using a plurality of primary index generation models respectively; wherein the data categories of the first data and the second data are the same; the plurality of primary index generation models include a plurality of vector machine models constructed by means of a support vector machine; at least some of the vector machine models differ in the kernel functions used in training; the output value of the kernel function of at least one vector machine model is obtained by operating a specified quantum circuit; wherein the end time of the second time period is later than the end time of the first time period; and determining a target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models.
One embodiment of the present specification provides a training system of an index generation model, the index generation model including a plurality of primary index generation models. The index generation model generates, based on first data generated in a first time period, primary index values of second data for a target index in a second time period by using the plurality of primary index generation models respectively, so as to determine the target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models; wherein the data categories of the first data and the second data are the same; the plurality of primary index generation models include a plurality of vector machine models constructed by means of a support vector machine; at least some of the vector machine models differ in the kernel functions used in training; and the output value of the kernel function of at least one vector machine model is obtained by operating a specified quantum circuit. The training system of the index generation model includes a memory storing a computer program and a processor configured to implement the following steps when executing the computer program: respectively training the plurality of primary index generation models by using constructed training samples, and forming the index generation model from the plurality of trained primary index generation models.
One embodiment of the present specification provides a quantum computing system. The quantum computing system includes a basic computing unit and a quantum computing unit, and is used for training a quantum vector machine model; wherein the quantum vector machine model is at least one of a plurality of primary index generation models included in an index generation model; the plurality of primary index generation models include a plurality of vector machine models constructed by means of a support vector machine; at least some of the vector machine models differ in the kernel functions used in training; and the output value of the kernel function of the quantum vector machine model is obtained by operating a specified quantum circuit. The index generation model generates, based on first data generated in a first time period, primary index values of second data for a target index in a second time period by using the plurality of primary index generation models respectively, so as to determine the target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models; wherein the data categories of the first data and the second data are the same. When the quantum vector machine model is trained, the basic computing unit is used for receiving sample feature data provided by the training system of the index generation model and sending the sample feature data to the quantum computing unit. The quantum computing unit includes a plurality of qubits, and is used for applying quantum gates corresponding to the sample feature data to the plurality of qubits according to the specified quantum circuit, so as to obtain, based on the measurement results of the plurality of qubits, the similarity between the sample feature data as the output value of the kernel function of the quantum vector machine model.
In the embodiments provided herein, primary index values of second data for a target index in a second time period are generated, based on first data generated in a first time period, by using a plurality of primary index generation models, and the target index value of the target index is then determined according to the plurality of primary index values generated by the plurality of primary index generation models. The end time of the second time period is later than the end time of the first time period, and the data categories of the first data and the second data are the same. The plurality of primary index generation models include a plurality of vector machine models constructed by means of a support vector machine, and at least some of the vector machine models use different kernel functions in training. Moreover, since the output value of the kernel function of at least one vector machine model is obtained by operating a specified quantum circuit, the similarity of different training samples in a high-dimensional space can be calculated with higher efficiency during training, so that the parameters of the index generation model can be quickly adjusted and the index generation system can be quickly deployed online to generate the target index value for the user.
Drawings
Fig. 1 is a schematic diagram of a quantum computing system provided in one embodiment of the present description.
Fig. 2 is a schematic diagram of an index generating system according to an embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating steps implemented by the index generating system according to an embodiment of the present disclosure.
FIG. 4 is a flowchart illustrating steps implemented by a training system for an index generation model according to one embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a specified quantum circuit provided in one embodiment of the present description.
Fig. 6 is a schematic diagram of a specified quantum circuit provided in one embodiment of the present description.
Fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the embodiments of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or an implicit indication of the number of features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In order to enable quick deployment of a machine learning model for generating a target index value of a target index at the time of version iteration or initial online deployment, a lightweight machine learning model or a reduced number of training samples may be employed in the related art. However, the above approaches may reduce the performance of the machine learning model to some extent.
Accordingly, it is necessary to provide an index generation system. The system may include a memory storing a computer program and a processor configured to implement the following steps when executing the computer program: generating, based on first data generated in a first time period, primary index values of second data for a target index in a second time period by using a plurality of primary index generation models respectively, and then determining the target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models. The end time of the second time period is later than the end time of the first time period, and the data categories of the first data and the second data are the same. The plurality of primary index generation models include a plurality of vector machine models constructed by means of a support vector machine, and at least some of the vector machine models use different kernel functions in training. In addition, since the output value of the kernel function of at least one vector machine model is obtained by operating a specified quantum circuit, the similarity of different training samples in a high-dimensional space can be calculated with higher efficiency during training, so that the parameters of the index generation model can be quickly adjusted. By virtue of these characteristics of the quantum kernel function, the technical problem in the related art that a machine learning model for generating the target index value of a target index takes a long time from training to online deployment can be solved to a certain extent.
The embodiments of the present specification provide an index generation system and a training system of an index generation model. The index generation system and the training system of the index generation model may each include a client and a server. The client may be an electronic device with network access capabilities. Specifically, for example, the client may be a desktop computer, a tablet computer, a notebook computer, a smart phone, a digital assistant, a smart wearable device, a shopping guide terminal, a television, a smart speaker, a microphone, and the like. Smart wearable devices include, but are not limited to, smart bracelets, smart watches, smart glasses, smart helmets, smart necklaces, and the like. Alternatively, the client may be software capable of running in an electronic device. The server may be an electronic device with certain arithmetic processing capabilities. It may have a network communication module, a processor, a memory, and the like. Of course, the server may also refer to software running in an electronic device. The server may also be a distributed server, that is, a system having a plurality of processors, memories, network communication modules, etc. operating in concert. Alternatively, the server may be a server cluster formed by several servers. Or, with the development of science and technology, the server may also be a new technical means capable of realizing the corresponding functions of the embodiments of the specification, for example, a new form of "server" based on quantum computing.
Please refer to fig. 1. One embodiment of the present specification provides a quantum computing system. The quantum computing system may include a base computing unit and a quantum computing unit.
The base computing unit may be an electronic device having certain computing and display capabilities. Specifically, for example, the base computing unit may include a desktop computer, a tablet computer, a notebook computer, a smart phone, a smart television, a smart wearable device, and the like, each equipped with a display screen. The base computing unit may have a network communication module, a processor, a memory, and the like.
In some implementations, the base computing unit may act as a client. Alternatively, the base computing unit may be deployed with a client software program. The basic calculation unit can send the appointed data or the data operation task to the quantum calculation unit for operation, receive the quantum operation result fed back by the quantum calculation unit, and process and display the quantum operation result fed back by the quantum calculation unit.
The base computing unit may also include a server or workstation based on existing computer architecture. The server may be a distributed server, or may be a system having a plurality of processors, memories, network communication modules, etc. operating in concert. Alternatively, the server may be a server cluster formed by several servers. The basic computing unit can process the quantum operation result fed back by the quantum computing unit through the server or the workstation, and then the processing result is sent to the client for display.
The quantum computing unit may be a device that uses the characteristics of quantum mechanics to implement quantum computing. Specifically, the quantum computing unit can drive the quantum bit evolution based on the linear superposition principle of the quantum state by taking the quantum state of the quantum bit as a data carrier so as to realize a specified operation process. For example, the quantum computing unit may be a superconducting qubit control circuit implemented based on ultra-low temperature technology. Alternatively, the quantum computing unit may be a qubit control circuit built by quantum well technology. Of course, the quantum computing unit may be a photonic quantum integrated chip or the like.
The basic calculation unit and the quantum calculation unit can communicate. For example, communication between the base computing unit and the quantum computing unit may be achieved through a wired connection. Alternatively, the basic computing unit and the quantum computing unit may implement wireless communication through a network communication module.
Referring to fig. 2 and 3, an embodiment of the present disclosure provides an index generating system. The index generation system may include a memory and a processor. The memory may store a computer program. The processor may be configured to implement the following steps when executing the computer program.
Step S110: generating, based on first data generated in a first time period, primary index values of second data for a target index in a second time period by using a plurality of primary index generation models respectively; wherein the data categories of the first data and the second data are the same; the plurality of primary index generation models include a plurality of vector machine models constructed by means of a support vector machine; at least some of the vector machine models differ in the kernel functions used in training; the output value of the kernel function of at least one vector machine model is obtained by operating a specified quantum circuit; and the end time of the second time period is later than the end time of the first time period.
In the related art, data of some data categories may have partially missing values or abnormal values. For example, meteorological data may be affected by sensor failures or detection anomalies. For this reason, the primary index generation model in the embodiments of the present specification may be built by using a support vector machine (Support Vector Machine, SVM). Because the support vector machine has good generalization capability and maximum-margin classification characteristics, it is relatively robust to outliers, can reduce the problem of overfitting to a certain extent, and can reduce the influence of missing values or outliers in data of a specified data category on model performance.
Further, support vector machines, while they may use kernel functions to process nonlinear data, make the data linearly separable in high-dimensional space by mapping the data to that space. However, the performance of the support vector machine is highly dependent on the kernel function selected, and in some cases, it may be difficult to select an appropriate kernel function. Improper kernel function selection may result in degraded performance of the support vector machine. For this reason, in the embodiment of the present specification, a plurality of vector machine models built based on support vector machines are adopted. The kernel functions used by at least part of the vector machine model in training are different, so that the characteristics of the vector machine model trained by different kernel functions can be considered when the target index value aiming at the target index is generated by the index generation system, and the index generation system has stronger fault tolerance capability and robustness.
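As an illustrative sketch only (the library choice, the synthetic data, and the majority-vote aggregation rule are assumptions, not specified by this disclosure), an ensemble of vector machine models that differ in the kernel used during training can be combined as follows:

```python
# Hypothetical sketch: several SVM models, each trained with a different
# kernel, emit primary index values that are aggregated by majority vote
# into a single target index value.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))               # stand-in for first-period feature data
y_train = (X_train.sum(axis=1) > 0).astype(int)   # stand-in labels: 1 / 0

# Vector machine models that differ in the kernel function used in training.
models = [
    SVC(kernel="linear"),
    SVC(kernel="poly", degree=3),
    SVC(kernel="rbf", gamma="scale"),
]
for m in models:
    m.fit(X_train, y_train)

def target_index_value(x):
    """Aggregate the primary index values of all models by majority vote."""
    primary = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    return int(round(np.mean(primary)))

print(target_index_value(np.ones(4)))
```

Because the models disagree only where their kernels induce different decision surfaces, the vote tends to tolerate a single poorly chosen kernel, which is the fault-tolerance property described above.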
Different kernel functions of a support vector machine map data to different high-dimensional spaces so that the data become linearly separable in the corresponding space. However, the kernel function of a classical support vector machine is static with respect to the high-dimensional space, which imposes certain limitations on handling high-dimensional data and nonlinearity. Specifically, in a classical support vector machine, the high-dimensional space is typically defined by selecting a kernel function (e.g., a linear kernel, a polynomial kernel, or a Gaussian kernel), the kernel computation is executed on a classical computer, and the dimension of the high-dimensional mapping is predetermined. This approach limits the modeling of complex data structures, because it is not easy to select an appropriate kernel function.
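To make the fixed, predetermined nature of these mappings concrete, the commonly selected classical kernels can be written out directly (a generic illustration, not code from this disclosure; parameter defaults are arbitrary):

```python
# The classical kernels mentioned above, computed explicitly.
import numpy as np

def linear_kernel(x, z):
    # <x, z> — the identity feature map
    return float(np.dot(x, z))

def poly_kernel(x, z, degree=2, c=1.0):
    # (<x, z> + c)^d — a polynomial feature map of fixed degree
    return float((np.dot(x, z) + c) ** degree)

def rbf_kernel(x, z, gamma=0.5):
    # exp(-gamma * ||x - z||^2) — the Gaussian kernel
    return float(np.exp(-gamma * np.sum((x - z) ** 2)))

x = np.array([1.0, 2.0])
z = np.array([2.0, 1.0])
print(linear_kernel(x, z))   # 4.0
print(poly_kernel(x, z))     # (4 + 1)^2 = 25.0
print(rbf_kernel(x, z))      # exp(-0.5 * 2) ≈ 0.3679
```

In each case the feature map is fixed once the kernel and its parameters are chosen, which is exactly the static behavior the preceding paragraph contrasts with the quantum approach.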
For this reason, the embodiments of the present specification include, among the plurality of vector machine models, at least one quantum vector machine model constructed by means of a quantum support vector machine. The high-dimensional mapping can be performed using a quantum transformation, taking advantage of the properties of quantum computing. The dimension of such a mapping may be generated dynamically based on the nature of the input data and the requirements of the problem. This means that, when needed, the quantum support vector machine can flexibly increase or decrease the dimension of the high-dimensional space to better accommodate the structure of the data.
Moreover, quantum computing systems can offer exponentially accelerated computing capabilities compared to classical computers, and quantum computation enables the quantum support vector machine to execute multiple computations simultaneously. This improves computation efficiency to a certain extent, and thus can, to a certain extent, solve the technical problem in the related art that a machine learning model for generating the target index value of a target index takes a long time from training to online deployment.
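As a hedged, purely classical simulation of the quantum-kernel idea (the feature map below — one RY rotation per qubit per feature — is an assumed encoding for illustration, not the specified quantum circuit of this disclosure), the kernel output can be viewed as the overlap of two encoded quantum states, which a real quantum computing unit would estimate from qubit measurement statistics:

```python
# State-vector simulation of a fidelity-style quantum kernel.
import numpy as np

def feature_state(x):
    """Tensor product of single-qubit states RY(x_i)|0> = [cos(x_i/2), sin(x_i/2)]."""
    state = np.array([1.0])
    for angle in x:
        qubit = np.array([np.cos(angle / 2), np.sin(angle / 2)])
        state = np.kron(state, qubit)  # one qubit per feature
    return state

def quantum_kernel(x, z):
    """Fidelity |<phi(x)|phi(z)>|^2 — the kernel output value."""
    return float(np.abs(feature_state(x) @ feature_state(z)) ** 2)

x = np.array([0.3, 1.2])
print(quantum_kernel(x, x))                      # identical inputs -> 1.0
print(quantum_kernel(x, np.array([0.5, 0.9])))   # similar inputs -> close to 1
```

On quantum hardware the 2^n-dimensional state vector is never written out; the overlap is obtained from the measurement results of the n qubits, which is the source of the efficiency advantage described above.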
In this embodiment, a data category may refer to a different kind of data. Specifically, for example, the data category of the first data and the second data may be meteorological data. Alternatively, the data category of the first data and the second data may be vehicle traffic data, network traffic data, or the like. Of course, the data category of the first data and the second data may also be economic data. The term "specified category" may be used to refer to any data category.
In this embodiment, the first data and the second data have the same data category, which may be any specified category.
In some implementations, the data of the specified category may also include sub-data of a plurality of sub-categories. For example, the weather data may include air temperature data, air humidity data, PM2.5 data, and the like. The economic data may include regional production data, regional transaction data, and the like.
In some implementations, the specified category of data may be used to describe the state of a specified object. Specifically, for example, meteorological data may be used to describe the condition of the air, and vehicle flow data may be used to describe traffic conditions. Alternatively, economic data may be used to describe the business status of a specified area. Accordingly, the first data and the second data may be used to describe the state of the specified object in the first time period and in the second time period, respectively.
In some embodiments, where the specified category of data is economic data, the economic data may be used to describe the production business status of the physical world. In particular, for example, economic data may be used to describe the allocation of specified resources, which may be, among other things, labor, land, raw materials, and energy. Of course, economic data may also be used to describe production and manufacturing, traffic and logistics, climate and natural disasters, environmental impacts, infrastructure and construction, population and society, and the like.
In this embodiment, the first time period may be a specified time period. For example, the first time period may be January 1, 2023 to October 30, 2023. Of course, the first time period may also be a dynamic time period. Specifically, for example, the start time and the end time of the first time period may be determined based on the current time; for instance, the first time period may be from 100 days ago to the current day.
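The dynamic time period just described ("from 100 days ago to the current day") can be sketched as follows; the function name and defaults are illustrative assumptions, not part of this disclosure:

```python
# Determine the start and end of a dynamic first time period from the current time.
from datetime import date, timedelta

def first_time_period(today=None, days=100):
    """Return (start, end) of a window ending at `today` and spanning `days` days."""
    today = today or date.today()
    return today - timedelta(days=days), today

start, end = first_time_period(date(2023, 10, 30))
print(start.isoformat(), end.isoformat())  # 2023-07-22 2023-10-30
```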
In this embodiment, the first data may be data of the specified category generated during the first time period. Alternatively, the first data may be used to describe the state of the specified object in the first time period. Specifically, for example, the first time period may be January 1, 2023 to October 30, 2023, and the specified category may be meteorological data. Accordingly, the specified object may refer to the atmosphere, and the first data may be meteorological data describing atmospheric changes between January 1, 2023 and October 30, 2023. Of course, the specified category may also be economic data. Accordingly, the specified object may refer to a specified economy or a specified region, and the first data may be economic data produced by the specified region or economy between January 1, 2023 and October 30, 2023.
In this embodiment, the second time period may be a time period different from the first time period, and the end time of the second time period may be later than the end time of the first time period. In particular, the second time period may be a time period after the first time period. For example, the first time period may be January 1, 2023 to October 30, 2023, and the second time period may be October 31, 2023. Of course, the second time period and the first time period may also partially overlap. This embodiment is not particularly limited herein.
In some embodiments, the duration of the second time period may be shorter than the duration of the first time period.
In this embodiment, the second data may be the specified category of data generated, or expected to be generated, during the second time period. The data category of the second data and the data category of the first data may be the same. For example, the category of the second data and the specified category of the first data may both be economic data. The second data differs from the first data in that the economic data corresponding to the second data describes the production and business conditions of the economic society for a different time period.
In the present embodiment, the target index may be an index of the specified category of data. The target index may be used to describe information of the specified category of data in a specified dimension. For example, in the case where the specified category of data is weather data, the target index may include air quality, sunshine hours, and the like. Alternatively, in the case where the specified category of data is economic data, the target index may be the national joint-stock bank bill rediscount rate, a monetary interest rate, or the like. Of course, in the case where the specified category of data is traffic data, the target index may be a public transport utilization rate, a traffic congestion index, or the like.
In this embodiment, the primary index value may be used to describe the value of the target index expected for the second data generated in the second time period. That is, the primary index value may describe a possible state of the target object, in the dimension represented by the target index, during the second time period. Specifically, for example, for weather data, the target index may be air quality, and the primary index value may describe the air quality over the second time period. For economic data, the target index may be the change in the national joint-stock bank bill rediscount rate, and the primary index value may represent whether that rate rises or falls in the second time period. For example, a primary index value of 0 may indicate that the rate rises, and a primary index value of 1 may indicate that the rate falls.
In this embodiment, the index generation system may include an index generation model. The index generation model may include a plurality of primary index generation models, wherein each primary index generation model can generate a corresponding primary index value based on the first data. At least some of the plurality of primary index generation models have different structures. Specifically, for example, the plurality of primary index generation models may include a model constructed by XGBoost, a model constructed by LightGBM, a model constructed by a classical support vector machine, and a model constructed by a quantum support vector machine. The number of primary index generation models with each distinct structure may be one or more.
In this embodiment, a model constructed by a support vector machine may be referred to as a vector machine model, and a model constructed by a quantum support vector machine may be referred to as a quantum vector machine model. The quantum vector machine model needs to be trained using a kernel function formed based on a specified quantum circuit. For sample feature data input into the quantum vector machine model, the kernel function can map the sample feature data into a Hilbert space, so that sample feature data of different categories can be separated more easily in the high-dimensional Hilbert space. In this way, the quantum vector machine model learns a strong classification capability. The kernel function of the quantum vector machine model can be calculated based on the specified quantum circuit, which can be used to calculate the similarity between different sample feature data.
Step S120: determining a target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models.
In some cases, the plurality of primary index values respectively generated by the plurality of primary index generation models may be different. By integrating the primary index values respectively generated by these models, a more accurate and robust target index value can be obtained.
In some embodiments, the value range of the target index may consist of a limited number of candidate index values; that is, the index generation model may be regarded as a classification model. In this case, the target index value may be determined from the plurality of primary index values through a voting mechanism over the models. Of course, the target index may also take any value in a numerical range; that is, the index generation model may be regarded as a regression model. Correspondingly, the target index value may be obtained by performing a weighted summation of the plurality of primary index values.
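The regression-style aggregation described above can be sketched as follows. This is a minimal illustration, not part of the disclosure; the function name and the example weights are assumptions (in practice the weights might reflect each primary model's validation performance).

```python
# Hypothetical sketch: combine primary index values from several primary index
# generation models into one target index value by weighted summation.

def weighted_target_index(primary_values, weights):
    """Weighted combination of primary index values; weights are normalized."""
    if len(primary_values) != len(weights):
        raise ValueError("one weight per primary index value is required")
    total = sum(weights)
    return sum(v * w for v, w in zip(primary_values, weights)) / total

# Three primary models emit slightly different values for the target index.
print(weighted_target_index([0.8, 1.0, 1.2], [0.5, 0.3, 0.2]))
```

With the illustrative weights above, models that are trusted more contribute proportionally more to the final target index value.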
In some embodiments, the step of using the plurality of primary index generation models to respectively generate the primary index values of the second data for the target index in the second time period, based on the first data generated in the first time period, may include: acquiring a plurality of specified index values of the first data for a plurality of specified indexes in the first time period; and inputting the plurality of specified index values into the plurality of primary index generation models respectively, to obtain the primary index values of the second data for the target index in the second time period respectively fed back by the plurality of primary index generation models; wherein the end time of the second time period is later than the end time of the first time period.
In some cases, the specified category of data may include information in a large number of different dimensions, and some of this information may contribute little to the generation of the target index value. For example, traffic data may include the brands of the different vehicles in a traffic stream, while the target index may be a traffic congestion index; the vehicle brands contribute little to the traffic congestion index. Therefore, by generating the target index value of the second data from the plurality of specified index values of the first data for the plurality of specified indexes, calculation efficiency can be improved while largely preserving accuracy.
In this embodiment, the specified indexes are associated with the target index. In some embodiments, the specified indexes and the target index may all be indexes of the specified category of data, and the target index value of the target index is obtained indirectly through the specified index values of the specified indexes. In particular, for weather data, the target index may be air quality; the plurality of specified indexes may be air humidity, PM2.5 concentration, and the like; and the plurality of specified index values may be the value of the air humidity and the value of the PM2.5 concentration, respectively. Alternatively, for economic data, the target index may be the national joint-stock bank bill rediscount rate, and the plurality of specified indexes may be Cox–Ingersoll–Ross (CIR) model prediction features, technical indicator features, and macroeconomic indicators. The national joint-stock bank bill rediscount rate refers to the discount rate used when bank drafts issued by national joint-stock commercial banks are transferred and traded at a discount in the market. Setting this rate requires comprehensive consideration of factors such as the market supply-and-demand relationship, monetary policy, and the economic situation, so as to keep the market developing stably and healthily.
In some embodiments, the plurality of specified indicators and the target indicator are different. The plurality of specified indexes can be preset or obtained by performing feature screening on the first data.
In this embodiment, the plurality of specified index values of the first data for the plurality of specified indexes in the first time period may be obtained directly from a third party. Of course, they may also be obtained by acquiring the first data and calculating from it the plurality of specified index values for the plurality of specified indexes.
In the present embodiment, after the plurality of specified index values of the first data for the plurality of specified indexes are obtained, input feature data for the plurality of primary index generation models may be generated from the plurality of specified index values. The input feature data may be a vector, and the value at each position of the vector may be a normalized specified index value. The input feature data is then input into the plurality of primary index generation models respectively, to obtain the primary index values respectively fed back by the primary index generation models.
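The normalization step above can be sketched as follows. The min-max scheme and the example index ranges (air humidity in [0, 100], PM2.5 concentration in [0, 300]) are illustrative assumptions; the embodiment does not fix a particular normalization method.

```python
# Hypothetical sketch: build the input feature vector from specified index
# values, min-max normalizing each value to [0, 1] using an assumed range.

def build_feature_vector(index_values, ranges):
    """index_values: raw specified index values; ranges: (min, max) per index."""
    vector = []
    for value, (lo, hi) in zip(index_values, ranges):
        if hi == lo:
            vector.append(0.0)  # degenerate range: the index is constant
        else:
            vector.append((value - lo) / (hi - lo))
    return vector

# e.g. air humidity 55 in [0, 100], PM2.5 concentration 75 in [0, 300]
print(build_feature_vector([55.0, 75.0], [(0.0, 100.0), (0.0, 300.0)]))
```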
In this embodiment, inputting the plurality of specified index values into the plurality of primary index generation models respectively, and obtaining the primary index values of the second data for the target index in the second time period respectively fed back by the models, makes it possible to generate the target index value for the second time period from the specified index values of the first time period. For example, for weather data, taking the current time as a reference, the PM2.5 concentration, humidity, and temperature of the weather data for the previous 60 days may be input into the plurality of primary index generation models to obtain the air quality for the next day fed back by each primary index generation model. Alternatively, for economic data, taking the current time as a reference, the rise or fall of the national joint-stock bank bill rediscount rate on the next day may be generated from the time-series decomposition features, CIR model prediction features, technical indicator features, and macroeconomic indicators of the previous 60 days.
In some embodiments, the value range of the target index is formed from a plurality of different candidate index values, and the primary index value is one of the plurality of candidate index values. The step of determining the target index value of the target index according to the plurality of primary index values respectively generated by the plurality of primary index generation models may include: according to the number of times each of the plurality of candidate index values is generated by the plurality of primary index generation models, determining the candidate index value generated the most times as the target index value of the target index.
In some cases, the range of values of the target indicator may be formed from a plurality of different candidate indicator values. That is, the primary index value generated by the primary index generation model is one of the candidate index values. Accordingly, the target index value may be determined from the plurality of primary index values by a voting mechanism of the model.
In this embodiment, a candidate index value may represent a possible value of the target index, and the plurality of candidate index values may form the value set of the target index. Different candidate index values can represent different possible states of the target object in the dimension corresponding to the target index. For example, for weather data, the target index may be air quality, and the corresponding candidate index values may represent excellent, good, poor, and so on. For vehicle traffic data, the target index may be a traffic congestion index, and the corresponding candidate index values may be congested, normal, clear, and so on. For economic data, the target index may be the change in the national joint-stock bank bill rediscount rate, and the corresponding candidate index values may represent a rise or a fall in that rate.
In some embodiments, the candidate index value determined by the primary index generation model may represent a state of the target object over a second period of time. In some embodiments, the candidate index values may represent different states of the target object in the dimension corresponding to the target index by numerical characterization.
In the present embodiment, determining the candidate index value generated the most times as the target index value, based on the number of times the candidate index values are generated by the plurality of primary index generation models, amounts to a voting mechanism. For example, suppose the index generation model includes 3 primary index generation models: primary index generation model A generates candidate index value 1, primary index generation model B generates candidate index value 1, and primary index generation model C generates candidate index value 2. Accordingly, candidate index value 1 is taken as the target index value.
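The voting mechanism in the example above can be sketched in a few lines; the function name is an illustrative assumption.

```python
# Hypothetical sketch: the candidate index value generated the most times by
# the primary index generation models becomes the target index value.

from collections import Counter

def vote_target_index(primary_values):
    """Return the candidate index value generated the most times."""
    counts = Counter(primary_values)
    return counts.most_common(1)[0][0]

# Models A and B generate candidate value 1; model C generates candidate value 2.
print(vote_target_index([1, 1, 2]))
```

Note that `Counter.most_common` breaks exact ties by insertion order; a production system would need an explicit tie-breaking rule, which this embodiment does not specify.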
Referring to fig. 4, the embodiment of the present disclosure further provides a training system for the index generation model. The index generation model comprises a plurality of primary index generation models; the index generation model generates primary index values of second data for a target index in a second time period based on first data generated in a first time period by using the plurality of primary index generation models respectively, so as to determine the target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models; wherein the data categories of the first data and the second data are the same; the plurality of primary index generation models comprise a plurality of vector machine models constructed by support vector machines; at least some of the vector machine models differ in the kernel functions used in training; and the output value of the kernel function of at least one vector machine model is obtained through the operation of a specified quantum circuit. The training system of the index generation model comprises a memory storing a computer program and a processor configured to implement the following steps when executing the computer program.
Step S210: training the plurality of primary index generation models by using training samples, wherein the plurality of trained primary index generation models form the index generation model.
In some cases, training samples may be constructed from historical specified categories of data to train to obtain an index generation model.
In this embodiment, the training sample includes sample feature data formed by sample data of a specified category generated in a sample period, and a target index value of target data of the specified category generated in a target period for a target index; wherein the end time of the target time period is later than the end time of the sample time period.
Specifically, the training samples may be data for training the index generation model, and may be constructed from historical data of the specified category. Each training sample may include sample feature data serving as the input of a primary index generation model, and a target index value serving as the target output of that model. For each training sample, the sample feature data may be generated from the specified category of sample data generated during a sample time period, and the target index value may be generated from the specified category of target data generated during a target time period. For the same training sample, the end time of the target time period may be later than the end time of the sample time period. The target data and the sample data both belong to the specified category of data; they differ in the time periods in which they were generated.
In this embodiment, when constructing the training samples of the index generation model, the data corresponding to each training sample may be divided according to the time at which the historical data of the specified category was generated. Among the data corresponding to each training sample, the specified category of data generated during the sample time period may be used as the sample data, and the specified category of data generated during the target time period may be used as the target data, where the end time of the target time period is later than the end time of the sample time period. The sample time period corresponds to the first time period when the index generation model is used, and the target time period corresponds to the second time period when the index generation model is used.
Specifically, for example, the data of the specified category may include data generated within 300 days. For each of days 61 through 300, 240 training samples may be constructed on a per day basis. Wherein the sample period of each training sample may be the first 60 days of each day. The target time period may be the corresponding current day. Specifically, sample characteristic data may be constructed from sample data within a sample period. From the target data within the target period, a target index value of the target data for the target index may be calculated. The sample characteristic data may be used as an input of the primary index generation model, and the target index value may be used as a target output of the primary index generation model.
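The rolling construction above (300 days of data, a 60-day sample period per sample, 240 samples in total) can be illustrated with a short sketch. `daily_records` and the tuple layout are assumptions for illustration, not the patent's actual data format.

```python
# Hypothetical sketch of rolling training-sample construction: for each of
# days 61..300, the previous 60 days form the sample period and the current
# day forms the target period, yielding 240 (sample_period, target_period)
# pairs from 300 days of data.

def build_training_samples(daily_records, window=60):
    samples = []
    for day in range(window, len(daily_records)):
        sample_period = daily_records[day - window:day]  # previous 60 days
        target_period = daily_records[day]               # the current day
        samples.append((sample_period, target_period))
    return samples

daily_records = list(range(300))   # placeholder for 300 days of data
samples = build_training_samples(daily_records)
print(len(samples))                # 240 training samples
print(len(samples[0][0]))          # each sample period spans 60 days
```

In a real pipeline the sample period would be reduced to sample feature data (e.g. the specified index values) and the target period to a target index value before training.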
In some embodiments, the sample characteristic data may be a vector, and the values of the plurality of positions of the vector may be specified index values of the sample data for a plurality of specified indexes.
In some cases, each primary index generation model may be trained using the training samples and the loss function corresponding to that model. The trained primary index generation models form the index generation model. Specifically, for example, the index generation model may be an ensemble model, and the plurality of primary index generation models may be the base models of the ensemble. The primary index values generated by the plurality of primary index generation models can be combined through a voting mechanism to produce the output of the index generation model. Of course, the primary index values generated by the plurality of primary index generation models may also be combined by weighted summation.
In some embodiments, the training samples input into different primary index generation models may also be adapted to the characteristics of each model. For example, for some primary index generation models, the features of the sample feature data in the training samples may be filtered before being input into the corresponding models.
In some implementations, the specified category of data can be economic data, namely raw data of the national joint-stock bank bill rediscount rate. The plurality of specified indexes can be the time-series decomposition features, CIR model prediction features, technical indicator features, macroeconomic indicators, and the like of the raw data. The target index can be the rise or fall of the national joint-stock bank bill rediscount rate. Correspondingly, when training the index generation model, a rolling time-window method can be used for model training on a 300-day training set, with a test set covering 60 days of data. Specifically, rolling training can be performed with a time window of 1 day, and after each fit the rise or fall of the national joint-stock bank bill rediscount rate one day in the future can be predicted.
In this embodiment, the primary index generation models included in the index generation model may be built using the XGBoost model, the LightGBM model, the support vector machine model, and two quantum support vector machines, respectively, to fit the training set and make predictions on the test set.
In some embodiments, the vector machine model whose kernel function is implemented by the specified quantum circuit serves as the quantum vector machine model. The step of training the plurality of primary index generation models using the constructed training samples may include: inputting the sample feature data into the quantum vector machine model, so that a quantum computing system is invoked when the quantum vector machine model runs to calculate the output value of the kernel function for the sample feature data according to the specified quantum circuit, thereby obtaining an initial index value corresponding to the quantum vector machine model; and adjusting parameters of the quantum vector machine model based on the difference between the initial index value fed back by the quantum vector machine model and the target index value, until the difference satisfies a specified condition, to obtain the trained quantum vector machine model.
In some cases, quantum vector machine models require the use of the quantum kernel method in training. In particular, the quantum kernel method requires the use of a quantum kernel function, whose output value is obtained after the quantum computing system operates according to the specified quantum circuit. The quantum kernel function may make quantum vector machine models potentially more efficient at processing high-dimensional data. Moreover, quantum computing may in some cases exploit quantum parallelism to process multiple data at once, which may speed up the computation of kernel functions and the training of quantum vector machine models.
The kernel method can be used to deal with nonlinearity and high-dimensional data. In the kernel method, the raw data is mapped to a high-dimensional feature space in which the data is linearly separable or approximately linearly separable, so that the nonlinearity in the raw data can be handled with a linear classifier or regressor. The core of the kernel method is the kernel function, which measures the similarity of two samples: by calculating the inner product of the samples in the feature space, their similarity in the original space can be obtained, thereby achieving a linearized treatment of nonlinear data. The kernel function is able to map low-dimensional data into the high-dimensional space without explicitly computing high-dimensional feature vectors, thereby avoiding the complexity of high-dimensional computation.
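As a concrete classical example of such a kernel function, the RBF (Gaussian) kernel is sketched below. It is an illustrative choice only; this embodiment does not fix a particular classical kernel, and the gamma value is assumed.

```python
# Illustrative sketch of the kernel trick: the RBF kernel computes the
# similarity of two samples as an inner product in an implicit, infinite-
# dimensional feature space, without ever constructing that space explicitly.

import math

def rbf_kernel(x1, x2, gamma=0.5):
    """Similarity of two feature vectors; equals 1.0 for identical inputs."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # identical samples -> 1.0
print(rbf_kernel([1.0, 2.0], [3.0, 0.0]))  # distant samples -> small value
```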
Compared with classical kernel methods, quantum kernel methods can exploit the properties of quantum computation to map input data into a high-dimensional Hilbert space, in order to better separate data points of different classes.
In this embodiment, the quantum vector machine model may be a model built by a support vector machine based on a kernel method. The quantum vector machine model can be optimized through a constructed objective function during training. The objective function may include a kernel function. According to the output value of the kernel function, the output value of the objective function can be calculated to adjust the parameters of the quantum vector machine model.
The output value of the kernel function of the quantum vector machine model during training can be obtained through the operation of a specified quantum circuit. The output value of the kernel function may represent the similarity between different sample feature data. Specifically, the similarity between the different sample feature data can be obtained by a specified quantum circuit operation.
In the process of training the quantum vector machine model, the training system of the index generation model can input the sample feature data into the quantum vector machine model, and the quantum vector machine model can send the sample feature data of the training samples to a quantum computing system, so that the quantum computing system can calculate the output value of the kernel function for the sample feature data according to the specified quantum circuit. Of course, the training system of the index generation model can also determine the parameters of the quantum gates in the specified quantum circuit from the sample feature data, then send the quantum gate parameters to the quantum computing system, and the quantum computing system can calculate the output value of the kernel function according to the specified quantum circuit and the quantum gate parameters and feed it back.
In this embodiment, the initial index value may represent the value for the target index generated by the quantum vector machine model during training. Based on an optimization method, the parameters of the quantum vector machine model can be adjusted according to the difference between the initial index value and the target index value. When the difference between the initial index value output by the parameter-adjusted quantum vector machine model and the target index value satisfies the specified condition, the quantum vector machine model can be considered trained. Methods for adjusting the parameters of the quantum vector machine model include, but are not limited to, gradient descent, stochastic gradient descent, the Adam optimizer, and the like.
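The iterate-until-the-difference-meets-a-condition loop described above can be sketched abstractly as follows. The toy "model" (whose output equals its single parameter), the learning rate, and the tolerance are all illustrative assumptions; the actual embodiment would adjust the quantum vector machine's parameters against the quantum-kernel objective.

```python
# Hypothetical sketch of the stopping rule: adjust a parameter with a
# gradient-descent-style update until the difference between the model's
# output and the target value satisfies a specified condition (|diff| < tol).

def train_until_converged(target, lr=0.1, tol=1e-3, max_steps=1000):
    param = 0.0                      # initial model parameter
    for step in range(max_steps):
        output = param               # toy model: output equals the parameter
        diff = output - target       # difference from the target index value
        if abs(diff) < tol:          # specified condition met -> trained
            return param, step
        param -= lr * diff           # parameter adjustment step
    return param, max_steps

param, steps = train_until_converged(1.0)
print(steps < 1000)                  # converged before the step limit
```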
Referring to fig. 1, the present embodiment further provides a quantum computing system, where the quantum computing system includes a base computing unit and a quantum computing unit; the quantum computing system is used for training a quantum vector machine model; wherein the quantum vector machine model is at least one of a plurality of primary index generation models included in an index generation model; the plurality of primary index generation models comprise a plurality of vector machine models constructed by support vector machines; at least some of the vector machine models differ in the kernel functions used in training; the output value of the kernel function of the quantum vector machine model is obtained through the operation of a specified quantum circuit; the index generation model generates primary index values of second data for a target index in a second time period based on first data generated in a first time period by using the plurality of primary index generation models respectively, so as to determine the target index value of the target index based on the plurality of primary index values respectively generated; wherein the data categories of the first data and the second data are the same. The base computing unit is used for receiving sample feature data provided by the training system of the index generation model when the quantum vector machine model is trained, and sending the sample feature data to the quantum computing unit. The quantum computing unit includes a plurality of qubits, and is used for applying quantum gates corresponding to the sample feature data to the plurality of qubits according to the specified quantum circuit, so as to obtain the similarity between the sample feature data based on the measurement results of the plurality of qubits, as the output value of the kernel function of the quantum vector machine model.
In some embodiments, the sample feature data comprises first sample feature data and second sample feature data, and the step of applying quantum gates corresponding to the sample feature data to the plurality of qubits according to the specified quantum circuit, so as to obtain the similarity between the sample feature data based on the measurement results of the plurality of qubits, may include: operating on the plurality of qubits based on a first quantum unit, wherein the first quantum unit comprises a plurality of first quantum gates whose parameters are determined by the feature values of the first sample feature data; operating on the plurality of qubits based on a second quantum unit, wherein the second quantum unit comprises a plurality of second quantum gates whose parameters are determined by the feature values of the second sample feature data; and measuring the probability that the qubits are in a specified quantum state, as the similarity between the first sample feature data and the second sample feature data.
In some cases, the specified quantum circuit may be used to calculate the similarity between the sample feature data of two training samples.
In this embodiment, the first sample feature data and the second sample feature data may be sample feature data included in different training samples. The quantum kernel function can calculate the similarity between any two sample feature data.
In this embodiment, the specified quantum circuit may include a first quantum unit and a second quantum unit, wherein the first quantum unit may encode the information of the first sample feature data into the quantum states of the qubits, and the second quantum unit may encode the information of the second sample feature data into the quantum states of the qubits.
In this embodiment, the first quantum unit may include a plurality of first quantum gates whose parameters are determined by the feature values of the first sample feature data. In particular, the plurality of first quantum gates may act on different qubits, where each qubit may correspond to one feature of the sample feature data, and different qubits correspond to different features. Accordingly, the parameter of a first quantum gate may be the feature value, in the first sample feature data, of the feature corresponding to the qubit on which that gate acts. Alternatively, the parameter of the first quantum gate may have a specified mapping relationship with the feature value in the first sample feature data.
In the present embodiment, the second quantum unit may include a plurality of second quantum gates whose parameters are determined by the feature values of the second sample feature data. For example, the plurality of second quantum gates may also act on different qubits, and the parameter of a second quantum gate may be the feature value, in the second sample feature data, of the feature corresponding to the qubit on which that gate acts. Alternatively, the parameter of the second quantum gate may have a specified mapping relationship with the feature value in the second sample feature data.
In some embodiments, the number of features in the sample feature data may equal the number of qubits. For example, if the sample feature data has 10 features, then 10 qubits may be used to calculate the similarity between the first sample feature data and the second sample feature data. In some embodiments, in order to reduce the resource usage of the quantum computing system, the dimensionality of the sample feature data may first be reduced, and the similarity between the dimensionality-reduced sample feature data may then be calculated using the specified quantum circuit.
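As an illustration of the dimensionality-reduction option, a PCA-style projection is one hypothetical choice (the paragraph above does not fix a particular method): 10-dimensional samples could be projected onto their top 3 principal directions so that 3 qubits suffice instead of 10.

```python
import numpy as np

# Hypothetical PCA-style reduction via SVD: project 10-dimensional samples
# onto their top-3 principal directions, so the kernel circuit needs only
# 3 qubits instead of 10.
rng = np.random.default_rng(0)
samples = rng.normal(size=(100, 10))       # 100 samples, 10 features each
centered = samples - samples.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:3].T              # shape (100, 3): one angle per qubit

print(reduced.shape)  # -> (100, 3)
```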
To calculate the similarity between the first sample feature data and the second sample feature data, the plurality of qubits may first be operated on by the first quantum unit, formed of the first quantum gates corresponding to the first sample feature data. The plurality of qubits may then be further operated on by the second quantum unit, formed of the second quantum gates corresponding to the second sample feature data. Thus, by measuring the final state of the specified quantum circuit, the similarity between the first sample feature data and the second sample feature data can be obtained.
By measuring the probability that the qubits are in a specified quantum state, the similarity between the first sample feature data and the second sample feature data can be obtained. The specified quantum state may be determined according to the structure of the quantum circuit. For example, the specified quantum state may be the state in which all of the plurality of qubits are in the |0> state; the probability that the qubits are all in the |0> state may then be used as the similarity between the first sample feature data and the second sample feature data. In some embodiments, the specified quantum state may instead be the state in which all of the qubits are in the |1> state.
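The encode-then-measure procedure of the two preceding paragraphs can be sketched with a small state-vector simulation. For brevity the entangling gates are omitted and a per-qubit RX encoding is assumed, so this is a simplified stand-in for the specified quantum circuit, not the patent's exact construction:

```python
import numpy as np

def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def encode(x):
    """U(x): tensor product of per-qubit RX rotations (simplified
    stand-in for the first/second quantum unit, no entanglement)."""
    u = np.array([[1.0 + 0j]])
    for f in x:
        u = np.kron(u, rx(f))
    return u

def quantum_kernel(x1, x2):
    """Similarity = probability of measuring |0...0> after applying
    U(x2)^dagger U(x1) to the all-zero state."""
    n = len(x1)
    zero = np.zeros(2 ** n)
    zero[0] = 1.0
    amp = zero @ encode(x2).conj().T @ encode(x1) @ zero
    return abs(amp) ** 2

# Identical inputs undo each other exactly, so the circuit returns to
# |0...0> and the similarity is 1.
print(round(quantum_kernel([0.3, 1.2], [0.3, 1.2]), 6))  # -> 1.0
```

For differing inputs the returned value lies between 0 and 1, which is what makes it usable as a kernel-function output for the support vector machine.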
In some embodiments, the second quantum gate is the conjugate transposed quantum gate of the first quantum gate; the parameters of the plurality of first quantum gates respectively correspond to one feature value of the first sample feature data, and the feature values corresponding to different first quantum gates are different; the parameters of the plurality of second quantum gates respectively correspond to one feature value of the second sample feature data, and the feature values corresponding to different second quantum gates are different. The step of operating on the qubits based on the first quantum unit includes: encoding the feature values of the first sample feature data onto the plurality of qubits by an angle encoding method based on the first quantum gates; applying a specified quantum gate to the plurality of qubits so that quantum entanglement is achieved between the plurality of qubits; and applying the first quantum gates to the plurality of qubits, respectively. Accordingly, the step of operating on the plurality of qubits based on the second quantum unit includes: applying the second quantum gates to the plurality of qubits, respectively; applying the specified quantum gate to the plurality of qubits so that quantum entanglement is achieved between the plurality of qubits; and applying the second quantum gates to the plurality of qubits, respectively.
In this embodiment, there is a conjugate-transpose relationship between the first quantum gate and the second quantum gate. For example, if the first quantum gate is an RX gate, the second quantum gate may be the conjugate transposed quantum gate of the RX gate. Of course, the first quantum gate may also be an RY gate or an RZ gate. In the first quantum unit, the qubits can be angle-encoded by applying an RX gate to each qubit.
In this embodiment, the specified quantum circuit may further include a plurality of specified quantum gates. Operating on the plurality of qubits using the specified quantum gates may achieve quantum entanglement between the plurality of qubits. Specifically, the specified quantum gate may be a CNOT gate. For example, ring entanglement between the qubits can be achieved by multiple CNOT gates. Alternatively, CNOT gates may be used to sequentially correlate the multiple qubits with each other; that is, the quantum state of each subsequent qubit may be controlled by the quantum state of the preceding qubit via a CNOT gate.
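The ring-entanglement wiring described above can be sketched as follows. The full-matrix CNOT construction and the qubit index convention (qubit 0 as the most significant bit) are illustrative choices, not mandated by the text:

```python
import numpy as np

def cnot(n, control, target):
    """Full 2^n x 2^n CNOT matrix on an n-qubit register; qubit 0 is
    taken as the most significant bit of the basis-state index."""
    dim = 2 ** n
    u = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1      # flip target iff control is 1
        out = 0
        for b in bits:
            out = (out << 1) | b
        u[out, basis] = 1.0
    return u

# Ring entanglement: CNOTs chained q0->q1, q1->q2, ..., q(n-1)->q0,
# so each qubit's state is controlled by the preceding qubit.
n = 3
ring = np.eye(2 ** n)
for q in range(n):
    ring = cnot(n, q, (q + 1) % n) @ ring

# Each CNOT is a permutation matrix, so the composed circuit is unitary.
print(np.allclose(ring.T @ ring, np.eye(2 ** n)))  # -> True
```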
In this embodiment, referring to fig. 5, RX(γ_i) may be a first quantum gate, where γ_i may represent a parameter of the first quantum gate; different values of γ_i may be derived from the feature values of the first sample feature data. RX†(β_i) may be a second quantum gate, where β_i may represent a parameter of the second quantum gate; different values of β_i may be derived from the feature values of the second sample feature data. The two-qubit gate symbol in the figure may represent a CNOT gate. q_i may represent a qubit, c may represent a measurement bit, and i may be any integer.
In some embodiments, the first quantum gate comprises a first type quantum gate and a second type quantum gate of different types; the parameters of the first type quantum gate and the second type quantum gate are each one feature value in the first sample feature data; the feature values of the first sample feature data corresponding to different first type quantum gates and second type quantum gates are different. The step of operating on the qubits based on the first quantum unit includes: applying the first type quantum gates to the plurality of qubits, respectively; applying the second type quantum gates to the plurality of qubits, respectively; applying the specified quantum gate to the plurality of qubits to cause quantum entanglement between the plurality of qubits; and applying the first type quantum gates to the plurality of qubits, respectively. Correspondingly, the second quantum gate comprises a third type quantum gate and a fourth type quantum gate of different types; the parameters of the third type quantum gate and the fourth type quantum gate are each one feature value in the second sample feature data; the feature values of the second sample feature data corresponding to different third type quantum gates and fourth type quantum gates are different; the third type quantum gate is a conjugate transposed quantum gate of the first type quantum gate, and the fourth type quantum gate is a conjugate transposed quantum gate of the second type quantum gate. The step of operating on the plurality of qubits based on the second quantum unit includes: applying the third type quantum gates to the plurality of qubits, respectively; applying the fourth type quantum gates to the plurality of qubits, respectively; applying the specified quantum gate to the plurality of qubits to cause quantum entanglement between the plurality of qubits; and applying the third type quantum gates to the plurality of qubits, respectively.
In this embodiment, the first quantum gates in the first quantum unit may include first type quantum gates and second type quantum gates of different types. For example, the first type quantum gate may be an RZ gate, and the second type quantum gate may be an RY gate. Accordingly, the second quantum gates in the second quantum unit may also include third type quantum gates and fourth type quantum gates of different types. For example, the third type quantum gate may be the conjugate transposed quantum gate of the RZ gate, and the fourth type quantum gate may be the conjugate transposed quantum gate of the RY gate.
In this embodiment, referring to fig. 6, each qubit may be acted on by an H gate before the plurality of qubits are angle-encoded. Accordingly, each qubit may also be acted on by an H gate before the qubits are measured. Wherein RZ(γ_i) may represent an RZ gate with parameter γ_i and may be a first type quantum gate; RY(γ_i) may represent an RY gate with parameter γ_i and may be a second type quantum gate; RZ†(β_i) may represent the conjugate transposed quantum gate of an RZ gate with parameter β_i and may be a third type quantum gate; RY†(β_i) may represent the conjugate transposed quantum gate of an RY gate with parameter β_i and may be a fourth type quantum gate; and H may represent an H gate. Different values of γ_i may be derived from the feature values of the first sample feature data, and different values of β_i may be derived from the feature values of the second sample feature data.
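The relationship between the encoding gates and their conjugate transposes can be checked on a single qubit. Treating H followed by RZ(γ) and RY(γ) as the per-qubit encoding is a simplification of the Fig. 6 description for illustration:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def rz(theta):
    """Single-qubit RZ rotation gate."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# Per-qubit encoding half: H, then RZ(gamma), then RY(gamma).
gamma = 0.8
u = ry(gamma) @ rz(gamma) @ H

# The decoding half applies the conjugate transposes of the same gates in
# reverse order, so with identical parameters the circuit undoes itself.
undo = (ry(gamma) @ rz(gamma) @ H).conj().T @ u
print(np.allclose(undo, np.eye(2)))  # -> True
```

This is exactly why equal first and second sample feature data yield the maximum similarity: the second quantum unit cancels the first, leaving the register in the all-zero state with probability 1.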
In some embodiments, multiple layers of first quantum units and second quantum units may be included in a given quantum circuit implementing a quantum kernel function. The use of multiple layers of first and second quantum units may increase the expressive power of the quantum vector machine model, allowing the quantum vector machine model to better capture nonlinear relationships between sample feature data.
The present specification also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer, causes the computer to perform the functions of the index generation system, the training system of the index generation model, or the quantum computing system of any of the above embodiments.
The present description also provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the functions of the index generation system, the training system of the index generation model, or the quantum computing system of any of the above embodiments.
Referring to fig. 7, the present disclosure may provide an electronic device, including: a memory, and one or more processors communicatively coupled to the memory; the memory has stored therein instructions executable by the one or more processors to cause the one or more processors to implement the method of any of the embodiments described above.
In some embodiments, the electronic device may include a processor, a non-volatile storage medium, an internal memory, a communication interface, a display device, and an input device connected by a system bus. The non-volatile storage medium may store an operating system and associated computer programs.
User information or user account information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, etc.) referred to in various embodiments of the present description are information and data that are authorized by the user or sufficiently authorized by the parties, and the collection, use, and processing of relevant data requires compliance with relevant legal regulations and standards of the relevant countries and regions, and is provided with corresponding operation portals for the user to select authorization or denial.
It will be appreciated that the specific examples herein are intended only to assist those skilled in the art in better understanding the embodiments of the present specification, and are not intended to limit the scope of the present specification.
It should be understood that, in the various embodiments of the present specification, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present specification.
It will be appreciated that the various embodiments described in this specification may be implemented either alone or in combination, and are not limited in this regard.
Unless defined otherwise, all technical and scientific terms used in the embodiments of this specification have the same meaning as commonly understood by one of ordinary skill in the art to which this specification belongs. The terminology used in the description is for the purpose of describing particular embodiments only and is not intended to limit the scope of the description. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be appreciated that the processor of the embodiments of the present specification may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in software form. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present specification. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present specification may be embodied directly as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It will be appreciated that the memory in the embodiments of this specification may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), or a flash memory, among others. The volatile memory may be Random Access Memory (RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present specification.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system, apparatus and unit may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this specification, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present specification may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present specification, in essence, or the part thereof contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present specification. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely specific embodiments of the present specification, but the protection scope of the present specification is not limited thereto. Any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present specification. Therefore, the protection scope shall be subject to the protection scope of the claims.
Claims (10)
1. An index generation system comprising a memory storing a computer program and a processor configured to implement the following steps when executing the computer program:
generating, based on first data generated in a first time period, primary index values of second data for a target index in a second time period by using a plurality of primary index generation models respectively; wherein the data types of the first data and the second data are the same; the plurality of primary index generation models comprise a plurality of vector machine models constructed through a support vector machine; the kernel functions used in training differ for at least some of the vector machine models; the output value of the kernel function of at least one vector machine model is obtained through the operation of a specified quantum circuit; and wherein the end time of the second time period is later than the end time of the first time period;
and determining a target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models.
2. The system of claim 1, wherein the step of generating primary index values for the target index for the second data over the second time period using the plurality of primary index generation models, respectively, based on the first data generated over the first time period, comprises:
acquiring a plurality of specified index values of the first data for a plurality of specified indexes in the first time period;
respectively inputting the plurality of designated index values into the plurality of primary index generation models to obtain primary index values of second data aiming at the target index in a second time period fed back by the plurality of primary index generation models respectively; wherein the end time of the second time period is later than the end time of the first time period.
3. The system of claim 1, wherein the range of values of the target indicator is formed from a plurality of different candidate indicator values; the primary index value is one of a plurality of candidate index values; the step of determining a target index value of the target index based on the plurality of primary index values respectively generated by the plurality of primary index generation models includes:
and based on the times that the candidate index values are respectively generated by the plurality of primary index generation models, determining the candidate index value with the largest generation times as the target index value of the target index.
4. A training system for an index generation model, wherein the index generation model comprises a plurality of primary index generation models; the index generation model generates primary index values of second data aiming at target indexes in a second time period based on first data generated in a first time period by using a plurality of primary index generation models respectively so as to determine the target index values of the target indexes based on the plurality of primary index values generated by the plurality of primary index generation models respectively; wherein the data types of the first data and the second data are the same; the plurality of primary index generation models comprise a plurality of vector machine models constructed by a support vector machine; at least part of the vector machine models are different in kernel functions used in training; the output value of the kernel function of at least one vector machine model is obtained through the operation of a specified quantum circuit;
The training system of the index generation model comprises a memory storing a computer program and a processor configured to implement the following steps when executing the computer program:
and respectively training the plurality of primary index generation models by using the constructed training samples, and forming the index generation models by the plurality of trained primary index generation models.
5. The system of claim 4, wherein a vector machine model whose kernel function is implemented by the specified quantum circuit is a quantum vector machine model; the step of training the plurality of primary index generation models by using the constructed training samples respectively comprises:
inputting the sample characteristic data into the quantum vector machine model, so that when the quantum vector machine model is operated, a quantum computing system is called to calculate the output value of the kernel function for the sample characteristic data according to the specified quantum circuit, to obtain an initial index value corresponding to the quantum vector machine model;
and adjusting parameters of the quantum vector machine model based on the difference between the initial index value fed back by the quantum vector machine model and the target index value until the difference meets the specified condition, and obtaining the quantum vector machine model after training.
6. A quantum computing system, the quantum computing system comprising a base computing unit and a quantum computing unit; the quantum computing system is used for training a quantum vector machine model; wherein the quantum vector machine model is an index generation model comprising at least one of a plurality of primary index generation models; the plurality of primary index generation models comprise a plurality of vector machine models constructed by a support vector machine; at least part of the vector machine models are different in kernel functions used in training; the output value of the kernel function of the quantum vector machine model is obtained through specified quantum circuit operation; the index generation model generates primary index values of second data aiming at target indexes in a second time period based on first data generated in a first time period by using a plurality of primary index generation models respectively so as to determine the target index values of the target indexes based on the plurality of primary index values generated by the plurality of primary index generation models respectively; wherein the data types of the first data and the second data are the same;
the basic calculation unit is used for receiving sample characteristic data provided by a training system of the index generation model when the quantum vector machine model is trained, and sending the sample characteristic data to the quantum calculation unit;
The quantum computing unit includes a plurality of qubits; the quantum computing unit is used for acting quantum gates corresponding to the sample characteristic data on the plurality of quantum bits according to a specified quantum circuit so as to obtain the similarity between the sample characteristic data based on the measurement results of the plurality of quantum bits and serve as an output value of a kernel function of the quantum vector machine model.
7. The system of claim 6, wherein the sample characteristic data comprises first sample characteristic data and second sample characteristic data; the quantum computing unit is configured to apply a quantum gate corresponding to the sample feature data to the plurality of qubits according to a specified quantum line, so as to obtain a similarity between the sample feature data based on measurement results of the plurality of qubits, and includes:
operating the plurality of qubits based on the first quantum unit; wherein the first quantum unit comprises a plurality of first quantum gates; the parameters of the first quantum gate are determined through the characteristic values of the first sample characteristic data;
operating the plurality of qubits based on the second quantum unit; wherein the second quantum unit comprises a plurality of second quantum gates; the parameters of the second quantum gate are determined through the characteristic values of the second sample characteristic data;
And measuring the probability that the qubit is in a specified quantum state as the similarity between the first sample characteristic data and the second sample characteristic data.
8. The system of claim 7, wherein the second quantum gate is a conjugate transposed quantum gate of the first quantum gate; the parameters of the plurality of first quantum gates respectively correspond to one characteristic value of the first sample characteristic data; the characteristic values of the first sample characteristic data corresponding to different first quantum gates are different; the parameters of the plurality of second quantum gates respectively correspond to one characteristic value of the second sample characteristic data; the characteristic values of the second sample characteristic data corresponding to different second quantum gates are different;
a step of operating on the qubit based on a first quantum unit, comprising: encoding the characteristic values of the first sample characteristic data to the plurality of quantum bits through an angle encoding method based on the first quantum gate; applying a specified quantum gate to the plurality of qubits such that quantum entanglement is achieved between the plurality of qubits; respectively acting the first quantum gate on the plurality of quantum bits;
Accordingly, the step of applying a second quantum unit to the plurality of qubits comprises: respectively acting the second quantum gate on the plurality of quantum bits; applying the specified quantum gate to the plurality of qubits such that quantum entanglement is achieved between the plurality of qubits; and respectively applying the second quantum gate to the plurality of quantum bits.
9. The system of claim 7, the first quantum gate comprising a first type quantum gate and a second type quantum gate of different types; the parameters of the first type quantum gate and the second type quantum gate are one characteristic value in the first sample characteristic data; the characteristic values of the first sample characteristic data corresponding to the different first type quantum gates and the second type quantum gates are different;
a step of operating on the qubit based on a first quantum unit, comprising: respectively acting the first type quantum gate on the plurality of quantum bits; respectively acting the second type quantum gate on the plurality of quantum bits; applying the specified quantum gate to the plurality of qubits to cause quantum entanglement between the plurality of qubits; respectively acting the first type quantum gate on the plurality of quantum bits;
Correspondingly, the second quantum gate comprises a third type quantum gate and a fourth type quantum gate of different types; the parameters of the third type quantum gate and the fourth type quantum gate are each one feature value in the second sample feature data; the characteristic values of the second sample characteristic data corresponding to the different third type quantum gates and fourth type quantum gates are different; the third type quantum gate is a conjugate transposed quantum gate of the first type quantum gate; the fourth type quantum gate is a conjugate transposed quantum gate of the second type quantum gate;
a step of manipulating the plurality of qubits based on the second quantum unit, comprising: respectively acting the third type quantum gate on the plurality of quantum bits; respectively acting the fourth type quantum gate on the plurality of quantum bits; applying the specified quantum gate to the plurality of qubits to cause quantum entanglement between the plurality of qubits; the third type quantum gate is applied to the plurality of qubits, respectively.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which when executed by at least one processor, implements the functions of the index generation system of any one of claims 1-3, or the training system of the index generation model of any one of claims 4-5, or the functions of the quantum computing system of any one of claims 6-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311559013.8A CN117574285A (en) | 2023-11-21 | 2023-11-21 | Index generating system and related device thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311559013.8A CN117574285A (en) | 2023-11-21 | 2023-11-21 | Index generating system and related device thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117574285A true CN117574285A (en) | 2024-02-20 |
Family
ID=89860180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311559013.8A Pending CN117574285A (en) | 2023-11-21 | 2023-11-21 | Index generating system and related device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117574285A (en) |
- 2023-11-21 CN CN202311559013.8A patent/CN117574285A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE112021004908T5 (en) | COMPUTER-BASED SYSTEMS, COMPUTATION COMPONENTS AND COMPUTATION OBJECTS SET UP TO IMPLEMENT DYNAMIC OUTLIVER DISTORTION REDUCTION IN MACHINE LEARNING MODELS | |
CN107730087A (en) | Forecast model training method, data monitoring method, device, equipment and medium | |
US20210303970A1 (en) | Processing data using multiple neural networks | |
CN110852881B (en) | Risk account identification method and device, electronic equipment and medium | |
KR102308751B1 (en) | Method for prediction of precipitation based on deep learning | |
CN110751557A (en) | Abnormal fund transaction behavior analysis method and system based on sequence model | |
CN111435463A (en) | Data processing method and related equipment and system | |
US11521214B1 (en) | Artificial intelligence payment timing models | |
CN115759413B (en) | Meteorological prediction method and device, storage medium and electronic equipment | |
Figini et al. | Bayesian operational risk models | |
Heaton et al. | Explainable AI via learning to optimize | |
CN116777591A (en) | Training method of repayment capability prediction model, repayment capability prediction method and repayment capability prediction device | |
US20200160200A1 (en) | Method and System for Predictive Modeling of Geographic Income Distribution | |
CN118096170A (en) | Risk prediction method and apparatus, device, storage medium, and program product | |
CN117113222A (en) | Data analysis model generation method and device and electronic equipment | |
WO2023229474A1 (en) | Methods, systems and computer program products for determining models for predicting reoccurring transactions | |
CN117574285A (en) | Index generating system and related device thereof | |
CN113723712B (en) | Wind power prediction method, system, equipment and medium | |
CN115907954A (en) | Account identification method and device, computer equipment and storage medium | |
CN114219184A (en) | Product transaction data prediction method, device, equipment, medium and program product | |
Tomar | A critical evaluation of activation functions for autoencoder neural networks | |
Petrlik et al. | Multiobjective selection of input sensors for svr applied to road traffic prediction | |
Aggarwal | Research on Anomaly Detection in Time Series: Exploring United States Exports and Imports Using Long Short-Term Memory | |
Lee et al. | A Comparison of Machine Learning Models for Currency Pair Forecasting | |
US20240119470A1 (en) | Systems and methods for generating a forecast of a timeseries |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||