CN117993150A - Agent model training method, target object optimizing method and related devices

Agent model training method, target object optimizing method and related devices

Info

Publication number
CN117993150A
Authority
CN
China
Prior art keywords
target object
target
proxy model
quantum
obtaining
Prior art date
Legal status
Pending
Application number
CN202211353203.XA
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee
Benyuan Quantum Computing Technology Hefei Co ltd
Original Assignee
Benyuan Quantum Computing Technology Hefei Co ltd
Priority date
Filing date
Publication date
Application filed by Benyuan Quantum Computing Technology Hefei Co ltd
Priority to CN202211353203.XA
Publication of CN117993150A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 90/00: Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation


Abstract

The invention discloses a proxy model training method, a target object optimizing method and a related device, wherein the method comprises the following steps: sampling a target object to obtain sample points; obtaining a target linear equation set for the target object based on the sample points; constructing a variable component sub-circuit for solving the target linear equation set; obtaining a calculation result by utilizing the variable component sub-circuit; and training the proxy model by using the sample points and the calculation result until a trained proxy model is obtained. By using the embodiment of the invention, the data required by the proxy model training is obtained through quantum computation, and the computation speed is improved, so that the model training efficiency is improved.

Description

Agent model training method, target object optimizing method and related devices
Technical Field
The invention belongs to the technical field of quantum computing, and in particular relates to a proxy model training method, a target object optimizing method and related devices.
Background
Most engineering design problems require simulation experiments to evaluate the objective function and constraint functions under different design parameters. For example, to find the optimal aerodynamic profile of an aircraft, the airflow around the aircraft is typically simulated for different shape parameters (length, curvature, material, etc.). For many practical problems, such design optimization requires thousands or even millions of simulations of the original model, and a single simulation may take minutes, hours, or even days to complete; directly simulating the original model is therefore very time consuming.
One way to improve this is to use a proxy model instead of the original model: the proxy model's computation results are very close to those of the original model, while the computational effort of evaluating it is small. A trained proxy model can then replace the original model. Training the proxy model, however, requires a large number of calculation results corresponding to sample points, and obtaining these calculation results demands a very large amount of computation, so the computation time is high and the model training efficiency is low.
Disclosure of Invention
The invention aims to provide a proxy model training method, a target object optimizing method and a related device, which are used for obtaining data required by proxy model training through quantum computation and improving the computation speed so as to improve the model training efficiency.
One embodiment of the application provides a proxy model training method, which comprises the following steps:
sampling a target object to obtain sample points;
obtaining a target linear equation set for the target object based on the sample points;
Constructing a variable component sub-circuit for solving the target linear equation set;
obtaining a calculation result by utilizing the variable component sub-circuit;
and training the proxy model by using the sample points and the calculation result until a trained proxy model is obtained.
Optionally, the obtaining, based on the sample point, a target linear equation set for the target object includes:
acquiring initial conditions, boundary conditions, partial differential equation sets to be solved and calculation domains thereof of the target object, wherein the calculation domains are determined by utilizing the sample points;
discretizing the initial condition, the boundary condition and the calculation domain to obtain a discretized algebraic equation set corresponding to the partial differential equation set to be solved;
And determining a target linear equation set according to the discretized algebraic equation set.
Optionally, the constructing a variable component sub-line for solving the target linear equation set includes:
and respectively constructing a first sub-quantum circuit and a second sub-quantum circuit to form a variable component sub-circuit, wherein the first sub-quantum circuit is used for forming a sub-quantum state containing an approximate solution of the target linear equation set, and the second sub-quantum circuit is used for acquiring a value corresponding to a loss function, and the loss function is constructed based on the approximate solution.
Optionally, the obtaining a calculation result by using the variable component sub-line includes:
operating and measuring the variable component sub-line to obtain an approximate solution of the target linear equation set;
judging whether a value corresponding to the loss function meets convergence accuracy or not based on the approximate solution;
If yes, the approximate solution is used as the calculation result; otherwise, the variation parameters contained in the variable component sub-circuit are updated, and the approximate solution of the target linear equation set corresponding to the updated variation parameters is obtained again, until an approximate solution whose corresponding loss-function value meets the convergence accuracy is obtained.
Optionally, the target parameters required for the proxy model training are obtained by using a quantum algorithm.
Optionally, training the proxy model by using the sample points and the calculation result until a trained proxy model is obtained, including:
Inputting the sample points into a proxy model to obtain a predicted value;
obtaining model precision based on the calculation result and the predicted value;
judging whether the model precision accords with a training termination condition or not;
If not, adding a new sample point, obtaining a corresponding calculation result by using the variable component sub-line based on the new sample point, and returning to the step of inputting the sample point into the proxy model to obtain a predicted value until a trained proxy model is obtained.
Still another embodiment of the present application provides a target object optimization method, including:
obtaining shape information of a parametrically modeled target object;
Inputting the shape information of the target object into a proxy model trained by the proxy model training method provided by the application to obtain a predicted value;
Based on the predicted value, optimizing information is obtained by utilizing an optimizing algorithm;
updating the target object by using the optimization information;
and obtaining the optimal target object based on the target objects before and after updating.
Optionally, the target object is an aerodynamic shape of an aircraft.
Optionally, the optimization algorithm is a quantum genetic algorithm.
Optionally, the obtaining the optimal target object based on the target objects before and after updating includes:
comparing the target objects before and after updating, and judging whether the optimization is terminated;
if so, determining an optimal target object based on the target objects before and after updating;
If not, determining a target object to be updated, reselecting optimization data from the optimization information, updating the target object to be updated by utilizing the optimization data, and returning to execute the step of inputting the appearance information of the target object into a proxy model trained by the proxy model training method provided by the application to obtain a predicted value.
Yet another embodiment of the present application provides a proxy model training apparatus, including:
The first acquisition module is used for sampling the target object to acquire sample points;
A second obtaining module, configured to obtain a target linear equation set for the target object based on the sample point;
The construction module is used for constructing a variable component sub-line for solving the target linear equation set;
the third obtaining module is used for obtaining a calculation result by utilizing the variable component sub-circuit;
And the training module is used for training the proxy model by utilizing the sample points and the calculation result until a trained proxy model is obtained.
Optionally, the second obtaining module is specifically configured to:
acquiring initial conditions, boundary conditions, partial differential equation sets to be solved and calculation domains thereof of the target object, wherein the calculation domains are determined by utilizing the sample points;
discretizing the initial condition, the boundary condition and the calculation domain to obtain a discretized algebraic equation set corresponding to the partial differential equation set to be solved;
And determining a target linear equation set according to the discretized algebraic equation set.
Optionally, the construction module is specifically configured to:
and respectively constructing a first sub-quantum circuit and a second sub-quantum circuit to form a variable component sub-circuit, wherein the first sub-quantum circuit is used for forming a sub-quantum state containing an approximate solution of the target linear equation set, and the second sub-quantum circuit is used for acquiring a value corresponding to a loss function, and the loss function is constructed based on the approximate solution.
Optionally, the third obtaining module is specifically configured to:
operating and measuring the variable component sub-line to obtain an approximate solution of the target linear equation set;
judging whether a value corresponding to the loss function meets convergence accuracy or not based on the approximate solution;
If yes, the approximate solution is used as the calculation result; otherwise, the variation parameters contained in the variable component sub-circuit are updated, and the approximate solution of the target linear equation set corresponding to the updated variation parameters is obtained again, until an approximate solution whose corresponding loss-function value meets the convergence accuracy is obtained.
Optionally, the target parameters required for the proxy model training are obtained by using a quantum algorithm.
Optionally, the training module is specifically configured to:
Inputting the sample points into a proxy model to obtain a predicted value;
obtaining model precision based on the calculation result and the predicted value;
judging whether the model precision accords with a training termination condition or not;
If not, adding a new sample point, obtaining a corresponding calculation result by using the variable component sub-line based on the new sample point, and returning to the step of inputting the sample point into the proxy model to obtain a predicted value until a trained proxy model is obtained.
Still another embodiment of the present application provides a target object optimizing apparatus, including:
the fourth obtaining module is used for obtaining the appearance information of the parameterized modeling target object;
a fifth obtaining module, configured to input the shape information of the target object into a proxy model trained by using the proxy model training method provided by the present application, to obtain a predicted value;
a sixth obtaining module, configured to obtain optimization information by using an optimization algorithm based on the predicted value;
The updating module is used for updating the target object by utilizing the optimization information;
and a seventh obtaining module, configured to obtain an optimal target object based on the target objects before and after updating.
Optionally, the target object is an aerodynamic shape of an aircraft.
Optionally, the optimization algorithm is a quantum genetic algorithm.
Optionally, the seventh obtaining module is specifically configured to:
comparing the target objects before and after updating, and judging whether the optimization is terminated;
if so, determining an optimal target object based on the target objects before and after updating;
If not, determining a target object to be updated, reselecting optimization data from the optimization information, updating the target object to be updated by utilizing the optimization data, and returning to execute the fifth obtaining module.
One embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the proxy model training method or the target object optimization method.
An embodiment of the present application provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores a computer program, and the computer program implements the proxy model training method or the target object optimization method when executed by the processor.
The proxy model training method provided by the invention first samples a target object to obtain sample points; then obtains a target linear equation set for the target object based on the sample points; constructs a variable component sub-circuit for solving the target linear equation set and obtains a calculation result with that circuit; and finally trains the proxy model with the sample points and the calculation result until a trained proxy model is obtained. The data required for training the proxy model are obtained through quantum computation, which improves the computation speed and thus the model training efficiency.
Drawings
FIG. 1 is a hardware block diagram of a computer terminal of a proxy model training method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a proxy model training method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a target object optimization method according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a proxy model training device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a target object optimizing apparatus according to an embodiment of the present invention.
Detailed Description
The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
The embodiment of the invention firstly provides a proxy model training method which can be applied to electronic equipment such as computer terminals, in particular to common computers, quantum computers and the like.
The quantum computer is a kind of physical device which performs high-speed mathematical and logical operation, stores and processes quantum information according to the law of quantum mechanics. When a device processes and calculates quantum information and operates on a quantum algorithm, the device is a quantum computer. Quantum computers are a key technology under investigation because of their ability to handle mathematical problems more efficiently than ordinary computers, for example, to accelerate the time to crack RSA keys from hundreds of years to hours.
The following describes the operation of the computer terminal in detail by taking it as an example. Fig. 1 is a hardware block diagram of a computer terminal of a proxy model training method according to an embodiment of the present invention. As shown in fig. 1, the computer terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the configuration shown in fig. 1 is merely illustrative and is not intended to limit the configuration of the computer terminal described above. For example, the computer terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the proxy model training method in the embodiment of the present application, and the processor 102 executes the software programs and modules stored in the memory 104 to perform various functional applications and data processing, i.e., implement the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the computer terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of a computer terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
Quantum computing is a novel computing mode in which quantum information units are manipulated and made to compute according to the laws of quantum mechanics. Its most basic principle is the quantum-mechanical superposition of states, which allows the state of a quantum information unit to be a superposition of multiple possibilities, so that quantum information processing has greater potential efficiency than classical information processing. A quantum system comprises a number of particles that move according to the laws of quantum mechanics; the system occupies a certain quantum state in its state space, and for chemical molecules quantum chemical simulation can be realized, providing research support for quantum computing.
It should be noted that a real quantum computer is a hybrid structure, which includes two major parts: part of the computers are classical computers and are responsible for performing classical computation and control; the other part is quantum equipment, which is responsible for running quantum programs so as to realize quantum computation. The quantum program is a series of instruction sequences written in a quantum language such as QRunes language and capable of running on a quantum computer, so that the support of quantum logic gate operation is realized, and finally, quantum computing is realized. Specifically, the quantum program is a series of instruction sequences for operating the quantum logic gate according to a certain time sequence.
In practical applications, because the development of quantum device hardware is still limited, quantum computing simulation is often required to verify quantum algorithms, quantum applications, and the like. Quantum computing simulation is the process of running, in simulation, the quantum program corresponding to a specific problem by means of a virtual architecture (i.e., a quantum virtual machine) built from the resources of an ordinary computer. In general, a quantum program corresponding to the specific problem needs to be constructed. The quantum program is a program, written in a classical language, that represents qubits and their evolution, in which the qubits, quantum logic gates and other elements related to quantum computation are all represented by corresponding classical code.
A quantum circuit, also called a quantum logic circuit, is one embodiment of a quantum program and the most commonly used general quantum computing model. It represents a circuit that operates on qubits under an abstract concept; its components include qubits, circuits (timelines), and various quantum logic gates, and the results usually need to be read out by quantum measurement operations.
Unlike a conventional circuit, which is connected by metal wires that carry voltage or current signals, in a quantum circuit the connections can be seen as being made by time: the state of a qubit evolves naturally over time, as dictated by the Hamiltonian operator, and during this evolution it is operated on whenever a logic gate is encountered.
A quantum program corresponds to a total quantum circuit, and in this application the quantum program refers to that total quantum circuit; the total number of qubits in the total quantum circuit is the same as the total number of qubits of the quantum program. It can be understood as follows: a quantum program may consist of a quantum circuit, measurement operations on the qubits in the quantum circuit, registers to hold the measurement results, and control-flow nodes (jump instructions), and one quantum circuit may contain tens to hundreds or even thousands of quantum logic gate operations. The execution of a quantum program is the process of executing all the quantum logic gates in a certain time sequence, where the time sequence is the order in which the individual quantum logic gates are executed.
It should also be noted that the present invention relates to a quantum computer. In a common silicon-chip-based computing device, the unit of the processing chip is the CMOS transistor; such a computing unit is not limited by time or decoherence, i.e., it is not limited in how long it can be used and is always ready to use. Moreover, the number of such computing units in a silicon chip is currently sufficient, i.e., one chip contains thousands of computing units, and the computational logic selectable for a CMOS transistor is fixed, for example AND logic. When CMOS transistors are used for computation, a large number of them are combined with the limited logic functions to realize the computation.
Unlike such logic units in conventional computing devices, in current quantum computers the basic computing unit is the qubit, whose use is limited by decoherence, i.e., by the coherence time: a qubit is limited in how long it can be used and is not always ready to use. Making full use of qubits within their usable lifetime is a critical challenge for quantum computing, and this challenge is related to the number of qubits in a quantum computer. The number of qubits is one of the representative indicators of a quantum computer's performance. Each qubit realizes a computing function through logic functions configured as needed; in view of the limited number of qubits, the logic functions in the field of quantum computing are diversified, for example: Hadamard gates (H gates), Pauli-X gates (X gates), Pauli-Y gates (Y gates), Pauli-Z gates (Z gates), RX gates, RY gates, RZ gates, CNOT gates, CR gates, iSWAP gates, Toffoli gates, and the like. A quantum logic gate is typically represented by a unitary matrix, which is not only a matrix form but also an operation and transformation: the effect of a quantum logic gate on a quantum state is calculated by multiplying the unitary matrix by the column vector corresponding to the quantum state. During quantum computation, the computation is realized by combining the limited qubits with various combinations of logic functions.
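For illustration, the action of a quantum logic gate as a unitary matrix can be reproduced numerically; the following minimal NumPy sketch (independent of any particular quantum SDK, and not part of the patent's own circuits) applies a Hadamard gate and a parameterized RY rotation to a single-qubit state vector:

```python
import numpy as np

# Single-qubit state |0> as a column vector.
state = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate and a parameterized RY rotation, written as unitary matrices.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def ry(theta: float) -> np.ndarray:
    """RY(theta) single-qubit rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# A quantum logic gate acts on a state by matrix-vector multiplication.
state = H @ state          # put the qubit into an equal superposition
state = ry(0.3) @ state    # apply a parameterized rotation

# Measurement probabilities follow the Born rule |amplitude|^2.
probabilities = np.abs(state) ** 2
print(probabilities)
```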
Based on these differences of the quantum computer, the design of the logic function on the quantum bits (including the design of whether the quantum bits are used or not and the design of the use efficiency of each quantum bit) is a key for improving the operation performance of the quantum computer, and special design is required. The above design for qubits is a technical problem that is not considered nor faced by common computing devices. Based on the technical problem that the efficiency of model training is low, the application provides a proxy model training method, a target object optimizing method and a related device, and aims to achieve the technical effect of improving the proxy model training efficiency through quantum computing.
Referring to fig. 2, fig. 2 is a schematic flow chart of a proxy model training method according to an embodiment of the present invention, which may include the following steps:
S201: and sampling the target object to obtain sample points.
The target object is the object to be optimized, for example the aerodynamic profile of an aircraft wing. The sampling method can be Latin hypercube sampling, uniform design, orthogonal experimental design, or the like. Specifically, the sample points obtained with the selected sampling method are the data of the target object required for training the proxy model, and may be coordinate points or the like.
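For illustration only, this sampling step might be sketched as follows, assuming a two-parameter shape and SciPy's Latin hypercube sampler; the parameter bounds and sample counts are hypothetical, not values from the patent:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical design space: two shape parameters (e.g. a length and a curvature).
lower_bounds = [0.5, 0.0]
upper_bounds = [2.0, 0.3]

# Latin hypercube sampling in the unit cube, then scaled to the design space.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=20)                      # 20 sample points in [0, 1]^2
sample_points = qmc.scale(unit_samples, lower_bounds, upper_bounds)
print(sample_points.shape)                               # (20, 2)
```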
S202: based on the sample points, a target linear system of equations for the target object is obtained.
The sample points obtained by sampling can be used to obtain the target linear equation set of the target object. The sample points may be processed and the processed data used to obtain the target linear equation set, the sample points may be used directly as elements of the target linear equation set, or the sample points may be converted into the data required by the target linear equation set in other ways.
In some possible embodiments of the present invention, the obtaining a target linear equation set for the target object based on the sample point may include:
acquiring initial conditions, boundary conditions, partial differential equation sets to be solved and calculation domains thereof of the target object, wherein the calculation domains are determined by utilizing the sample points;
discretizing the initial condition, the boundary condition and the calculation domain to obtain a discretized algebraic equation set corresponding to the partial differential equation set to be solved;
And determining a target linear equation set according to the discretized algebraic equation set.
Specifically, the system of linear equations to be solved may be obtained from a system of partial differential equations through a discretization method. If the partial derivatives of a multivariate function appear in a system of differential equations, that is, if the unknown function depends on several variables and its derivatives with respect to several variables appear in the equations, then the system is a system of partial differential equations. Partial differential equations are an important branch of modern mathematics and are used, both in theory and in practice, to describe problems in mechanics, control processes, ecological and economic systems, chemical circulatory systems, epidemiology, and other fields.
A partial differential equation generally has many solutions, but when solving a specific physical problem the desired solution must be singled out from among them; therefore additional conditions, namely the initial conditions, the boundary conditions, and the calculation domain, must be known.
The sample points determine the geometry of the target object, and the calculation domain can be obtained from the determined geometry. To solve the partial differential equation set, the calculation region must first be discretized: the spatially continuous calculation region is divided into a number of sub-regions, nodes are determined in each region to generate a grid, and the partial differential equation set is then discretized on the grid, i.e., converted into an algebraic equation set at each node. Different types of discretization methods, such as the finite difference method and the finite volume method, arise from different assumptions about the distribution of the dependent variables between nodes and different ways of deriving the discrete equations.
Illustratively, the partial differential equation set can be discretized with the finite difference method: the calculation domain is replaced by a grid of a finite number of discrete points, the functions of continuous variables on the calculation domain are replaced by their values on the grid, and the discretized algebraic equation set, i.e., the finite difference equation set, is obtained.
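As a hedged illustration of this step (using a far simpler equation than the flow equations discussed below), a one-dimensional Poisson problem u''(x) = f(x) with fixed boundary values can be discretized by central finite differences into a linear system A u = b of the kind the quantum circuit is later asked to solve; the grid size and right-hand side here are arbitrary:

```python
import numpy as np

def poisson_1d_system(n: int, f, u_left: float, u_right: float):
    """Discretize u''(x) = f(x) on [0, 1] with n interior grid nodes.

    Returns the coefficient matrix A and right-hand side b of A u = b,
    where u holds the unknown values at the interior nodes.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)

    # Second-order central difference: (u_{i-1} - 2 u_i + u_{i+1}) / h^2 = f(x_i)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2

    b = f(x)
    b[0] -= u_left / h**2        # boundary values move to the right-hand side
    b[-1] -= u_right / h**2
    return A, b

A, b = poisson_1d_system(n=50, f=lambda x: np.sin(np.pi * x), u_left=0.0, u_right=0.0)
u = np.linalg.solve(A, b)        # classical solve here; the patent targets a quantum solver
```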
Specifically, the discretized algebraic equation set is subjected to a linear transformation method to obtain a target linear equation set.
Illustratively, taking the linear transformation in the finite volume method as an example, the specific process of determining the target linear equation set from the discretized algebraic equation set is as follows:
Taking the steady-state, incompressible Navier-Stokes equation as an example:

∇·(UU) = -(1/ρ)∇p + ∇·(ν∇U) + g

where U is the velocity vector (unknown on each grid cell), ρ is the density, ∇p is the pressure gradient, ν is the viscosity, ∇U is the velocity gradient, and g is the gravity term.
Discretizing with a three-dimensional polyhedral grid, and in order to solve the Navier-Stokes equation with the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, the equation needs to be rewritten in the following matrix form:
AU = b
where A is the coefficient matrix, U is the vector of unknown velocities on each grid cell, and b is the right-hand side (known vector).
In the second-order finite volume method, the flow variables (P, T, U) vary along the grid lines. These flow variables, including pressure, temperature, velocity, etc., are stored at the cell centers P; the adjacent cells, typically M adjacent cells per cell, where the flow variables are likewise stored, must also be taken into account.
The Navier-Stokes equation above is integrated over the control volume around the cell center P to obtain:

∫_V [∇·(UU)] dV = ∫_V [-(1/ρ)∇p] dV + ∫_V [∇·(ν∇U)] dV + ∫_V [g] dV

where V is the cell volume.
Because integration is additive, the different terms in brackets can be separated and integrated one by one: ∫_V [∇·(UU)] dV represents the convection term, ∫_V [-(1/ρ)∇p] dV represents the pressure gradient term, ∫_V [∇·(ν∇U)] dV represents the diffusion term, and ∫_V [g] dV represents the source term, i.e., the gravity term.
For the source term ∫_V [g] dV, since the gravitational acceleration g is constant, it can be taken outside the integral sign, giving:

∫_V [g] dV = g·V_P

where V_P is the volume of the grid cell.
It should be noted that, since gradient calculation exists in the convection term and the diffusion term, the following description mainly describes a method for processing the convection term, and a method for processing the diffusion term is similar, so that a description thereof will not be repeated.
The convection term is generally treated with the divergence theorem, which states that the volume integral of the divergence of any vector field is equivalent to the surface integral of that field over the enclosing surface, namely:

∫_V (∇·φ) dV = ∮_S φ·dS

Therefore the convection term and the diffusion term can be rewritten with the divergence theorem, and the volume integral becomes a surface integral, e.g. for the convection term:

∫_V [∇·(UU)] dV = ∮_S (UU)·dS

The dot product of the velocity with the unit normal (area vector) of a face is the volumetric flow out of that face, i.e.:

F_i = U_fi·S_i

The velocity U is the quantity to be solved, and the surface integral is decomposed over the faces shared with the adjacent cells, namely:

∮_S (UU)·dS = Σ_{i=1}^{M} ∫_{S_i} (UU)·dS

The change in velocity along each face is linear, so the velocity U_fi at the center of the face can be taken as an approximation of the velocity surface integral over that face:

∮_S (UU)·dS ≈ Σ_{i=1}^{M} U_fi (U_fi·S_i) = Σ_{i=1}^{M} F_i U_fi

where M is the number of adjacent cells.
The integration is thus eliminated and only a summation remains. However, since U_fi is unknown, it must be computed by interpolation, for example with the upwind scheme, the second-order/linear upwind scheme, the central differencing scheme, QUICK, or similar interpolation schemes, so that the convection term can be written as:

Σ_{i=1}^{M} F_i U_fi

As shown below, the convection term yields both diagonal terms and off-diagonal terms, which are an expression of the connectivity of the adjacent cells: after interpolation, the face velocities U_fi are expressed through the velocity U_P of the cell itself and the velocities U_i of its neighbours, so the convection term takes the form

a_P U_P + Σ_{i=1}^{M} a_i U_i

where a_P contributes to the diagonal of the coefficient matrix A and the a_i to its off-diagonal entries.
thus, each term in the Navier-Stokes equation can be integrated one by one, each term contributing differently to matrix A, and these contributions need to be added up to form a complete matrix form.
The partial differential equation set to be processed is converted into a target linear equation set by the linear conversion mode, namely: au=b.
S203: and constructing a variable component sub-line for solving the target linear equation set.
Specifically, the constructed variable component sub-circuit has the function of solving the linear equation, and can be constructed based on the characteristics of the target linear equation set, or can be constructed by utilizing a quantum linear equation solving algorithm, and of course, can be constructed in other modes, and the detailed description is omitted here.
In some possible embodiments of the present invention, the constructing a variable component sub-circuit for solving the target linear equation set includes:
and respectively constructing a first sub-quantum circuit and a second sub-quantum circuit to form a variable component sub-circuit, wherein the first sub-quantum circuit is used for forming a sub-quantum state containing an approximate solution of the target linear equation set, and the second sub-quantum circuit is used for acquiring a value corresponding to a loss function, and the loss function is constructed based on the approximate solution.
Specifically, the first sub-quantum circuit is essentially used for solving a target linear equation set, and the second sub-quantum circuit is essentially used for obtaining a judgment basis for whether the solution converges or not. The first sub-quantum circuit can obtain a sub-quantum state of an approximate solution, and the second sub-quantum circuit obtains a value corresponding to a loss function based on the sub-quantum state.
It should be noted that, the loss function may include elements, approximation solutions, variation parameters, and the like in the target linear equation set, and the value corresponding to the loss function may be a calculated value obtained by calculating the loss function, or may be a gradient of the loss function. The variation parameter is a parameter included in the variation sub-line, specifically, a parameter included in a quantum logic gate in the sub-line. By changing the variation parameters, different quantum states can be obtained, and different approximate solutions can be obtained.
S204: and obtaining a calculation result by using the variable component sub-circuit.
The variable component sub-circuit is operated, and when the operation has converged, the calculation result is obtained based on the final quantum state of the circuit. The calculation result may be data computed from the sample points, specifically a measured performance parameter of the target object; the parameters of target objects corresponding to different calculation results are very likely to be different. Taking the aerodynamic shape of an aircraft as the target object, for example, the calculation result may be the lift-to-drag ratio, and a large lift-to-drag ratio indicates a better aerodynamic shape.
In some possible embodiments of the present invention, the obtaining a calculation result using the variable component sub-line may include:
Operating and measuring the variable component sub-line to obtain an approximate solution of the target linear equation set;
judging whether a value corresponding to the loss function meets convergence accuracy or not based on the approximate solution;
If yes, the approximate solution is used as the calculation result; otherwise, the variation parameters contained in the variable component sub-circuit are updated, and the approximate solution of the target linear equation set corresponding to the updated variation parameters is obtained again, until an approximate solution whose corresponding loss-function value meets the convergence accuracy is obtained.
Before the variable component sub-circuit is measured, the information of the linear system may be input into it. One piece of information is a linear combination of S unitary matrices into which the matrix A is decomposed, so that A is encoded into the circuit; here A may be expressed as A = Σ_{s=1}^{S} l_s σ_s, where l_s are the coefficients of the linear combination and σ_s are unitary matrices (unitary operators). Another piece of information input to the variable component sub-line is a unitary matrix U_b that encodes the vector b and is used to prepare a quantum state |b> proportional to b, that is, b is normalized and encoded into the quantum circuit in the form |b> = U_b|0>. The solution of the linear system is expressed by the variational trial quantum state (wave function) |x(θ)>, and this solution is the ground state of the Hamiltonian constructed below.
Specifically, after the variable component sub-circuit is run, the final state |x(θ)> is obtained. To read out the quantum state information, the final state can be measured with a pre-constructed Hamiltonian H to obtain the expected value <x(θ)|H|x(θ)>. The key to this process is the pre-constructed Hamiltonian H, whose expected value <x(θ)|H|x(θ)> is the loss function; in the standard variational linear-solver construction this Hamiltonian takes the form H = A†(I - |b><b|)A, whose ground state is proportional to the solution of AU = b.
After obtaining the approximate solution, substituting the approximate solution into the loss function to obtain a value corresponding to the loss function.
The convergence accuracy can be set by the user according to the calculation requirements, for example 10^-6 or 0.
If the value corresponding to the loss function meets the convergence precision, the calculation is terminated, and the obtained approximate solution is the calculation result; otherwise, updating the variation parameters in the variation sub-circuit through an optimization algorithm.
Illustratively, the variation parameters are updated with gradient descent, a conventional optimization method, by the following formula:

θ_{k+1} = θ_k - β ∇_θ L(θ_k)

where k is an integer not less than 1, β is the learning rate, and ∇_θ L(θ_k) is the gradient of the loss function with respect to θ.
The updated variation parameters are then transmitted to the variable component sub-line, the evolution and measurement of the above steps are executed again, and the approximate solution and the loss-function value are updated by iterating the variation parameters, until an approximate solution whose loss-function value meets the convergence accuracy is obtained; this approximate solution is taken as the solution of the target linear equation set, i.e., the calculation result.
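The iterative procedure of steps S203-S204 can be sketched with a small classical state-vector simulation of the variable component sub-circuit (i.e., a variational quantum circuit). The example below is a toy two-qubit case with a simple hardware-efficient ansatz, the loss <x(θ)|H|x(θ)> with H = A†(I - |b><b|)A, and finite-difference gradient descent; the matrix A, the ansatz structure, and the hyperparameters are illustrative assumptions, not the patent's actual circuit:

```python
import numpy as np

# Toy 2-qubit problem data; in the patent, A and b come from the discretized PDE set.
A = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 0.0, 1.0])
b = b / np.linalg.norm(b)                        # |b> must be a normalized quantum state

def ry(theta):
    """Single-qubit RY rotation as a unitary matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def ansatz_state(theta):
    """|x(theta)> = V(theta)|00>: an RY layer, a CNOT, and another RY layer."""
    state = np.zeros(4)
    state[0] = 1.0
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state
    state = CNOT @ state
    state = np.kron(ry(theta[2]), ry(theta[3])) @ state
    return state

def loss(theta):
    """Expected value <x|H|x> with H = A^T (I - |b><b|) A (real-valued toy case)."""
    x = ansatz_state(theta)
    Ax = A @ x
    return float(Ax @ Ax - (b @ Ax) ** 2)

# Gradient descent on the variation parameters (finite-difference gradient for brevity).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=4)
beta, eps, tol = 0.05, 1e-6, 1e-8
for step in range(2000):
    base = loss(theta)
    if base < tol:                               # convergence accuracy reached
        break
    grad = np.array([(loss(theta + eps * np.eye(4)[i]) - base) / eps for i in range(4)])
    theta = theta - beta * grad

x_approx = ansatz_state(theta)                   # approximate solution, up to scale
print("final loss:", loss(theta), "state:", x_approx)
```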
S205: and training the proxy model by using the sample points and the calculation result until a trained proxy model is obtained.
In some embodiments of the invention, the proxy model may be a Kriging model, an artificial neural network (Artificial Neural Network, ANN), a radial basis function (Radial Basis Functions, RBF) model, a support vector regression model (Support Vector Machine, SVM), a polynomial regression model (Polynomial Regression, PRG), or the like.
The sample points are input into a pre-selected agent model, the agent model is continuously trained, whether the agent model is trained is judged by using a calculation result, and when the agent model is trained, the agent model can be used for carrying out operations such as optimization of a target object.
In some possible embodiments of the present invention, the target parameters required for the proxy model training may be obtained using a quantum algorithm.
When the proxy model is trained, the target parameters are first calculated from the sample points, and the proxy model containing these target parameters is then trained.
Taking a Kriging model as an example, the basic principle of the model, and the relationship between target parameters and sample points are described as follows:
The Kriging model assumes that the functional relationship between the response value and the arguments can be approximated as follows:

y = f(x)^T β + z(x)

where f(x) = [f_1(x), f_2(x), …, f_p(x)]^T is a linear combination of p given functions, called the regression model; p is the number of polynomial terms of the regression model, which may be a constant model, a first-order polynomial model, or a second-order polynomial model; and β is the corresponding vector of regression parameters. z(x) is a Gaussian random-process error with the following statistical properties:

E[z(x)] = 0
Var[z(x)] = σ^2
Cov[z(x_i), z(x_j)] = σ^2 R(θ, x_i, x_j)

where σ^2 is the variance of the random process, and R(θ, x_i, x_j) is the correlation function between any two sample points x_i and x_j, which may be a spline function or the like.
Assume that the k sample points used to construct the Kriging model are x_1, …, x_k ∈ R^n, where each x_i (i = 1, …, k) is an n-dimensional vector, and that the corresponding response values are Y = {y_1, …, y_k}^T. The correlation matrix between the sample points can then be derived:

R = [R(θ, x_i, x_j)]_{k×k}

and at a prediction point x, the correlation vector between the prediction point and the sample points is:

r(x) = [R(θ, x, x_1), …, R(θ, x, x_k)]^T

In addition, based on the sample points, the following coefficient (regression) matrix can be obtained:

F = [f(x_1), f(x_2), …, f(x_k)]^T

The response value at the point to be predicted is taken as a linear weighting of the response values of the sample points:

ŷ(x) = ω(x)^T Y

where ω(x) = (ω_1, ω_2, …, ω_k)^T is the weight coefficient vector.
The error between the predicted value and the true value at the point to be predicted is:

ŷ(x) - y(x) = ω(x)^T Y - y(x) = ω(x)^T Z - z(x) + (F^T ω(x) - f(x))^T β

where Z = [z_1, z_2, …, z_k]^T are the errors at the sample points. For the prediction to be unbiased, the term multiplying β must vanish, i.e., F^T ω(x) = f(x).
The prediction variance is then:

φ(x) = E[(ŷ(x) - y(x))^2] = σ^2 (1 + ω^T R ω - 2 ω^T r)

An efficient Kriging proxy model requires the prediction variance φ(x) to be minimal, so the weight coefficients in ω(x) are obtained by solving this optimization problem, subject to the constraint F^T ω(x) = f(x). To solve this constrained optimization problem, a Lagrangian in the weight coefficients is introduced:

L(ω, λ) = σ^2 (1 + ω^T R ω - 2 ω^T r) - λ^T (F^T ω - f)

Differentiating with respect to the weight coefficients gives:

∂L/∂ω = 2σ^2 (R ω - r) - F λ

Setting this derivative to zero and combining it with the constraint F^T ω(x) = f(x) gives the system of equations:

R ω - F λ̃ = r,  F^T ω = f

where λ̃ = λ/(2σ^2). Solving this system yields:

λ̃ = (F^T R^{-1} F)^{-1} (f - F^T R^{-1} r),  ω = R^{-1} (r + F λ̃)
The correlation matrix R obtained based on the sample points and the response values is the target parameter, and the process of inverting R can be obtained by using a quantum algorithm, specifically, can be obtained by using a quantum linear solving algorithm, so that the training of the whole proxy model can be accelerated.
Substituting ω(x) into ŷ(x) = ω(x)^T Y gives the predicted value at the prediction point:

ŷ(x) = f(x)^T β̂ + r(x)^T R^{-1} (Y - F β̂)

where β̂ = (F^T R^{-1} F)^{-1} F^T R^{-1} Y is the regression (polynomial) parameter of the regression model, obtained by unbiased (generalized least-squares) estimation in the Kriging model.
In summary, once the sample points are given, β̂ and R^{-1}(Y - F β̂) are determined, so only f(x) and r(x) need to be calculated when predicting an unknown point.
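A minimal classical sketch of the Kriging fit and prediction described above is given below, assuming a Gaussian correlation function and a constant regression term; the sample data and the correlation parameter θ are made up for illustration, and the matrix inversion marked in the comments is the step the patent proposes to accelerate with a quantum linear-solving algorithm:

```python
import numpy as np

def gauss_corr(theta, X1, X2):
    """Gaussian correlation R(theta, x_i, x_j) = exp(-sum_k theta_k (x_ik - x_jk)^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=2)
    return np.exp(-d2)

def kriging_fit(X, Y, theta):
    """Fit a Kriging model with a constant regression term f(x) = 1."""
    k = X.shape[0]
    R = gauss_corr(theta, X, X) + 1e-10 * np.eye(k)   # correlation matrix of the samples
    F = np.ones((k, 1))                               # regression matrix for a constant model
    R_inv = np.linalg.inv(R)    # in the patent, this inversion is the quantum-accelerated step
    beta = np.linalg.solve(F.T @ R_inv @ F, F.T @ R_inv @ Y)  # unbiased regression estimate
    gamma = R_inv @ (Y - F @ beta)                    # fixed once the sample points are given
    return {"X": X, "theta": theta, "beta": beta, "gamma": gamma}

def kriging_predict(model, x_new):
    """Predicted value y_hat(x) = f(x)^T beta + r(x)^T R^{-1} (Y - F beta)."""
    r = gauss_corr(model["theta"], np.atleast_2d(x_new), model["X"]).ravel()
    return float(model["beta"][0] + r @ model["gamma"])

# Toy usage: made-up sample points and stand-in "calculation results".
X = np.array([[0.1], [0.4], [0.7], [0.9]])
Y = np.sin(2.0 * np.pi * X).ravel()
model = kriging_fit(X, Y, theta=np.array([10.0]))
print(kriging_predict(model, np.array([0.5])))
```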
Different proxy models may require different target parameters, the manner in which the target parameters are obtained may differ, and the quantum algorithm used to obtain them may differ accordingly; for example, if the target parameters are obtained by solving equations, a quantum equation-solving algorithm may be used, and if they are obtained by searching data, a quantum search algorithm may be used.
In some possible embodiments of the present invention, the training the proxy model using the sample points and the calculation result until a trained proxy model is obtained may include:
Inputting the sample points into a proxy model to obtain a predicted value;
obtaining model precision based on the calculation result and the predicted value;
judging whether the model precision accords with a training termination condition or not;
If not, adding a new sample point, obtaining a corresponding calculation result by using the variable component sub-line based on the new sample point, and returning to the step of inputting the sample point into the proxy model to obtain a predicted value until a trained proxy model is obtained.
Taking the Kriging model as an example, the sample points are input into the Kriging model to obtain ŷ(x), i.e., the predicted value. Model accuracy verification can be performed with the predicted values and the calculation results; the model accuracy criteria include the average error, the empirical cumulative variance, the root mean square error, the standardized residual, and the like, and the training termination condition can be that the chosen criterion lies within a preset numerical range.
The model accuracy can be verified with a point verification method or a cross-validation method. The main difference between the two is that the former requires additional sample points, whereas the latter can complete the verification using only the existing sample points.
The basic idea of the point verification method can be described briefly as follows: first, several new sample points are randomly generated, and the calculation result and the predicted value are obtained for each of them. The calculation results and predicted values of all the new sample points are then substituted into the model accuracy criterion to estimate the accuracy of the proxy model, and based on this accuracy it is judged whether the model meets the subsequent use requirements, i.e., whether the training termination condition is met.
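A sketch of such a point verification, with placeholder callables standing in for the trained proxy model and the quantum-accelerated solver, might look as follows; the metric names and the threshold mentioned in the comment are illustrative assumptions:

```python
import numpy as np

def spot_check_accuracy(surrogate_predict, high_fidelity_solve, new_points):
    """Point-verification style check on freshly generated sample points.

    `surrogate_predict` and `high_fidelity_solve` are placeholders for the trained
    proxy model and for the (quantum-accelerated) solver that produces the
    calculation results.
    """
    y_true = np.array([high_fidelity_solve(x) for x in new_points])   # calculation results
    y_pred = np.array([surrogate_predict(x) for x in new_points])     # predicted values
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    mean_err = float(np.mean(np.abs(y_true - y_pred)))
    return rmse, mean_err

# Training terminates when the chosen accuracy measure lies inside a preset range,
# e.g. rmse < 1e-3 (the threshold here is illustrative, not from the patent).
```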
The basic idea of the cross-validation method can be described briefly as follows: the sample point set S = {x, y} is first randomly divided into m subsets S_1, S_2, …, S_m containing approximately equal numbers of samples. When verifying the accuracy of the proxy model, one subset is taken as the data to be verified and the remaining m - 1 subsets are used to construct the proxy model to be tested, yielding the cross-validation error e_XV and the corresponding variance for that subset. After m rounds of verification, every subset has served as validation data, and the cross-validation errors and variances of all subsets are obtained,
where the error and variance of the i-th subset are the prediction error and variance obtained when the i-th subset is used as the test sample and the proxy model is constructed from the remaining m - 1 subsets. After the cross-validation errors and variances are obtained, the model accuracy is judged with the standardized residual.
In a special case, when m=n, the number of the sample point subsets is equal to the total number of the sample points, and the verification method at this time is called a one-by-one cross verification method.
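The leave-one-out (one-by-one cross-verification) special case can be sketched as follows; `fit` and `predict` are hypothetical placeholders for constructing the proxy model and evaluating it:

```python
import numpy as np

def loo_cross_validation(fit, predict, X, Y):
    """Leave-one-out cross-validation (the m = k special case): each sample point in
    turn is the validation point and the proxy model is rebuilt from the rest.
    `fit` and `predict` are placeholders for surrogate construction and evaluation."""
    errors = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        model = fit(X[mask], Y[mask])            # proxy model built without sample i
        errors.append(Y[i] - predict(model, X[i]))
    errors = np.asarray(errors)
    return errors, float(errors.var())           # cross-validation errors and their variance
```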
When the model accuracy does not meet the training termination condition, the model is not yet trained and more sample points must be added: the target object is sampled again, the corresponding calculation results are computed for the new sample points, predicted values are obtained for them, and finally all calculation results and all predicted values (including those corresponding to the previous sample points) are used to judge whether the model training can be terminated.
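Putting the pieces of step S205 together, an adaptive training loop might be sketched as below; all callables (`sample`, `solve_quantum`, `fit`, `accuracy`) are hypothetical placeholders for the operations described above, and the sample counts and tolerance are illustrative:

```python
import numpy as np

def train_proxy_model(sample, solve_quantum, fit, accuracy, tol, max_rounds=20):
    """Adaptive training loop: keep adding sample points (and their quantum-computed
    calculation results) until the model accuracy meets the termination condition.
    All callables are placeholders for the steps described in this section."""
    X = sample(n=10)                                   # initial sample points
    Y = np.array([solve_quantum(x) for x in X])        # calculation results from the circuit
    model = fit(X, Y)
    for _ in range(max_rounds):
        if accuracy(model, X, Y) < tol:                # training termination condition met
            break
        X_new = sample(n=5)                            # add new sample points
        Y_new = np.array([solve_quantum(x) for x in X_new])
        X, Y = np.concatenate([X, X_new]), np.concatenate([Y, Y_new])
        model = fit(X, Y)                              # retrain with the enlarged sample set
    return model
```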
Therefore, the application firstly samples the target object to obtain the sample point; then, based on the sample points, obtaining a target linear equation set for the target object; constructing a variable component sub-circuit for solving the target linear equation set, and obtaining a calculation result by using the variable component sub-circuit; and finally, training the proxy model by utilizing the sample points and the calculation result until a trained proxy model is obtained. The data required by the proxy model training is obtained through quantum computation, so that the computation speed is improved, and the model training efficiency is improved.
Referring to fig. 3, fig. 3 is a schematic flow chart of a target object optimization method according to an embodiment of the present invention, which may include the following steps:
s301: and obtaining the appearance information of the parameterized modeling target object.
The method for parametric modeling of the target object can be the polynomial and spline function method, the CAD (Computer Aided Design) method, the free-form deformation (Free Form Deformation, FFD) method, or the class function/shape function transformation (Class and Shape Transformation, CST) method. The polynomial and spline function method performs geometric shape parameterization with functions such as B-splines, Bezier curves, and NURBS (non-uniform rational B-splines); it has strong modeling capability and is one of the most widely used parametric modeling methods.
After the target object is parametrically modeled, its shape information can be obtained, and the obtained shape information can be presented in the form of data. The shape information is the same kind of information as that contained in the sample points; for example, if a sample point is a coordinate of the target object, the shape information is also coordinates of the target object. One piece of shape information may contain multiple data items, possibly of different types, such as coordinates and angles.
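As a hedged illustration of parameterized shape information, a Bezier-curve description of, for example, an airfoil upper surface can be evaluated as below; the control-point coordinates are hypothetical design parameters, not values from the patent:

```python
import numpy as np
from math import comb

def bezier_curve(control_points, num=50):
    """Evaluate a Bezier curve from its control points (Bernstein polynomial form)."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    t = np.linspace(0.0, 1.0, num)[:, None]
    basis = np.array([comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)])
    return (basis * pts[:, None, :]).sum(axis=0)

# Hypothetical control points for the upper surface of an airfoil-like section; the
# control-point coordinates play the role of the design parameters being optimized.
upper_surface = bezier_curve([[0.0, 0.0], [0.1, 0.08], [0.5, 0.12], [1.0, 0.0]])
shape_information = upper_surface            # coordinate data fed to the proxy model
```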
In some possible embodiments of the invention, the target object may be an aircraft aerodynamic profile. The aircraft may be an aircraft, spacecraft, rocket, or missile.
S302: and inputting the appearance information of the target object into the proxy model trained by the proxy model training method provided by the embodiment to obtain the predicted value.
The shape information is input into the agent model trained by the model training method provided by the application, and the predicted value can be obtained.
S303: and based on the predicted value, obtaining optimization information by using an optimization algorithm.
The predicted value is input into an optimization algorithm, so that optimization information can be obtained, wherein the optimization information can comprise the weight of target object optimization, and can also comprise the gradient and the optimization direction of optimization.
The optimization algorithm may be a simulated annealing algorithm, a tabu search algorithm, a particle swarm optimization algorithm, a machine learning algorithm, a gradient descent algorithm, or the like. In order to calculate the optimization information faster and thereby accelerate the optimization of the target object, the optimization algorithm may also be a quantum genetic algorithm. The quantum genetic algorithm is a quantum algorithm that replaces the original classical genetic algorithm with a quantized version, so that acceleration can be achieved by exploiting quantum properties.
S304: and updating the target object by using the optimization information.
After the optimization information is obtained, a number of values are calculated from it and used to adjust the target object; that is, the parameters of the target object are adjusted based on the optimization information, so that the target object is updated.
S305: and obtaining the optimal target object based on the target objects before and after updating.
In some embodiments of the present invention, whether the optimization is complete can be determined based on the target objects before and after the update. If the optimization is complete, the optimal target object is selected from all the target objects obtained so far; if not, the optimization is continued.
In some possible embodiments of the present invention, obtaining an optimal target object based on target objects before and after updating may include:
comparing the target objects before and after updating, and judging whether the optimization is terminated;
if so, determining an optimal target object based on the target objects before and after updating;
If not, determining a target object to be updated, reselecting optimization data from the optimization information, updating the target object to be updated by utilizing the optimization data, and returning to execute the step of inputting the appearance information of the target object into the agent model trained by the agent model training method provided by the application to obtain the predicted value.
Whether to terminate the optimization can be judged in either of two ways:
First: judging whether the difference between the predicted values corresponding to the target objects before and after updating is within a preset range.
Second: judging whether the updated target object is better than the target object before updating. There are various ways to do this; for example, the predicted values before and after updating can be compared to determine which is better (when the predicted value is an improvement ratio, a larger improvement ratio means a better target object). When the updated target object is judged to be better, it is further judged whether the optimization has converged, for example whether the difference between the predicted values before and after updating is within a certain range, or whether the variance corresponding to the predicted values before and after updating is within a certain range. If the optimization has converged, the optimal target object is determined and the optimization is terminated; if the updated target object is not better than the one before updating, or the optimization has not converged, the optimization continues.
Depending on the termination condition used, the optimal target object may be the target object before updating or the target object after updating; for example, under the second way of judging termination, the optimal target object is the updated target object.
When the termination condition is not met, the optimization continues. In that case, the target object to be updated is the best among the initial target object and all target objects updated so far. The optimization information contains a plurality of sets of data; a set of unused data can be selected from it as the optimization data and used to update the target object to be updated. The shape information of the updated target object is then input to the proxy model and the optimization continues, as sketched below.
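The following is a minimal sketch of the two termination checks described above, applied to the predicted values before and after an update. The tolerance, the history of recent predictions, and the direction in which the predicted value improves are illustrative assumptions rather than details fixed by this application.

```python
# Hypothetical sketch of the two termination checks; tolerances, the recent
# prediction history, and "larger is better" are illustrative assumptions.
import statistics

def should_terminate(pred_before, pred_after, preset_range=1e-3,
                     history=None, larger_is_better=True):
    # First kind: the difference of the predicted values lies within a preset range.
    if abs(pred_after - pred_before) <= preset_range:
        return True
    # Second kind: only when the update improved the target object do we go on to
    # test convergence, here via the variance of the most recent predicted values.
    improved = pred_after > pred_before if larger_is_better else pred_after < pred_before
    if improved and history and len(history) >= 3:
        if statistics.pvariance(history[-3:]) <= preset_range:
            return True
    return False
```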
According to the method provided by the embodiment of the invention, the training of the proxy model is accelerated through quantum computation, and the optimization of the target object is accelerated in turn. This can promote the development of the field of target object optimization; taking the target object as the aerodynamic shape of an aircraft as an example, the optimization of the aerodynamic shape can be accelerated, the design period of the aircraft shortened, and the development of the aerospace industry promoted.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a proxy model training device according to an embodiment of the present invention, corresponding to the flow shown in fig. 2, where the device includes:
A first obtaining module 401, configured to sample a target object to obtain a sample point;
A second obtaining module 402, configured to obtain a target linear equation set for the target object based on the sample point;
a construction module 403, configured to construct a variable component sub-line for solving the target linear equation set;
a third obtaining module 404, configured to obtain a calculation result by using the variable component sub-line;
And the training module 405 is configured to train the proxy model by using the sample points and the calculation result until a trained proxy model is obtained.
In some possible embodiments of the present invention, the second obtaining module 402 may be specifically configured to:
acquiring initial conditions, boundary conditions, partial differential equation sets to be solved and calculation domains thereof of the target object, wherein the calculation domains are determined by utilizing the sample points;
discretizing the initial condition, the boundary condition and the calculation domain to obtain a discretized algebraic equation set corresponding to the partial differential equation set to be solved;
And determining a target linear equation set according to the discretized algebraic equation set.
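As a concrete illustration only (the application does not fix a particular partial differential equation), the sketch below discretizes a one-dimensional steady diffusion problem -u''(x) = f(x) on [0, 1] with fixed boundary values by central finite differences, folding the boundary conditions into the right-hand side so that a target linear system A x = b is obtained.

```python
# Illustrative assumption: 1D steady diffusion -u''(x) = f(x) on [0, 1] with
# Dirichlet boundary values, discretized by central finite differences.
import numpy as np

def build_linear_system(n_interior, f, u_left=0.0, u_right=0.0):
    h = 1.0 / (n_interior + 1)
    x = np.linspace(h, 1.0 - h, n_interior)            # interior grid points
    A = (np.diag(2.0 * np.ones(n_interior))
         - np.diag(np.ones(n_interior - 1), 1)
         - np.diag(np.ones(n_interior - 1), -1)) / h**2
    b = f(x)
    b[0] += u_left / h**2                               # fold boundary conditions into b
    b[-1] += u_right / h**2
    return A, b

A, b = build_linear_system(8, lambda x: np.sin(np.pi * x))
```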
In some possible embodiments of the present invention, the building block 403 may be specifically configured to:
and respectively constructing a first sub-quantum circuit and a second sub-quantum circuit to form a variable component sub-circuit, wherein the first sub-quantum circuit is used for forming a sub-quantum state containing an approximate solution of the target linear equation set, and the second sub-quantum circuit is used for acquiring a value corresponding to a loss function, and the loss function is constructed based on the approximate solution.
In some possible embodiments of the present invention, the third obtaining module 404 may be specifically configured to:
operating and measuring the variable component sub-line to obtain an approximate solution of the target linear equation set;
judging whether a value corresponding to the loss function meets convergence accuracy or not based on the approximate solution;
If yes, the approximate solution is used as a calculation result; otherwise, the variation parameters contained in the variable component sub-circuit are updated, and the approximate solution of the target linear equation set corresponding to the updated variation parameters is obtained again, until an approximate solution is obtained for which the value corresponding to the loss function meets the convergence precision.
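The following is a minimal, classically simulated sketch of this loop, reading the variable component sub-circuit as a variational quantum circuit and using a VQLS-style overlap cost as the loss function. The tiny two-qubit RY/CNOT ansatz, the specific cost, and the gradient update are illustrative assumptions rather than the circuit prescribed by this application; on real hardware the loss would instead be estimated from measurements of the second sub-quantum circuit.

```python
# Classical state-vector sketch under VQLS-style assumptions: the first
# sub-circuit prepares |x(theta)>, and the loss compares the normalized
# A|x(theta)> with |b>. Ansatz, cost, and update rule are illustrative only.
import numpy as np

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def ansatz_state(theta):
    """First sub-circuit: two RY layers around a CNOT, acting on |00>."""
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(theta[2]), ry(theta[3])) @ psi
    return psi

def loss(theta, A, b):
    """Second sub-circuit evaluated classically: 0 when A|x(theta)> aligns with |b>."""
    x = ansatz_state(theta)
    Ax = A @ x
    bn = b / np.linalg.norm(b)
    return 1.0 - abs(bn @ Ax) ** 2 / (Ax @ Ax)

def solve(A, b, tol=1e-6, lr=0.2, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, 4)
    for _ in range(iters):
        if loss(theta, A, b) < tol:                 # convergence precision reached
            break
        grad = np.zeros_like(theta)                 # finite-difference gradient estimate
        for k in range(4):
            e = np.zeros_like(theta); e[k] = np.pi / 2
            grad[k] = 0.5 * (loss(theta + e, A, b) - loss(theta - e, A, b))
        theta -= lr * grad                          # update the variational parameters
    return ansatz_state(theta)                      # approximate (normalized) solution
```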
In some possible embodiments of the present invention, the target parameters required for the proxy model training are obtained using a quantum algorithm.
In some possible embodiments of the present invention, the training module 405 may be specifically configured to:
Inputting the sample points into a proxy model to obtain a predicted value;
obtaining model precision based on the calculation result and the predicted value;
judging whether the model precision accords with a training termination condition or not;
If not, adding a new sample point, obtaining a corresponding calculation result by using the variable component sub-line based on the new sample point, and returning to the step of inputting the sample point into the proxy model to obtain a predicted value until a trained proxy model is obtained.
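A minimal sketch of this training loop follows, under illustrative assumptions: the proxy model is taken to be a Gaussian-RBF interpolant (the application does not fix a particular surrogate form), `quantum_solve` stands for reducing the calculation result of the variable component sub-circuit to a scalar quantity of interest, and `sample_new` stands for whatever sampling rule supplies the added sample point.

```python
# Hypothetical sketch of the adaptive training loop: fit a surrogate on sample
# points and quantum-computed results, check its accuracy on a new point, and
# add sample points until the termination condition is met.
import numpy as np

def fit_rbf(X, y, eps=1.0):
    """Return a Gaussian-RBF interpolant through (X, y); X is an (n, d) array."""
    K = np.exp(-eps * np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** 2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
    return lambda q: np.exp(-eps * np.linalg.norm(X - q, axis=-1) ** 2) @ w

def train_surrogate(X, quantum_solve, sample_new, tol=1e-3, max_points=50):
    y = np.array([quantum_solve(x) for x in X])       # calculation results
    while len(X) < max_points:
        model = fit_rbf(X, y)                         # (re)train the proxy model
        x_new = sample_new()                          # add a new sample point
        y_new = quantum_solve(x_new)
        if abs(model(x_new) - y_new) < tol:           # model accuracy meets the condition
            return model
        X = np.vstack([X, x_new]); y = np.append(y, y_new)
    return fit_rbf(X, y)
```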
Therefore, the application firstly samples the target object to obtain the sample point; then, based on the sample points, obtaining a target linear equation set for the target object; constructing a variable component sub-circuit for solving the target linear equation set, and obtaining a calculation result by using the variable component sub-circuit; and finally, training the proxy model by utilizing the sample points and the calculation result until a trained proxy model is obtained. The data required by the proxy model training is obtained through quantum computation, so that the computation speed is improved, and the model training efficiency is improved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a target object optimizing apparatus according to an embodiment of the present invention, corresponding to the flow shown in fig. 3, where the apparatus includes:
A fourth obtaining module 501, configured to obtain shape information of a parameterized modeled target object;
A fifth obtaining module 502, configured to input the shape information of the target object into a proxy model trained by using the proxy model training method provided by the present application, to obtain a predicted value;
a sixth obtaining module 503, configured to obtain optimization information by using an optimization algorithm based on the predicted value;
An updating module 504, configured to update the target object with the optimization information;
A seventh obtaining module 505 is configured to obtain an optimal target object based on the target objects before and after updating.
In some possible embodiments of the invention, the target object may be an aerodynamic profile of an aircraft.
In some possible embodiments of the invention, the optimization algorithm may be a quantum genetic algorithm.
In some possible embodiments of the present invention, the seventh obtaining module 505 may be specifically configured to:
comparing the target objects before and after updating, and judging whether the optimization is terminated;
if so, determining an optimal target object based on the target objects before and after updating;
If not, determining a target object to be updated, reselecting optimization data from the optimization information, updating the target object to be updated with the optimization data, and returning to executing the fifth obtaining module 502.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the proxy model training method or the target object optimizing method provided by the above embodiments. The computer-readable storage medium may be a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
Specifically, in an embodiment of the present invention, the above-described computer storage medium may be configured to store a computer program for implementing the steps of:
S201: sampling a target object to obtain sample points;
s202: obtaining a target linear equation set for the target object based on the sample points;
s203: constructing a variable component sub-circuit for solving the target linear equation set;
S204: obtaining a calculation result by utilizing the variable component sub-circuit;
S205: and training the proxy model by using the sample points and the calculation result until a trained proxy model is obtained.
Specifically, in an embodiment of the present invention, the above-described computer storage medium may be further configured to store a computer program for implementing the steps of:
S301: obtaining the shape information of the parameterized modeling target object;
s302: inputting the appearance information of the target object into a proxy model trained by the proxy model training method provided by the embodiment to obtain a predicted value;
S303: based on the predicted value, optimizing information is obtained by utilizing an optimizing algorithm;
S304: updating the target object by using the optimization information;
s305: and obtaining the optimal target object based on the target objects before and after updating.
The embodiment of the invention also provides an electronic device, which may include a processor and a memory. The processor and the memory may communicate via a system bus, and the memory stores a computer program. When the electronic device serves as the proxy model training device, the processor reads and executes the computer program corresponding to the above embodiment from the memory to implement the proxy model training method provided by the embodiment of the present invention; when the electronic device serves as the target object optimizing device, the processor reads and executes the corresponding computer program from the memory to implement the target object optimizing method provided by the embodiment of the present invention.
Specifically, in this embodiment, the above-mentioned processor may be configured to implement the following steps by a computer program:
S201: sampling a target object to obtain sample points;
s202: obtaining a target linear equation set for the target object based on the sample points;
s203: constructing a variable component sub-circuit for solving the target linear equation set;
S204: obtaining a calculation result by utilizing the variable component sub-circuit;
S205: and training the proxy model by using the sample points and the calculation result until a trained proxy model is obtained.
Specifically, in this embodiment, the above-mentioned processor may be further configured to implement the following steps by a computer program:
S301: obtaining the shape information of the parameterized modeling target object;
s302: inputting the appearance information of the target object into a proxy model trained by the proxy model training method provided by the embodiment to obtain a predicted value;
S303: based on the predicted value, optimizing information is obtained by utilizing an optimizing algorithm;
S304: updating the target object by using the optimization information;
S305: and obtaining the optimal target object based on the target objects before and after updating.

The memory may be an information recording device based on any electronic, magnetic, optical, or other physical principle for recording execution instructions, data, and the like. In some embodiments, the memory may be, but is not limited to, a volatile memory, a non-volatile memory, a storage drive, and the like.
The processor may be an integrated circuit chip having signal processing capabilities and may include one or more processing cores (e.g., a single-core processor or a multi-core processor). By way of example only, the processor may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (14)

1. A method of proxy model training, the method comprising:
sampling a target object to obtain sample points;
obtaining a target linear equation set for the target object based on the sample points;
Constructing a variable component sub-circuit for solving the target linear equation set;
obtaining a calculation result by utilizing the variable component sub-circuit;
and training the proxy model by using the sample points and the calculation result until a trained proxy model is obtained.
2. The method of claim 1, wherein the obtaining a set of target linear equations for the target object based on the sample points comprises:
acquiring initial conditions, boundary conditions, partial differential equation sets to be solved and calculation domains thereof of the target object, wherein the calculation domains are determined by utilizing the sample points;
discretizing the initial condition, the boundary condition and the calculation domain to obtain a discretized algebraic equation set corresponding to the partial differential equation set to be solved;
And determining a target linear equation set according to the discretized algebraic equation set.
3. The method of claim 2, wherein said constructing a variable component sub-circuit that solves the set of target linear equations comprises:
and respectively constructing a first sub-quantum circuit and a second sub-quantum circuit to form a variable component sub-circuit, wherein the first sub-quantum circuit is used for forming a sub-quantum state containing an approximate solution of the target linear equation set, and the second sub-quantum circuit is used for acquiring a value corresponding to a loss function, and the loss function is constructed based on the approximate solution.
4. A method according to claim 3, wherein said obtaining a calculation result using said variable component sub-line comprises:
Operating and measuring the variable component sub-line to obtain an approximate solution of the target linear equation set;
judging whether a value corresponding to the loss function meets convergence accuracy or not based on the approximate solution;
If yes, the approximate solution is used as a calculation result, otherwise, the variation parameters contained in the variation sub-circuit are updated, and the approximate solution of the target linear equation corresponding to the updated variation parameters is obtained again until the approximate solution meeting the condition that the value corresponding to the loss function accords with the convergence precision is obtained.
5. The method of claim 1, wherein the target parameters required for the proxy model training are obtained using a quantum algorithm.
6. The method of any of claims 1-5, wherein training the proxy model using the sample points and the calculation results until a trained proxy model is obtained comprises:
Inputting the sample points into a proxy model to obtain a predicted value;
obtaining model precision based on the calculation result and the predicted value;
judging whether the model precision accords with a training termination condition or not;
If not, adding a new sample point, obtaining a corresponding calculation result by using the variable component sub-line based on the new sample point, and returning to the step of inputting the sample point into the proxy model to obtain a predicted value until a trained proxy model is obtained.
7. A method of optimizing a target object, the method comprising:
Obtaining the shape information of the parameterized modeling target object;
inputting the shape information of the target object into a proxy model trained by the proxy model training method provided by any one of claims 1-6 to obtain a predicted value;
Based on the predicted value, optimizing information is obtained by utilizing an optimizing algorithm;
updating the target object by using the optimization information;
and obtaining the optimal target object based on the target objects before and after updating.
8. The method of claim 7, wherein the target object is an aircraft aerodynamic profile.
9. The method according to claim 7 or 8, wherein the optimization algorithm is a quantum genetic algorithm.
10. The method according to claim 9, wherein the obtaining the optimal target object based on the target objects before and after updating includes:
comparing the target objects before and after updating, and judging whether the optimization is terminated;
if so, determining an optimal target object based on the target objects before and after updating;
If not, determining a target object to be updated, reselecting optimization data from the optimization information, updating the target object to be updated by utilizing the optimization data, and returning to execute the step of inputting the appearance information of the target object into a proxy model trained by the proxy model training method provided by any one of claims 1-6 to obtain a predicted value.
11. A proxy model training apparatus, the apparatus comprising:
The first acquisition module is used for sampling the target object to acquire sample points;
A second obtaining module, configured to obtain a target linear equation set for the target object based on the sample point;
The construction module is used for constructing a variable component sub-line for solving the target linear equation set;
the third obtaining module is used for obtaining a calculation result by utilizing the variable component sub-circuit;
And the training module is used for training the proxy model by utilizing the sample points and the calculation result until a trained proxy model is obtained.
12. A target object optimization device, the device comprising:
the fourth obtaining module is used for obtaining the appearance information of the parameterized modeling target object;
a fifth obtaining module, configured to input the shape information of the target object into a proxy model trained by the proxy model training method provided in any one of claims 1 to 6, to obtain a predicted value;
a sixth obtaining module, configured to obtain optimization information by using an optimization algorithm based on the predicted value;
The updating module is used for updating the target object by utilizing the optimization information;
and a seventh obtaining module, configured to obtain an optimal target object based on the target objects before and after updating.
13. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the proxy model training method of any one of claims 1-6 or the target object optimization method of any one of claims 7-10.
14. An electronic device comprising a processor and a memory storing a computer program which, when executed by the processor, implements the proxy model training method of any one of claims 1-6 or the target object optimization method of any one of claims 7-10.
CN202211353203.XA 2022-10-31 2022-10-31 Agent model training method, target object optimizing method and related devices Pending CN117993150A (en)


