CN117634278A - Design optimization method and design optimization device - Google Patents


Info

Publication number
CN117634278A
Authority
CN
China
Prior art keywords: design, neural network, training, simulation, parameters
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202210952590.2A
Other languages: Chinese (zh)
Inventors: 周浩, 李洋, 涂威威
Current Assignee: 4Paradigm Beijing Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original Assignee: 4Paradigm Beijing Technology Co Ltd
Application filed by 4Paradigm Beijing Technology Co Ltd filed Critical 4Paradigm Beijing Technology Co Ltd

Landscapes: Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a design optimization method and a design optimization device. The design optimization method is implemented based on a simulation model comprising a simulation neural network and adjustable design parameters; the input parameters of the simulation neural network comprise space-time sampling points and the adjustable design parameters, and the output parameters comprise simulation values of a target physical quantity at the input space-time sampling points. The design optimization method comprises the following steps: acquiring a plurality of groups of sampled design parameters and corresponding space-time sampling points; pre-training the simulation neural network based on the plurality of groups of sampled design parameters and corresponding space-time sampling points to obtain a pre-trained simulation neural network; acquiring initial design parameters; and, taking the initial design parameters as initial values of the adjustable design parameters, performing multiple rounds of iterative training on the adjustable design parameters and the pre-trained simulation neural network, updating both based on a loss function during each round of iterative training until the loss function meets a set condition, thereby obtaining optimal design parameters and an optimal simulation neural network.

Description

Design optimization method and design optimization device
Technical Field
The present disclosure relates generally to the field of simulation technology, and more particularly, to a design optimization method and a design optimization apparatus.
Background
Almost no design and fabrication process, from chip manufacturing to the space shuttle, can do without simulation. In general, engineers minimize manufacturing cost by continuously optimizing and improving the geometry, materials, and other aspects of the design object so that the simulated results meet the design criteria. The conventional approach optimizes the geometry and materials of a design by trial and error and relies heavily on the experience of engineers. In addition, the physicochemical process simulation of a given geometry and material is usually very time-consuming: in the traditional finite element method, engineers must continuously refine the mesh of the computational region to ensure convergence of the result, and for the physical-field design of a complex structure this scheme may require a cycle on the order of weeks or even months to obtain a design result that meets the requirements. In summary, the conventional method relies heavily on the experience of engineers and is very time-consuming.
In the related art, a scheme has been proposed that performs simulation by training a neural network with a deep learning method based on the PINN (Physics-Informed Neural Network), a neural network with embedded physical knowledge. Briefly, the principle of the PINN is to input the coordinates of some space-time sampling points into a neural network to be trained and output simulation values, then compute a loss function by combining the initial conditions and boundary conditions, and minimize the loss function by training the neural network so as to approximate the solution of the PDE (Partial Differential Equation). The loss function terms comprise residual terms for the initial and boundary conditions, and PDE residuals at selected points in the region. After training, inference consists of inputting the coordinates of space-time points into the trained neural network to obtain the simulation values at the corresponding space-time points. However, the existing PINN technology, like the traditional method, still simulates only one given design: a trial-and-error procedure is needed in which a design is first simulated, the design is then adjusted according to the simulation result, and the adjusted design is simulated again, so that the simulation is repeated many times and the design optimization period remains long.
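The PINN loss described above can be sketched with a toy example. The following illustration is not from the patent: a tiny quadratic model stands in for the neural network, the PDE is the 1D steady heat equation u″(x) = 0 on [0, 1], and the loss combines the PDE residual at interior collocation points with the boundary-condition residual.

```python
# Toy PINN-style loss (illustrative, not from the patent): 1D steady heat
# equation u''(x) = g(x) = 0 on [0, 1] with boundary conditions u(0) = 0 and
# u(1) = 1 (exact solution u(x) = x). A tiny quadratic model
# u(x) = w0 + w1*x + w2*x^2 stands in for the neural network phi(...; W, b).

def u(x, w):
    return w[0] + w[1] * x + w[2] * x ** 2

def u_xx(x, w):
    # second derivative of the quadratic model; a real PINN uses autodiff
    return 2.0 * w[2]

def pinn_loss(w, n_interior=9):
    # PDE residual at interior collocation points: Loss_PDE = ||u'' - g||
    xs = [(i + 1) / (n_interior + 1) for i in range(n_interior)]
    loss_pde = sum((u_xx(x, w) - 0.0) ** 2 for x in xs) / n_interior
    # boundary-condition residual: Loss_BC = ||u - v_b|| at boundary points
    loss_bc = (u(0.0, w) - 0.0) ** 2 + (u(1.0, w) - 1.0) ** 2
    return loss_pde + loss_bc

print(pinn_loss([0.0, 1.0, 0.0]))      # -> 0.0 (exact solution u(x) = x)
print(pinn_loss([0.0, 0.0, 0.0]) > 0)  # -> True (imperfect model penalized)
```

Training the model would minimize this loss over w; a practical PINN replaces the hand-written second derivative with automatic differentiation of a deep network.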
Disclosure of Invention
The present disclosure provides a design optimization method and a design optimization apparatus for solving at least the above-mentioned problems.
According to an aspect of the present disclosure, there is provided a design optimization method implemented based on a simulation model including a simulation neural network and an adjustable design parameter, input parameters of the simulation neural network including a space-time sampling point and the adjustable design parameter, output parameters of the simulation neural network including a simulation value of a target physical quantity at the input space-time sampling point, the design optimization method including: acquiring a plurality of groups of sampling design parameters and corresponding space-time sampling points; based on the multiple groups of sampling design parameters and corresponding space-time sampling points, pre-training the simulated neural network to obtain a pre-trained simulated neural network; acquiring initial design parameters; and taking the initial design parameters as initial values of the adjustable design parameters, performing multiple iterative training on the adjustable design parameters and the pre-training simulation neural network, and updating the adjustable design parameters and the pre-training simulation neural network based on a loss function in the process of each iterative training until the loss function meets a set condition, so as to obtain an optimal design parameter and an optimal simulation neural network.
Optionally, the loss function includes a boundary condition loss, a partial differential equation loss, a manufacturing cost loss, a design objective loss, the manufacturing cost loss being derived from the tunable design parameter, the design objective loss being derived from the output value of the pre-trained simulated neural network and the tunable design parameter.
Optionally, the step of taking the initial design parameters as initial values of the adjustable design parameters, performing multiple rounds of iterative training on the adjustable design parameters and the pre-trained simulation neural network, and updating the adjustable design parameters and the pre-trained simulation neural network based on a loss function during each round of iterative training until the loss function meets a set condition, thereby obtaining the optimal design parameters and the optimal simulation neural network, includes: taking the initial design parameters as initial values of the adjustable design parameters, and performing multiple rounds of alternating iterative training on the adjustable design parameters and the pre-trained simulation neural network to obtain the optimal design parameters and the optimal simulation neural network; wherein each round of alternating iterative training comprises: iteratively training the adjustable design parameters based on the manufacturing cost loss and the design target loss to update the adjustable design parameters; and iteratively training the pre-trained simulation neural network based on the updated adjustable design parameters, the boundary condition loss, and the partial differential equation loss to update the pre-trained simulation neural network; wherein the multiple rounds of alternating iterative training end when the manufacturing cost loss and the design target loss are both less than their respective thresholds.
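The alternating scheme can be sketched on a toy problem. In this hypothetical illustration (all names and numbers are stand-ins, not from the patent), a one-weight linear model u_w(p) = w·p plays the role of the simulated neural network, a simple function u_true(p) = 2p plays the role of the PDE and boundary conditions, and the two alternating steps update the design parameter p and the model weight w in turn:

```python
# Toy alternating optimization (illustrative stand-ins, not the patent's model):
# "physics" u_true(p) = 2*p stands in for the PDE + boundary conditions,
# the one-weight model u_w(p) = w*p stands in for the simulated network,
# and the design losses ask for an output near the target 3 at low cost.

TARGET, COST_WEIGHT = 3.0, 0.01

def design_loss(p, w):
    # manufacturing-cost loss + design-target loss (uses the model's output)
    return COST_WEIGHT * p ** 2 + (w * p - TARGET) ** 2

def physics_loss(p, w):
    # stands in for the PDE/boundary-condition losses at the current design
    return (w * p - 2.0 * p) ** 2

p, w, lr = 1.0, 1.0, 0.05
for _ in range(500):
    # step 1: update the adjustable design parameter (model frozen)
    grad_p = 2 * COST_WEIGHT * p + 2 * (w * p - TARGET) * w
    p -= lr * grad_p
    # step 2: update the model so it again matches the physics at the new p
    grad_w = 2 * (w * p - 2.0 * p) * p
    w -= lr * grad_w

print(round(w, 2))                # -> 2.0 (model recovers the physics)
print(abs(w * p - TARGET) < 0.1)  # -> True (output near the design target)
```

The alternation matters: each design update is taken against a model that has just been re-fitted to the physics at the current design, mirroring the two sub-steps of each round described above.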
Optionally, before the iterative training of the adjustable design parameters and the pre-trained simulation neural network, each round of alternating iterative training further includes: acquiring the latest first epoch number and the latest second epoch number, wherein the first epoch number is used for iteratively training the adjustable design parameters, and the second epoch number is used for iteratively training the pre-trained simulation neural network.
According to another aspect of the present disclosure, there is provided a design optimization method implemented based on a simulation model including a simulation neural network and an adjustable design parameter, input parameters of the simulation neural network including a space-time sampling point and the adjustable design parameter, output parameters of the simulation neural network including a simulation value of a target physical quantity at the input space-time sampling point, the design optimization method including: acquiring a plurality of groups of sampling design parameters and corresponding space-time sampling points; based on the multiple groups of sampling design parameters and corresponding space-time sampling points, pre-training the simulated neural network to obtain a pre-trained simulated neural network; acquiring initial design parameters; determining a plurality of sets of candidate design parameters based on the initial design parameters; training the pre-training simulation neural network aiming at each group of the candidate design parameters to obtain a trained simulation neural network corresponding to the candidate design parameters, and determining manufacturing cost loss and design target loss according to the candidate design parameters and the trained simulation neural network, wherein the manufacturing cost loss is obtained by the candidate design parameters, and the design target loss is obtained by output values of the trained simulation neural network and the candidate design parameters; and selecting one group from the plurality of groups of candidate design parameters as an optimal design parameter according to the manufacturing cost loss and the design target loss corresponding to each group of candidate design parameters, and taking the trained simulation neural network corresponding to the optimal design parameter as an optimal simulation neural network.
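The candidate-based scheme above can be sketched similarly. In this hypothetical toy (stand-ins, not from the patent), a one-weight model u_w(p) = w·p stands in for the simulated neural network and u_true(p) = 2p stands in for the physics; each candidate design parameter gets a short fine-tuning run starting from a shared pre-trained weight, and the candidate with the lowest combined cost-plus-target loss is selected:

```python
# Toy version of the candidate-selection scheme: fine-tune a pre-trained
# one-weight model u_w(p) = w*p for each candidate design, then pick the
# candidate with the lowest manufacturing-cost + design-target loss.

TARGET, COST_WEIGHT = 3.0, 0.01
PRETRAINED_W = 1.8  # hypothetical weight from pre-training on sampled designs

def finetune(p, w, lr=0.05, steps=200):
    # stands in for training the network under PDE/BC losses at design p;
    # the toy "physics" is u_true(p) = 2*p
    for _ in range(steps):
        w -= lr * 2 * (w * p - 2.0 * p) * p
    return w

def combined_loss(p, w):
    return COST_WEIGHT * p ** 2 + (w * p - TARGET) ** 2

candidates = [0.5, 1.0, 1.5, 2.0, 2.5]
scored = []
for p in candidates:
    w_p = finetune(p, PRETRAINED_W)   # transfer from the pre-trained weight
    scored.append((combined_loss(p, w_p), p, w_p))

best_loss, best_p, best_w = min(scored)
print(best_p)  # -> 1.5 (2 * 1.5 = 3.0 hits the design target)
```

Starting each fine-tuning run from the shared pre-trained weight is the transfer-learning step: it keeps the per-candidate training short compared with training each candidate's model from scratch.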
Optionally, the acquiring of the initial design parameters includes: solving an approximate optimal solution of the adjustable design parameters based on the pre-trained simulation neural network to serve as the initial design parameters.
Optionally, the design parameters include structural parameters and material coefficients, the simulation model further includes a partial differential equation module, where the adjustable design parameters include adjustable structural parameters and adjustable material coefficients, the input parameters of the partial differential equation module include output values of the simulated neural network and the adjustable material coefficients, and where the adjustable design parameters are adjustable structural parameters, the input parameters of the partial differential equation module include output values of the simulated neural network and fixed material coefficients.
According to another aspect of the present disclosure, there is provided a design optimization apparatus implemented based on a simulation model including a simulation neural network and an adjustable design parameter, input parameters of the simulation neural network including a space-time sampling point and the adjustable design parameter, output parameters of the simulation neural network including a simulation value of a target physical quantity at the input space-time sampling point, the design optimization apparatus comprising: an acquisition unit configured to acquire a plurality of sets of sampling design parameters and corresponding spatio-temporal sampling points; the pre-training unit is configured to pre-train the simulated neural network based on the plurality of groups of sampling design parameters and corresponding space-time sampling points to obtain a pre-trained simulated neural network; the acquisition unit is further configured to acquire initial design parameters; the first training unit is configured to take the initial design parameters as initial values of the adjustable design parameters, perform multiple iterative training on the adjustable design parameters and the pre-training simulation neural network, update the adjustable design parameters and the pre-training simulation neural network based on a loss function in the process of each iterative training until the loss function meets a set condition, and obtain an optimal design parameter and an optimal simulation neural network.
Optionally, the loss function includes a boundary condition loss, a partial differential equation loss, a manufacturing cost loss, a design objective loss, the manufacturing cost loss being derived from the tunable design parameter, the design objective loss being derived from the output value of the pre-trained simulated neural network and the tunable design parameter.
Optionally, the first training unit is further configured to take the initial design parameters as initial values of the adjustable design parameters, and perform multiple rounds of alternating iterative training on the adjustable design parameters and the pre-trained simulation neural network to obtain the optimal design parameters and the optimal simulation neural network; wherein each round of alternating iterative training comprises: iteratively training the adjustable design parameters based on the manufacturing cost loss and the design target loss to update the adjustable design parameters; and iteratively training the pre-trained simulation neural network based on the updated adjustable design parameters, the boundary condition loss, and the partial differential equation loss to update the pre-trained simulation neural network; wherein the multiple rounds of alternating iterative training end when the manufacturing cost loss and the design target loss are both less than their respective thresholds.
Optionally, before the iterative training of the adjustable design parameters and the pre-trained simulation neural network, each round of alternating iterative training further includes: acquiring the latest first epoch number and the latest second epoch number, wherein the first epoch number is used for iteratively training the adjustable design parameters, and the second epoch number is used for iteratively training the pre-trained simulation neural network.
According to another aspect of the present disclosure, there is provided a design optimization apparatus implemented based on a simulation model including a simulation neural network and an adjustable design parameter, input parameters of the simulation neural network including a space-time sampling point and the adjustable design parameter, output parameters of the simulation neural network including a simulation value of a target physical quantity at the input space-time sampling point, the design optimization apparatus comprising: an acquisition unit configured to acquire a plurality of sets of sampling design parameters and corresponding spatio-temporal sampling points; the pre-training unit is configured to pre-train the simulated neural network based on the plurality of groups of sampling design parameters and corresponding space-time sampling points to obtain a pre-trained simulated neural network; the acquisition unit is further configured to acquire initial design parameters; a determining unit configured to determine a plurality of sets of candidate design parameters based on the initial design parameters; the second training unit is configured to train the pre-trained simulation neural network aiming at each group of the candidate design parameters to obtain a trained simulation neural network corresponding to the candidate design parameters, and determine manufacturing cost loss and design target loss according to the candidate design parameters and the trained simulation neural network, wherein the manufacturing cost loss is obtained by the candidate design parameters, and the design target loss is obtained by output values of the trained simulation neural network and the candidate design parameters; and a selection unit configured to select one set from the plurality of sets of candidate design parameters as an optimal design parameter according to the manufacturing cost loss and the design target loss corresponding to each set of candidate design parameters, and to use the trained simulated neural network corresponding to the optimal design parameter as an optimal simulated neural network.
Optionally, the obtaining unit is further configured to solve a near-optimal solution of the tunable design parameter as the initial design parameter based on the pre-trained simulated neural network.
Optionally, the design parameters include structural parameters and material coefficients, the simulation model further includes a partial differential equation module, where the adjustable design parameters include adjustable structural parameters and adjustable material coefficients, the input parameters of the partial differential equation module include output values of the simulated neural network and the adjustable material coefficients, and where the adjustable design parameters are adjustable structural parameters, the input parameters of the partial differential equation module include output values of the simulated neural network and fixed material coefficients.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform a design optimization method as described above.
According to another aspect of the present disclosure, there is provided a system comprising at least one computing device and at least one storage device storing instructions, wherein the instructions, when executed by the at least one computing device, cause the at least one computing device to perform the design optimization method as described above.
According to the design optimization method of the exemplary embodiments of the disclosure, by adding non-fixed adjustable design parameters on top of the simulation neural network to form the simulation model, and inputting the adjustable design parameters into the simulation neural network, the design parameters can be treated as adjustable variables and included in the optimization scope of deep learning. Compared with the trial-and-error approach based on fixed design parameters, this improves the computational efficiency of design optimization and greatly shortens the design optimization period.
The exemplary embodiments of the disclosure also provide a training method that pre-trains and then formally trains the simulation neural network. Compared with the existing trial-and-error optimization approach, the number of groups of sampled design parameters used in pre-training is relatively small; that is, pre-training of the simulation neural network can be achieved by randomly sampling a small number of design parameter groups. The resulting pre-trained simulation neural network can learn the implicit functional relation of the output simulation values with respect to the structural parameters and material coefficients, which helps accelerate the optimization in the subsequent formal training, shortens the overall optimization process, and further shortens the design optimization period.
In addition, the exemplary embodiments of the present disclosure provide two formal training methods. One updates the adjustable design parameters and the pre-trained simulation neural network based on the loss function, automatically computing more accurate optimal design parameters and the corresponding optimal simulation neural network; the other performs optimization and simulation calculation based on transfer learning, which can ensure the accuracy of the calculation result as far as possible while greatly reducing the amount of calculation.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the general inventive concept.
Drawings
These and/or other aspects and advantages of the present disclosure will become apparent from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram illustrating a typical heat conduction problem;
FIG. 2 is a flow diagram illustrating PINN technology;
FIG. 3 is a schematic diagram illustrating the structure of a simulation model according to an exemplary embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating a design optimization method in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 is a flow diagram illustrating a design optimization method according to an exemplary embodiment of the present disclosure;
FIG. 6 is a flow chart illustrating a design optimization method in accordance with another exemplary embodiment of the present disclosure;
FIG. 7 is a flow diagram illustrating a design optimization method according to another exemplary embodiment of the present disclosure;
FIG. 8 is a block diagram illustrating a design optimization device in accordance with an exemplary embodiment of the present disclosure;
fig. 9 is a block diagram illustrating a design optimization device according to another exemplary embodiment of the present disclosure.
Detailed Description
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of embodiments of the invention defined by the claims and their equivalents. Various specific details are included to aid understanding, but are merely to be considered exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
It should be noted that, in this disclosure, "at least one of the items" covers three parallel cases: "any one of the items", "a combination of any of the items", and "all of the items". For example, "including at least one of A and B" covers the following three cases: (1) including A; (2) including B; (3) including A and B. Likewise, "at least one of step one and step two is executed" covers the following three cases: (1) executing step one; (2) executing step two; (3) executing step one and step two.
The PINN can simulate the relevant physical and chemical processes under given conditions through known basic physical and chemical laws to obtain a simulated physical field. For example, knowing the heat transfer equation (a partial differential equation, PDE) in a certain space, the temperature outside its spatial boundary, and the power of the heat source (the boundary conditions), the temperature distribution of the whole space, i.e., the temperature field of the whole space, can be solved as the simulated physical field. A typical heat conduction problem is shown in fig. 1.
The usual network structure and solution process of the PINN is shown in fig. 2.
First, a neural network u(…) = φ(…; W, b) is constructed, where "…" represents the sampling points; spatial sampling points or space-time sampling points can be chosen according to actual needs. The former can be represented by coordinates (x, y) or (x, y, z), extracted from the space to be simulated; the latter can be represented by coordinates (x, y, t) or (x, y, z, t), adding a time dimension to the spatial sampling points. Specifically, taking the design of a chip heat-dissipation structure as an example, the temperature of each part of the chip during heat dissipation needs to be simulated, so spatial sampling points can be extracted on the geometry of the chip. If a transient result is required, for example the temperature change of each part of the chip over a certain period of the heat-dissipation process, a time dimension t must be added to reflect the state change of the spatial sampling points over time; spatial sampling points with time are space-time sampling points. If only a static result is required, for example the steady-state temperature distribution after the chip has dissipated heat for some time, the time dimension t is not needed and spatial sampling points can be used. Here u(…) is the simulation value output by the neural network, corresponding to the temperature T at the sampling point in the chip heat-dissipation problem.
Then the computational region Ω and its boundary ∂Ω are sampled, i.e., the sampling points to be simulated are determined and input into the neural network for calculation to obtain the simulation value u.
The value of the loss function is then determined based on the calculated simulation value u. Specifically, for sampling points within the computational region, the partial differential equation loss Loss_PDE = ‖L(u) − g‖ is calculated, and for boundary points the boundary condition loss Loss_BC = ‖u − v_b‖ is calculated, where v_b is the value of the corresponding point in the boundary condition; the total loss is Loss = Loss_PDE + Loss_BC. Still taking the heat conduction problem as an example, where k is the heat transfer coefficient and g = 0, the PDE residual is that of the heat equation, L(u) = ∂T/∂t − k∇²T. The boundary conditions include boundary conditions related to the heat source and initial temperature conditions; the initial temperature conditions describe the temperature distribution at the initial moment of heat conduction and can be understood as four-dimensional boundary conditions, while the boundary conditions related to the heat source can be understood as three-dimensional boundary conditions. The temperature value of the boundary condition is taken as v_b and, together with the temperature simulation value u at the corresponding boundary sampling point, substituted into the formula to obtain the corresponding Loss_BC. The boundary conditions exemplified here are Dirichlet boundary conditions; boundary conditions such as Neumann and Robin conditions can also be added to the training loss function of the neural network in a similar form. The Dirichlet boundary condition gives the value of the unknown function on the boundary; the Neumann boundary condition gives the derivative of the unknown function in the direction of the outward normal of the boundary; the Robin boundary condition gives a linear combination of the value of the unknown function on the boundary and its outward normal derivative. Furthermore, the initial condition T(…, 0) is added to the loss function in the same form as Loss_BC.
Finally, the weights W and bias terms b of the neural network φ(…; W, b) can be iteratively trained through gradient descent until the total loss Loss is smaller than a given threshold δ, at which point the calculation result can be considered converged and training is complete, yielding the trained neural network u(…) as the simulation result. Any spatial point or space-time point may be input into the trained neural network u(…) to obtain the simulation value u at the corresponding point. Selecting a plurality of spatial points in a certain space, the simulation values u at these points together form a static physical field of the space. Similarly, selecting a plurality of spatial points in a certain space and further adding the time dimension by selecting a plurality of time points in a certain period gives a plurality of space-time points, and the simulation values u at these space-time points together form the transient physical field of the space in the corresponding period.
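The training loop just described, gradient descent on the parameters until the total loss falls below the threshold δ, can be sketched as follows. This is an illustrative toy, not the patent's implementation: a linear model is fitted to u″ = 0 with u(0) = 0 and u(1) = 1, and finite-difference gradients stand in for backpropagation.

```python
def total_loss(params):
    # toy total loss in the form Loss = Loss_PDE + Loss_BC:
    # a linear model u(x) = w0 + w1*x has u'' = 0, so Loss_PDE vanishes
    # and only the boundary-condition loss (u(0)=0, u(1)=1) remains
    w0, w1 = params
    return (w0 - 0.0) ** 2 + ((w0 + w1) - 1.0) ** 2

def grad(f, params, eps=1e-6):
    # finite-difference gradient, standing in for backpropagation
    base = f(params)
    g = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        g.append((f(bumped) - base) / eps)
    return g

params, lr, delta, steps = [0.5, 0.0], 0.1, 1e-6, 0
while total_loss(params) >= delta:  # iterate until Loss < delta
    g = grad(total_loss, params)
    params = [p - lr * gi for p, gi in zip(params, g)]
    steps += 1

print(total_loss(params) < delta)  # -> True (converged below the threshold)
```

The stopping rule is exactly the one in the text: training ends once the total loss drops below the given threshold δ.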
However, the sampling points used in this solution are obtained based on a fixed design, so the simulation calculation can only be performed on that fixed design, and only the longer-period trial-and-error optimization approach can be adopted.
According to the design optimization method of the exemplary embodiments of the disclosure, by adding non-fixed adjustable design parameters on top of the simulation neural network to form the simulation model, and inputting the adjustable design parameters into the simulation neural network, the design parameters can be treated as adjustable variables and included in the optimization scope of deep learning. Compared with the trial-and-error approach based on fixed design parameters, this improves the computational efficiency of design optimization and greatly shortens the design optimization period.
The exemplary embodiments of the disclosure also provide a training method that pre-trains and then formally trains the simulation neural network. Compared with the existing trial-and-error optimization approach, the number of groups of sampled design parameters used in pre-training is relatively small; that is, pre-training of the simulation neural network can be achieved by randomly sampling a small number of design parameter groups. The resulting pre-trained simulation neural network can learn the implicit functional relation of the output simulation values with respect to the structural parameters and material coefficients, which helps accelerate the optimization in the subsequent formal training, shortens the overall optimization process, and further shortens the design optimization period.
In addition, the exemplary embodiments of the present disclosure provide two formal training methods. One updates the adjustable design parameters and the pre-trained simulation neural network based on the loss function, automatically computing more accurate optimal design parameters and the corresponding optimal simulation neural network; the other performs optimization and simulation calculation based on transfer learning, which can ensure the accuracy of the calculation result as far as possible while greatly reducing the amount of calculation.
A design optimization method and a design optimization apparatus according to exemplary embodiments of the present disclosure are described in detail below with reference to fig. 3 to 9.
The design optimization method according to the exemplary embodiments of the present disclosure is implemented based on a simulation model. Fig. 3 is a schematic diagram illustrating the structure of a simulation model according to an exemplary embodiment of the present disclosure. Referring to fig. 3, the simulation model includes a simulated neural network and adjustable design parameters, and also includes a partial differential equation module similar to that of the related art.
Regarding the adjustable design parameters, it is first noted that the design parameters include structural parameters and material coefficients. The structural parameters, used to describe the geometry of the design, often include a number of parameter values, such as the length, width, and height of a chip. The material coefficients reflect the properties of the material in a particular problem; for example, in the heat conduction problem shown in fig. 1, the material coefficient may be the heat-transfer coefficient k. Depending on the kinds of materials used, the material coefficients may comprise one or more material coefficient values. The adjustable design parameters are the parts of the design parameters that can be adjusted. Specifically, where the geometry and the materials need to be optimized simultaneously, the adjustable design parameters include adjustable structural parameters and adjustable material coefficients. Where only the geometry needs to be optimized, the adjustable design parameters include only adjustable structural parameters; in this case the material coefficients are known fixed values because the materials are determined, and need not be calculated. Where only the materials need to be optimized, the geometry is determined, so the physical field can be obtained by theoretical calculation, i.e., the physical field need not be calculated with a neural network; this case is therefore not discussed in the present disclosure. By inputting the adjustable design parameters into the simulated neural network, the output simulation values are associated with the adjustable design parameters, so that the adjustable design parameters participate in the simulation calculation. The corresponding design parameters can then be optimized during the simulation process, which improves the computational efficiency of design optimization and greatly shortens the design optimization cycle.
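To make the distinction between the optimization cases concrete, the following is a minimal sketch of a container for the design parameters described above. The class name, fields, and the `k_` key prefix are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class DesignParameters:
    # Structural parameters describing the geometry, e.g. length/width/height of a chip.
    structure: dict = field(default_factory=dict)
    # Material coefficients keyed by the location where each material is applied,
    # e.g. {"core": 0.15, "shell": 0.8} for two heat-transfer coefficients k.
    materials: dict = field(default_factory=dict)
    # Whether the materials are part of the optimization (otherwise they stay fixed).
    optimize_materials: bool = True

    def adjustable(self):
        """Return the subset of parameters included in the optimization."""
        params = dict(self.structure)
        if self.optimize_materials:
            params.update({f"k_{loc}": v for loc, v in self.materials.items()})
        return params

# Geometry-only optimization: the material coefficient is fixed and excluded.
chip = DesignParameters(structure={"length": 10.0, "width": 5.0, "height": 1.0},
                        materials={"core": 0.15}, optimize_materials=False)
```

With `optimize_materials=True`, the same container would also expose the material coefficients as adjustable variables, matching the simultaneous-optimization case.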
Regarding the simulated neural network, as in the related art, its input parameters include space-time sampling points, and its output parameters include a simulation value of a target physical quantity (such as the temperature in the aforementioned heat conduction problem) at the input space-time sampling points. Unlike the related art, the input parameters of the simulated neural network of the exemplary embodiments of the present disclosure also include the adjustable design parameters. As described above, when the geometry and the materials are optimized simultaneously, the adjustable structural parameters and the adjustable material coefficients are input into the simulated neural network together (see fig. 3). When only the geometry needs to be optimized, the material coefficients need not be optimized, so only the adjustable structural parameters need to be input into the simulated neural network; the fixed material coefficients may or may not also be input. It has been verified that whether a fixed material coefficient is input in this case has little influence on the output simulation value, and the present disclosure is not limited in this respect.
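The input structure described above can be sketched with a tiny fully-connected network whose input concatenates a space-time sampling point with the adjustable design parameters. The architecture (one hidden layer, tanh activation) and sizes are assumptions for illustration only; the disclosure does not specify the network.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out=1):
    """A minimal two-layer MLP; weights are randomly initialized."""
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (n_hidden, n_out)), "b2": np.zeros(n_out)}

def simulate(net, xt, theta):
    """xt: (N, d_xt) space-time sampling points; theta: (d_theta,) design parameters.

    The design parameters are tiled and concatenated onto every sampling point,
    so the output simulation value depends on both, as in fig. 3.
    """
    inp = np.concatenate([xt, np.tile(theta, (xt.shape[0], 1))], axis=1)
    h = np.tanh(inp @ net["W1"] + net["b1"])
    return h @ net["W2"] + net["b2"]  # simulated physical quantity, shape (N, 1)

net = init_mlp(n_in=2 + 3, n_hidden=16)  # (x, t) plus 3 design parameters
u = simulate(net, xt=rng.uniform(size=(8, 2)), theta=np.array([10.0, 5.0, 0.15]))
```

Because theta enters the forward pass like any other input, gradients of the output with respect to the design parameters exist, which is what later allows the design parameters to be updated by the loss function.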
Regarding the partial differential equation module, as can be seen from the foregoing description, its input parameters include the output values of the simulated neural network and the material coefficients. Correspondingly, when the geometry and the materials are optimized simultaneously, the material coefficients input into the partial differential equation module are the adjustable material coefficients; when only the geometry needs to be optimized, the material coefficients input into the partial differential equation module are the fixed material coefficients. By determining whether the material coefficients of the partial differential equation module are adjustable according to the optimization objects, i.e., according to the adjustment range of the design parameters, the computational consistency between the partial differential equation module and the simulated neural network is ensured, and the partial differential equation module computes reliably.
FIG. 4 is a flowchart illustrating a design optimization method according to an exemplary embodiment of the present disclosure. Fig. 5 is a flow diagram illustrating a design optimization method according to an exemplary embodiment of the present disclosure. The design optimization method according to the exemplary embodiments of the present disclosure may be implemented in a computing device having sufficient operational capability.
Referring to fig. 4 and 5, in step S401, a plurality of sets of sampling design parameters and corresponding spatio-temporal sampling points are acquired.
In step S402, the simulated neural network is pre-trained based on the plurality of sets of sampling design parameters and the corresponding space-time sampling points, to obtain a pre-trained simulated neural network.
With reference to the foregoing, it should be appreciated that a set of sampling design parameters, including sampled structural parameters and sampled material coefficients, may be obtained by random sampling over a range of values. The sampled material coefficients may include one or more material coefficient values depending on the kinds of materials used; when multiple materials are used, different materials may be applied at different locations, so each material coefficient value is accompanied by information describing the location to which the material is applied. Accordingly, a set of sampling design parameters may describe the structural parameters and material coefficients per location, together with the positional relationships between the different locations. The present disclosure does not limit the particular form of the sampling design parameters, so long as the designed structure and materials can be described unambiguously.
Where only the geometry needs to be optimized, the sampled material coefficients are the same across the sets of sampling design parameters. It should be understood that, when multiple materials are used, this sameness means the material coefficient values at the same locations are the same. Where the geometry and the materials need to be optimized simultaneously, the sampled material coefficients are not identical across the sets of sampling design parameters. For example, several sets of sampled structural parameters and several sets of sampled material coefficients may be determined first and then combined by permutation to obtain the sets of sampling design parameters; of course, the sampled material coefficients may also be completely different across the sets.
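The permutation-and-combination step just described can be sketched as a Cartesian product of structure samples and material samples. All values below are illustrative placeholders.

```python
from itertools import product

# Several sampled structural parameter sets and sampled material coefficients
# are combined to form the groups of sampling design parameters.
structure_samples = [(10.0, 5.0), (12.0, 4.0), (8.0, 6.0)]  # (length, width)
material_samples = [0.15, 0.8]                              # heat-transfer coefficient k

sampling_design_parameters = [
    {"structure": s, "k": k} for s, k in product(structure_samples, material_samples)
]
# 3 structure sets x 2 material sets -> 6 groups of sampling design parameters
```

For the geometry-only case, `material_samples` would collapse to a single fixed value, so every group shares the same material coefficient, as stated above.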
It will be appreciated that the boundary points differ between geometric structures, so corresponding space-time sampling points need to be determined for each set of sampling design parameters to ensure effective sampling.
The pre-training process is similar to the simulation training process in the related art. It should be appreciated that the sampling design parameters, being sampled values, may be input into the simulation model as constants, after which pre-training proceeds with reference to the training process in the related art. Specifically, according to the optimization target, either the sampled structural parameters and the corresponding space-time sampling points, or the sampled structural parameters, the sampled material coefficients, and the corresponding space-time sampling points, are input into the simulated neural network, yielding the simulation value (i.e., the physical field) at each space-time sampling point output by the network. For space-time sampling points inside the calculation region described by the sampling design parameters, the partial differential equation loss can be calculated by inputting the physical field and the sampled material coefficients into the partial differential equation module; for boundary space-time sampling points described by the sampling design parameters, the boundary condition loss can be calculated in combination with the boundary conditions. Finally, according to the partial differential equation loss and the boundary condition loss, the simulated neural network is iteratively trained by gradient descent to obtain the pre-trained simulated neural network.
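The loss structure of this pre-training can be shown with a deliberately tiny stand-in: instead of a neural network, the field u(x) is a quadratic whose coefficients are trained by gradient descent on a PDE residual (steady one-dimensional heat conduction, k·u'' = 0) plus boundary-condition residuals (u(0) = 0, u(L) = 1). This is a toy assumption, far simpler than the disclosure's network, but the PDE-loss-plus-BC-loss composition is the same.

```python
import numpy as np

k, L = 0.5, 1.0          # a sampled material coefficient and structure size
c = np.zeros(3)          # trainable coefficients of u(x) = c0 + c1*x + c2*x^2
lr = 0.1

for _ in range(3000):
    # PDE residual: u'' = 2*c2, so the residual of k*u'' = 0 is 2*k*c2
    r_pde = 2.0 * k * c[2]
    # Boundary residuals at x = 0 and x = L
    r0 = c[0]
    rL = c[0] + c[1] * L + c[2] * L**2 - 1.0
    # Analytic gradient of Loss = r_pde^2 + r0^2 + rL^2 w.r.t. (c0, c1, c2)
    grad = np.array([2 * r0 + 2 * rL,
                     2 * rL * L,
                     2 * r_pde * 2 * k + 2 * rL * L**2])
    c -= lr * grad       # gradient descent, as in the pre-training step

u_mid = c[0] + c[1] * 0.5 + c[2] * 0.25  # converges to the linear profile u(x) = x/L
```

Minimizing both residuals drives the quadratic term to zero and satisfies the boundary values, so the trained field approaches the exact solution u(x) = x/L without ever being given labeled solution data, which is exactly the role the PDE and boundary losses play in the pre-training described above.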
Compared with the existing trial-and-error optimization approach, the number of groups of sampling design parameters used in pre-training is relatively small; that is, pre-training of the simulated neural network can be accomplished by randomly sampling a small number of sampling design parameters. The resulting pre-trained simulated neural network learns the implicit functional relation of the output simulation values with respect to the structural parameters and material coefficients, which helps accelerate the subsequent formal training, shortens the overall optimization process, and shortens the design optimization cycle.
In step S403, initial design parameters are acquired. The initial design parameters are the initial values of the adjustable design parameters used in the subsequent formal training; they may be given randomly or according to a certain rule.
Optionally, step S403 includes: solving an approximate optimal solution of the adjustable design parameters based on the pre-trained simulated neural network, and using it as the initial design parameters. Using the pre-trained simulated neural network obtained in step S402 to solve the approximate optimal solution makes the initial design parameters closer to the result of the optimization calculation, which shortens the training process and further shortens the design optimization cycle. Specifically, gradient descent can be used to directly solve the approximate optimal solution of the adjustable design parameters; this is a mature technique in the field and is not described in detail here.
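The shape of this gradient-descent solve can be illustrated with an assumed toy: the frozen pre-trained network is stood in for by a known smooth surrogate H(theta), and the design parameters are descended against a cost term plus a penalty for missing the design target. The functions, penalty form, and step sizes are all illustrative assumptions.

```python
import numpy as np

target = 1.0
alpha, beta = 0.1, 1.0

def H(theta):
    """Surrogate prediction of the quantity of interest (stands in for the
    frozen pre-trained network)."""
    return theta[0] * theta[1]

def grad_step(theta, lr=0.05):
    # Loss = alpha*Cost(theta) + beta*max(0, target - H(theta)), with Cost = |theta|^2.
    shortfall = max(0.0, target - H(theta))
    g_cost = 2 * alpha * theta
    # Penalty gradient only acts while the target is not yet met.
    g_target = -beta * np.array([theta[1], theta[0]]) if shortfall > 0 else 0.0
    return theta - lr * (g_cost + g_target)

theta = np.array([0.2, 0.2])
for _ in range(500):
    theta = grad_step(theta)
# theta grows until H(theta) reaches the target, while the cost term keeps it small
```

Only theta is updated here; the surrogate stays fixed, which is why this approximate solution is cheap to obtain and serves well as the initial value for formal training.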
In step S404, the initial design parameters are used as the initial values of the adjustable design parameters, and the adjustable design parameters and the pre-trained simulated neural network are iteratively trained multiple times. During each iteration, the adjustable design parameters and the pre-trained simulated neural network are updated based on the loss function until the loss function satisfies the set condition, yielding the optimal design parameters and the optimal simulated neural network. This step defines the formal training process. By updating the adjustable design parameters and the pre-trained simulated neural network based on the loss function, more accurate optimal design parameters and the corresponding optimal simulated neural network can be automatically calculated, so that the optimal design and the corresponding simulation result are obtained synchronously. Compared with a trial-and-error optimization approach, this improves the computational efficiency of design optimization and greatly shortens the design optimization cycle.
Optionally, the loss function includes a boundary condition loss, a partial differential equation loss, a manufacturing cost loss, and a design target loss, although other reasonable losses may also be included. The goal of training is to minimize the loss function. The boundary condition loss and the partial differential equation loss are the same as in the related art and are mainly used to guarantee convergence so that the simulation result is computed accurately; that is, they mainly drive the training of the simulated neural network, and are not repeated here. The manufacturing cost loss and the design target loss are related to the design parameters and are mainly used to optimize the design, i.e., they mainly drive the training of the adjustable design parameters. The manufacturing cost loss is obtained from the adjustable design parameters; a lower value reflects a lower manufacturing cost of the design, so minimizing it minimizes the manufacturing cost. Specifically, the manufacturing cost loss is affected by both structural and material factors: the larger the structural size, the more material is consumed and the higher the cost; the finer the structure, the greater the processing difficulty and the higher the cost; and the higher the price of the material, the higher the cost. In the calculation, referring to fig. 3, the structure-related part of the manufacturing cost loss is split off as the structure loss, and the material-related part is split off as the material loss. If the materials need not be optimized, the optimization calculation of the structural parameters can be realized using the structure loss alone.
Regarding the design target loss: as described above, the simulated result of the designed product needs to satisfy the design target, and the simulation value in the region of interest is often required to reach a certain target value. On this basis, constructing the design target loss from the output values of the pre-trained simulated neural network and the adjustable design parameters ensures that the simulation result satisfies the design index. It should be appreciated that if only the structure is optimized and the materials are not, a fixed material coefficient may also be used as needed when calculating the design target loss. Referring to fig. 3, the design target loss is included among the other losses, which may further include any other reasonable losses. By adopting such a loss function, the structural parameters, material coefficients, optimization targets, and constraints can all be incorporated into the design of the simulation model, thereby realizing the design optimization calculation.
Specifically, the structure and material design problem can be characterized as:

minimize Cost(θ)
subject to H(θ) > target

The above expression means that Cost(θ) is minimized on the premise that H(θ) > target is satisfied. Here, θ denotes the design parameters; Cost(θ) is the manufacturing cost; H(θ) is generally a quantity of interest (QoI), i.e., the result for a physical quantity in the region of interest obtained after simulation based on the structure and materials; and target is the design target for the QoI. H(θ) may be a simulation value output by the simulated neural network directly, or may need to be obtained by further theoretical calculation based on the simulation value and the design parameters. Cost(θ) and H(θ) may also be set according to practical engineering conditions, for example, maximizing a certain index while staying below a given cost line.
As an example, the loss function may be written as

Loss = Loss_PDE + Loss_BC + α·Cost(θ) + β·(target − H(θ)),

where Loss_PDE denotes the partial differential equation loss, Loss_BC denotes the boundary condition loss, α·Cost(θ) denotes the manufacturing cost loss, and β·(target − H(θ)) denotes the design target loss. α and β are weight coefficients; they may be fixed values or may be adjusted during training, and the present disclosure is not limited in this respect.

In addition, the design target loss may instead be expressed as β·|target − H(θ)|, so that H(θ) approaches target and the resulting H(θ) is prevented from being too large. In this case, the design target loss only limits the distance between H(θ) and target, not their relative size; therefore, to ensure H(θ) > target, the target value target_train used in the training process may be set slightly larger than target.
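A numeric sketch of the loss above, with all input values as illustrative placeholders; the `absolute` flag switches to the |target − H| variant just discussed.

```python
def total_loss(loss_pde, loss_bc, cost, H_val, target, alpha=0.1, beta=1.0,
               absolute=False):
    """Loss = Loss_PDE + Loss_BC + alpha*Cost + beta*(target - H) (or beta*|target - H|)."""
    design_term = beta * (abs(target - H_val) if absolute else (target - H_val))
    return loss_pde + loss_bc + alpha * cost + design_term

# 0.02 + 0.01 + 0.1*3.0 + 1.0*(1.0 - 0.8) = 0.53
loss = total_loss(loss_pde=0.02, loss_bc=0.01, cost=3.0, H_val=0.8, target=1.0)
```

Note that the signed variant rewards overshooting the target (H above target makes the design term negative), while the absolute variant penalizes any deviation, which is why the latter pairs with a slightly inflated training target as described above.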
In some embodiments, optionally, step S404 may directly perform multiple iterative trainings on the entire simulation model, minimizing the total loss by gradient descent.
In other embodiments, optionally, referring to fig. 5, step S404 includes: using the initial design parameters as the initial values of the adjustable design parameters, and performing multiple rounds of alternating iterative training on the adjustable design parameters and the pre-trained simulated neural network to obtain the optimal design parameters and the optimal simulated neural network. Each round of alternating iterative training includes: iteratively training the adjustable design parameters based on the manufacturing cost loss and the design target loss to update the adjustable design parameters; and iteratively training the pre-trained simulated neural network based on the updated adjustable design parameters, the boundary condition loss, and the partial differential equation loss to update the pre-trained simulated neural network. When the boundary condition loss, the partial differential equation loss, the manufacturing cost loss, and the design target loss are each smaller than their corresponding thresholds, the alternating iterative training ends. By training the adjustable design parameters and the pre-trained simulated neural network in turn, the design parameters can first be optimized in a focused manner; the space-time sampling points (mainly the boundary space-time sampling points) are then updated based on the optimization result of the design parameters, and the pre-trained simulated neural network is optimized in a focused manner in turn. This improves the optimization efficiency of both the adjustable design parameters and the pre-trained simulated neural network, further shortening the design optimization cycle.
In addition, as part of the structure in the model is updated, the losses corresponding to the other part also change. Taking the condition that the boundary condition loss, the partial differential equation loss, the manufacturing cost loss, and the design target loss are each smaller than their corresponding thresholds as the condition for ending the alternating iterative training ensures that every training objective is met, so that the design optimization result is reliable.
Optionally, before iteratively training the adjustable design parameters and the pre-trained simulated neural network, each round of alternating iterative training further includes: acquiring the latest first epoch number and the latest second epoch number, where the first epoch number is used for iteratively training the adjustable design parameters and the second epoch number is used for iteratively training the pre-trained simulated neural network. One epoch means that the model being trained is trained once using all samples in the training set, i.e., every sample in the training set has the opportunity to update the model parameters. The epoch number serves as a hyperparameter defining how many passes the learning algorithm makes over the entire training set; the larger the epoch number, the more times the model parameters are updated, so the model gradually moves from under-fitting toward over-fitting. A reasonable epoch number therefore yields an appropriate calculation result, reduces the possibility of both under-fitting and over-fitting, and helps improve computational performance. Acquiring, before each round of alternating iterative training, the first epoch number corresponding to the adjustable design parameters and the second epoch number corresponding to the pre-trained simulated neural network, and carrying out the subsequent alternating iterative training according to them, makes both epoch numbers adjustable. This facilitates timely adjustment of the number of updates of the adjustable design parameters and of the pre-trained simulated neural network, improving computational performance.
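The control flow of the alternating rounds with per-round epoch numbers can be sketched on an assumed toy objective (two scalar variables pulled toward each other and toward a fixed optimum, standing in for the design parameters and the network weights):

```python
def alternating_train(theta, w, schedule, lr=0.1):
    """Each schedule entry is (first_epoch_number, second_epoch_number) for one round."""
    for n_epochs_design, n_epochs_net in schedule:  # latest epoch numbers, per round
        for _ in range(n_epochs_design):            # design step: update theta only
            theta -= lr * 2 * (theta - w)
        for _ in range(n_epochs_net):               # network step: update w only
            w -= lr * 2 * (w - 1.0)                 # 1.0 stands in for the physics optimum
    return theta, w

theta, w = alternating_train(theta=5.0, w=0.0,
                             schedule=[(10, 10), (20, 20), (20, 20)])
# both variables converge toward 1.0 as the rounds alternate
```

Because the schedule is re-read at the start of each round, the two epoch numbers remain adjustable between rounds, mirroring the "latest first and second epoch number" acquisition described above.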
Fig. 6 is a flowchart illustrating a design optimization method according to another exemplary embodiment of the present disclosure. Fig. 7 is a flow diagram illustrating a design optimization method according to another exemplary embodiment of the present disclosure. The design optimization method according to the exemplary embodiments of the present disclosure may be implemented in a computing device having sufficient operational capability.
Referring to fig. 6 and 7, in step S601, a plurality of sets of sampling design parameters and corresponding spatio-temporal sampling points are acquired.
In step S602, the simulated neural network is pre-trained based on the plurality of sets of sampling design parameters and the corresponding space-time sampling points, to obtain a pre-trained simulated neural network.
In step S603, initial design parameters are acquired.
Steps S601 to S603 are similar to the aforementioned steps S401 to S403 and are not repeated here.
In step S604, a plurality of sets of candidate design parameters are determined based on the initial design parameters.
Alternatively, step S604 may determine a neighborhood of the initial design parameters and then determine a plurality of sets of candidate design parameters from within the neighborhood.
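A minimal sketch of step S604 under stated assumptions: candidates are drawn uniformly from a box-shaped neighborhood around the initial design parameters. The neighborhood shape, radius, and count are illustrative choices, not specified by the disclosure.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_candidates(theta0, radius=0.1, n_candidates=8):
    """Draw candidate design parameters from a small box around theta0."""
    theta0 = np.asarray(theta0, dtype=float)
    offsets = rng.uniform(-radius, radius, size=(n_candidates, theta0.size))
    return theta0 + offsets

candidates = sample_candidates([10.0, 5.0, 0.15])
# every candidate stays within the neighborhood of the initial design parameters
```

Keeping the radius small is what makes the later per-candidate fine-tuning cheap: each candidate is close enough to the initial parameters that the pre-trained network needs only slight adjustment.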
In step S605, the pre-trained simulated neural network is trained for each set of candidate design parameters to obtain a trained simulated neural network corresponding to that set of candidate design parameters, and a manufacturing cost loss and a design target loss are determined according to the candidate design parameters and the trained simulated neural network, where the manufacturing cost loss is obtained from the candidate design parameters and the design target loss is obtained from the output values of the trained simulated neural network and the candidate design parameters.
In step S606, one set of candidate design parameters is selected from the plurality of sets as the optimal design parameters according to the manufacturing cost loss and the design target loss corresponding to each set, and the trained simulated neural network corresponding to the optimal design parameters is used as the optimal simulated neural network.
According to the design optimization method of this exemplary embodiment, as described above, the initial design parameters obtained in step S603, specifically the approximate optimal solution of the adjustable design parameters solved based on the pre-trained simulated neural network, are already relatively close to the result of the optimization calculation. On this basis, during formal training, several sets of candidate design parameters are searched within a small range in the neighborhood of the initial design parameters, and one set is selected as the final optimal design parameters, which ensures the accuracy of the calculation result as much as possible while greatly reducing the amount of computation. Regarding the reduced computation: on one hand, no training calculation of the adjustable design parameters is needed, which greatly reduces the amount of computation; on the other hand, because the candidate design parameters come from the neighborhood of the initial design parameters, they match the pre-trained simulated neural network, so when the pre-trained simulated neural network is trained for each set of candidate design parameters, its parameters only need to be adjusted slightly, reducing the training computation of the simulated neural network. Regarding the selection of the optimal design parameters: by comparing the manufacturing cost loss and the design target loss corresponding to each set of candidate design parameters, the set that best meets the design requirements can be selected, realizing fast and reliable selection of the optimal design parameters.
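The selection in step S606 can be sketched with an assumed scoring rule combining the two losses; the score form, the stand-in cost and quantity-of-interest functions, and all values are illustrative, not the disclosure's exact criterion.

```python
def select_optimal(candidates, cost_fn, H_fn, target, alpha=0.1, beta=1.0):
    """Pick the candidate with the lowest weighted cost-plus-target-shortfall score."""
    def score(theta):
        return alpha * cost_fn(theta) + beta * max(0.0, target - H_fn(theta))
    return min(candidates, key=score)

cands = [(10.0, 5.0), (9.0, 5.5), (11.0, 4.0)]
best = select_optimal(cands,
                      cost_fn=lambda t: t[0] * t[1],  # area as a stand-in cost
                      H_fn=lambda t: t[0] + t[1],     # stand-in quantity of interest
                      target=14.0)
# all three candidates meet the target here, so the cheapest one wins
```

The trained simulated neural network fine-tuned for the winning candidate would then be kept as the optimal simulated neural network, so the optimal design and its simulation are obtained together.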
This method uses the pre-trained simulated neural network and accelerates and simplifies the simulation calculation based on transfer learning. The loss function used in training is as described above and is not detailed here.
FIG. 8 is a block diagram illustrating a design optimization device according to an exemplary embodiment of the present disclosure.
Referring to fig. 8, a design optimization apparatus 800 according to an exemplary embodiment of the present disclosure includes an acquisition unit 801, a pre-training unit 802, and a first training unit 803.
The acquisition unit 801 is configured to acquire a plurality of sets of sample design parameters and corresponding spatio-temporal sample points.
The pre-training unit 802 is configured to pre-train the simulated neural network based on the plurality of sets of sampling design parameters and the corresponding spatio-temporal sampling points to obtain a pre-trained simulated neural network.
The acquisition unit 801 is further configured to acquire initial design parameters.
The first training unit 803 is configured to perform multiple iterative training on the adjustable design parameter and the pre-training simulation neural network with the initial design parameter as an initial value of the adjustable design parameter, and update the adjustable design parameter and the pre-training simulation neural network based on the loss function in the process of each iterative training until the loss function meets the set condition, thereby obtaining the optimal design parameter and the optimal simulation neural network.
Optionally, the loss function includes a boundary condition loss, a partial differential equation loss, a manufacturing cost loss, a design objective loss, the manufacturing cost loss being derived from the tunable design parameters, the design objective loss being derived from the output values of the pre-trained simulated neural network and the tunable design parameters.
Optionally, the first training unit 803 is further configured to use the initial design parameters as the initial values of the adjustable design parameters and to perform multiple rounds of alternating iterative training on the adjustable design parameters and the pre-trained simulated neural network to obtain the optimal design parameters and the optimal simulated neural network. Each round of alternating iterative training includes: iteratively training the adjustable design parameters based on the manufacturing cost loss and the design target loss to update the adjustable design parameters; and iteratively training the pre-trained simulated neural network based on the updated adjustable design parameters, the boundary condition loss, and the partial differential equation loss to update the pre-trained simulated neural network. When the boundary condition loss, the partial differential equation loss, the manufacturing cost loss, and the design target loss are each smaller than their corresponding thresholds, the alternating iterative training ends.
Optionally, before iteratively training the adjustable design parameters and the pre-trained simulated neural network, each round of alternating iterative training further includes: acquiring the latest first epoch number and the latest second epoch number, where the first epoch number is used for iteratively training the adjustable design parameters and the second epoch number is used for iteratively training the pre-trained simulated neural network.
Fig. 9 is a block diagram illustrating a design optimization device according to another exemplary embodiment of the present disclosure.
Referring to fig. 9, a design optimization apparatus 900 according to an exemplary embodiment of the present disclosure includes an acquisition unit 901, a pre-training unit 902, a determination unit 903, a second training unit 904, and a selection unit 905.
The acquisition unit 901 is configured to acquire a plurality of sets of sampling design parameters and corresponding spatiotemporal sampling points.
The pre-training unit 902 is configured to pre-train the simulated neural network based on the plurality of sets of sampling design parameters and the corresponding spatio-temporal sampling points, resulting in a pre-trained simulated neural network.
The acquisition unit 901 is also configured to acquire initial design parameters.
The determining unit 903 is configured to determine a plurality of sets of candidate design parameters based on the initial design parameters.
The second training unit 904 is configured to train the pre-trained simulated neural network for each set of candidate design parameters to obtain a trained simulated neural network corresponding to that set of candidate design parameters, and to determine a manufacturing cost loss and a design target loss according to the candidate design parameters and the trained simulated neural network, where the manufacturing cost loss is obtained from the candidate design parameters and the design target loss is obtained from the output values of the trained simulated neural network and the candidate design parameters.
The selection unit 905 is configured to select one set from the plurality of sets of candidate design parameters as an optimal design parameter according to the manufacturing cost loss and the design target loss corresponding to each set of candidate design parameters, and to use the trained simulated neural network corresponding to the optimal design parameter as an optimal simulated neural network.
Optionally, the acquisition unit 801 and the acquisition unit 901 are further configured to solve an approximate optimal solution of the adjustable design parameters based on the pre-trained simulated neural network, as the initial design parameters.
Optionally, the design parameters include structural parameters and material coefficients, and the simulation model further includes a partial differential equation module. In the case where the adjustable design parameters include adjustable structural parameters and adjustable material coefficients, the input parameters of the partial differential equation module include the output values of the simulated neural network and the adjustable material coefficients; in the case where the adjustable design parameters are only the adjustable structural parameters, the input parameters of the partial differential equation module include the output values of the simulated neural network and the fixed material coefficients.
It should be understood that the specific implementation of the design optimization device 800 according to the exemplary embodiment of the present disclosure may be implemented with reference to the specific implementation of the design optimization method described in connection with fig. 3 to 5, and the specific implementation of the design optimization device 900 according to the exemplary embodiment of the present disclosure may be implemented with reference to the specific implementation of the design optimization method described in connection with fig. 3, 6, and 7, which will not be repeated here.
Design optimization methods and apparatuses according to exemplary embodiments of the present disclosure have been described above with reference to fig. 3 to 9.
The various units in the design optimization devices shown in fig. 8 and 9 may be configured as software, hardware, firmware, or any combination thereof that performs a particular function. For example, each unit may correspond to an application-specific integrated circuit, to pure software code, or to a module combining software with hardware. Furthermore, one or more functions implemented by the respective units may also be performed uniformly by components in a physical entity device (e.g., a processor, a client, a server, or the like).
In addition, the design optimization method described with reference to fig. 3 to 7 may be implemented by a program (or instructions) recorded on a computer-readable storage medium. For example, according to an exemplary embodiment of the present disclosure, a computer-readable storage medium storing instructions may be provided, wherein the instructions, when executed by at least one computing device, cause the at least one computing device to perform a design optimization method according to the present disclosure.
The computer program in the above-described computer-readable storage medium may run in an environment deployed on a computer device such as a client, a host, a proxy device, or a server. It should be noted that the computer program may also be used to perform additional steps beyond those described above, or to perform more specific processing when the above-described steps are performed; the contents of these additional steps and further processing have been mentioned in the description of the related methods with reference to fig. 3 to 7 and are therefore not repeated here.
It should be noted that each unit in the design optimization apparatus according to the exemplary embodiments of the present disclosure may rely entirely on the execution of a computer program to realize its corresponding function; that is, each unit corresponds to a step in the functional architecture of the computer program, so that the entire system is invoked through a dedicated software package (e.g., a lib library) to realize the corresponding functions.
On the other hand, each of the units shown in fig. 8 and 9 may also be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the corresponding operations may be stored in a computer-readable medium, such as a storage medium, so that the processor can perform the corresponding operations by reading and executing the corresponding program code or code segments.
For example, exemplary embodiments of the present disclosure may also be implemented as a computing device including a storage component and a processor, the storage component having stored therein a set of computer-executable instructions that, when executed by the processor, perform a design optimization method in accordance with exemplary embodiments of the present disclosure.
In particular, the computing device may be deployed in a server or a client, or on a node device in a distributed network environment. Further, the computing device may be a PC computer, a tablet device, a personal digital assistant, a smart phone, a web application, or any other device capable of executing the above set of instructions.
Here, the computing device need not be a single computing device; it may be any device or aggregate of circuits capable of executing the above-described instructions (or instruction set), alone or in combination. The computing device may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In the computing device, the processor may include a central processing unit (CPU), a graphics processing unit (GPU), a programmable logic device, a special-purpose processor system, a microcontroller, or a microprocessor. By way of example and not limitation, the processor may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
Some of the operations described in the design optimization method according to the exemplary embodiments of the present disclosure may be implemented in software, some may be implemented in hardware, and still others may be implemented in a combination of software and hardware.
The processor may execute instructions or code stored in one of the storage components, which may also store data. Instructions and data may also be transmitted and received over a network via a network interface device, which may employ any known transmission protocol.
The storage component may be integrated with the processor, for example as RAM or flash memory disposed within an integrated-circuit microprocessor. Further, the storage component may comprise a stand-alone device, such as an external disk drive, a storage array, or any other storage device usable by a database system. The storage component and the processor may be operatively coupled, or may communicate with each other, for example through an I/O port or a network connection, so that the processor can read files stored in the storage component.
In addition, the computing device may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the computing device may be connected to each other via buses and/or networks.
Design optimization methods according to exemplary embodiments of the present disclosure may be described in terms of various interconnected or coupled functional blocks or functional diagrams. However, these functional blocks or functional diagrams may equally be integrated into a single logic device or operated with imprecise boundaries between them.
Thus, the design optimization method described with reference to fig. 3-7 may be implemented by a system comprising at least one computing device and at least one storage device storing instructions.
According to an exemplary embodiment of the present disclosure, the at least one computing device is a computing device for performing the design optimization method according to the exemplary embodiments of the present disclosure, and the storage device stores a set of computer-executable instructions that, when executed by the at least one computing device, perform the design optimization method described with reference to fig. 3 to 7.
The foregoing description of exemplary embodiments of the present disclosure is intended to be illustrative only, not exhaustive, and the present disclosure is not limited to the disclosed exemplary embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. Accordingly, the scope of the present disclosure should be determined by the scope of the claims.

Claims (10)

1. A design optimization method, wherein the design optimization method is implemented based on a simulation model, the simulation model includes a simulation neural network and an adjustable design parameter, input parameters of the simulation neural network include a space-time sampling point and the adjustable design parameter, output parameters of the simulation neural network include simulation values of a target physical quantity at the input space-time sampling point, the design optimization method includes:
Acquiring a plurality of groups of sampling design parameters and corresponding space-time sampling points;
based on the multiple groups of sampling design parameters and corresponding space-time sampling points, pre-training the simulated neural network to obtain a pre-trained simulated neural network;
acquiring initial design parameters;
and taking the initial design parameters as initial values of the adjustable design parameters, performing multiple iterative training on the adjustable design parameters and the pre-training simulation neural network, and updating the adjustable design parameters and the pre-training simulation neural network based on a loss function in the process of each iterative training until the loss function meets a set condition, so as to obtain an optimal design parameter and an optimal simulation neural network.
2. The design optimization method of claim 1, wherein,
the loss function comprises boundary condition loss, partial differential equation loss, manufacturing cost loss and design target loss, wherein the manufacturing cost loss is obtained by the adjustable design parameters, and the design target loss is obtained by the output value of the pre-training simulation neural network and the adjustable design parameters.
3. The design optimization method as set forth in claim 2, wherein the performing multiple iterative training on the adjustable design parameters and the pre-training simulation neural network with the initial design parameters as initial values of the adjustable design parameters, and updating the adjustable design parameters and the pre-training simulation neural network based on the loss function during each iterative training until the loss function satisfies the set condition, to obtain the optimal design parameters and the optimal simulation neural network, comprises:
Taking the initial design parameters as initial values of the adjustable design parameters, and carrying out repeated alternate iterative training on the adjustable design parameters and the pre-training simulation neural network to obtain the optimal design parameters and the optimal simulation neural network;
wherein each alternating iterative training comprises:
iteratively training the adjustable design parameters based on the manufacturing cost loss and the design objective loss to update the adjustable design parameters;
iteratively training the pre-training simulated neural network based on the updated adjustable design parameters, the boundary condition loss and the partial differential equation loss to update the pre-training simulated neural network;
wherein the multiple alternating iterative training is ended in the case that the manufacturing cost loss and the design target loss are both less than their respective corresponding thresholds.
4. The design optimization method of claim 3, wherein prior to iteratively training the tunable design parameters and the pre-trained simulated neural network, each alternate iterative training further comprises:
and acquiring the latest first epoch number and second epoch number, wherein the first epoch number is used for iteratively training the adjustable design parameters, and the second epoch number is used for iteratively training the pre-training simulation neural network.
5. A design optimization method, wherein the design optimization method is implemented based on a simulation model, the simulation model includes a simulation neural network and an adjustable design parameter, input parameters of the simulation neural network include a space-time sampling point and the adjustable design parameter, output parameters of the simulation neural network include simulation values of a target physical quantity at the input space-time sampling point, the design optimization method includes:
acquiring a plurality of groups of sampling design parameters and corresponding space-time sampling points;
based on the multiple groups of sampling design parameters and corresponding space-time sampling points, pre-training the simulated neural network to obtain a pre-trained simulated neural network;
acquiring initial design parameters;
determining a plurality of sets of candidate design parameters based on the initial design parameters;
training the pre-training simulation neural network aiming at each group of the candidate design parameters to obtain a trained simulation neural network corresponding to the candidate design parameters, and determining manufacturing cost loss and design target loss according to the candidate design parameters and the trained simulation neural network, wherein the manufacturing cost loss is obtained by the candidate design parameters, and the design target loss is obtained by output values of the trained simulation neural network and the candidate design parameters;
And selecting one group from the plurality of groups of candidate design parameters as an optimal design parameter according to the manufacturing cost loss and the design target loss corresponding to each group of candidate design parameters, and taking the trained simulation neural network corresponding to the optimal design parameter as an optimal simulation neural network.
6. The design optimization method of any one of claims 1 to 5, wherein the obtaining initial design parameters comprises:
and solving an approximate optimal solution of the adjustable design parameter based on the pre-training simulation neural network to serve as the initial design parameter.
7. The design optimization method of any one of claims 1 to 5, wherein the design parameters include structural parameters and material coefficients, the simulation model further includes a partial differential equation module, and the input parameters of the partial differential equation module include an output value of the simulated neural network and the adjustable material coefficients in a case where the adjustable design parameters include an adjustable structural parameter and an adjustable material coefficient, and the input parameters of the partial differential equation module include an output value of the simulated neural network and a fixed material coefficient in a case where the adjustable design parameters are adjustable structural parameters.
8. A design optimization apparatus, characterized in that the design optimization apparatus is implemented based on a simulation model including a simulation neural network and an adjustable design parameter, input parameters of the simulation neural network including a space-time sampling point and the adjustable design parameter, output parameters of the simulation neural network including a simulation value of a target physical quantity at the input space-time sampling point, the design optimization apparatus comprising:
an acquisition unit configured to acquire a plurality of sets of sampling design parameters and corresponding spatio-temporal sampling points;
the pre-training unit is configured to pre-train the simulated neural network based on the plurality of groups of sampling design parameters and corresponding space-time sampling points to obtain a pre-trained simulated neural network;
the acquisition unit is further configured to acquire initial design parameters;
the first training unit is configured to take the initial design parameters as initial values of the adjustable design parameters, perform multiple iterative training on the adjustable design parameters and the pre-training simulation neural network, update the adjustable design parameters and the pre-training simulation neural network based on a loss function in the process of each iterative training until the loss function meets a set condition, and obtain an optimal design parameter and an optimal simulation neural network.
9. A computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform the design optimization method of any one of claims 1-7.
10. A system comprising at least one computing device and at least one storage device storing instructions that, when executed by the at least one computing device, cause the at least one computing device to perform the design optimization method of any one of claims 1-7.
CN202210952590.2A 2022-08-09 2022-08-09 Design optimization method and design optimization device Pending CN117634278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210952590.2A CN117634278A (en) 2022-08-09 2022-08-09 Design optimization method and design optimization device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210952590.2A CN117634278A (en) 2022-08-09 2022-08-09 Design optimization method and design optimization device

Publications (1)

Publication Number Publication Date
CN117634278A true CN117634278A (en) 2024-03-01

Family

ID=90032570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210952590.2A Pending CN117634278A (en) 2022-08-09 2022-08-09 Design optimization method and design optimization device

Country Status (1)

Country Link
CN (1) CN117634278A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117852315A (en) * 2024-03-07 2024-04-09 北京适创科技有限公司 Method for determining initial conditions of computer-aided engineering and related device
CN117852315B (en) * 2024-03-07 2024-05-10 北京适创科技有限公司 Method for determining initial conditions of computer-aided engineering and related device

Similar Documents

Publication Publication Date Title
Krishnapriyan et al. Characterizing possible failure modes in physics-informed neural networks
Zehnder et al. Ntopo: Mesh-free topology optimization using implicit neural representations
Elsayed et al. Robust parameter design optimization using Kriging, RBF and RBFNN with gradient-based and evolutionary optimization techniques
US20230062600A1 (en) Adaptive design and optimization using physics-informed neural networks
Liu et al. A simple reliability-based topology optimization approach for continuum structures using a topology description function
JP7460627B2 (en) Prescription analysis in highly collinear response spaces
Ma et al. Research on slope reliability analysis using multi-kernel relevance vector machine and advanced first-order second-moment method
Chen et al. MGNet: a novel differential mesh generation method based on unsupervised neural networks
Lichtenstein et al. Deep eikonal solvers
Cordero-Gracia et al. An interpolation tool for aerodynamic mesh deformation problems based on octree decomposition
JP2023526177A (en) Predictive modeling of manufacturing processes using a set of inverted models
CN116341097B (en) Transonic wing optimal design method based on novel high-dimensional proxy model
CN117634278A (en) Design optimization method and design optimization device
Hao et al. Design optimization by integrating limited simulation data and shape engineering knowledge with Bayesian optimization (BO-DK4DO)
Wen et al. Cost reduction for data acquisition based on data fusion: Reconstructing the surface temperature of a turbine blade
Basterrech et al. Evolutionary Echo State Network: A neuroevolutionary framework for time series prediction
Yang et al. Dmis: Dynamic mesh-based importance sampling for training physics-informed neural networks
Meng et al. Efficient uncertainty quantification for unconfined flow in heterogeneous media with the sparse polynomial chaos expansion
Chen et al. Gpt-pinn: Generative pre-trained physics-informed neural networks toward non-intrusive meta-learning of parametric pdes
Hu et al. ℓ-DARTS: Light-weight differentiable architecture search with robustness enhancement strategy
Fischer et al. A surrogate-based adjustment factor approach to multi-fidelity design optimization
Gao et al. Developing a new mesh deformation technique based on support vector machine
Li et al. Stepwise-then-intelligent algorithm (STIA) for optimizing remotely sensed image rectification
Koziel et al. Introduction to surrogate modeling and surrogate-based optimization
Clarich et al. Formulations for Robust Design and Inverse Robust Design

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination