CN114611678A - Training method and device, data processing method, electronic device and storage medium - Google Patents

Training method and device, data processing method, electronic device and storage medium Download PDF

Info

Publication number
CN114611678A
CN114611678A
Authority
CN
China
Prior art keywords
group
neural network
output parameters
parameters
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210281615.0A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Biren Intelligent Technology Co Ltd
Original Assignee
Shanghai Biren Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Biren Intelligent Technology Co Ltd filed Critical Shanghai Biren Intelligent Technology Co Ltd
Priority to CN202210281615.0A priority Critical patent/CN114611678A/en
Publication of CN114611678A publication Critical patent/CN114611678A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/13Differential equations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Operations Research (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Feedback Control In General (AREA)

Abstract

A training method and apparatus, a data processing method, an electronic device, and a storage medium. The training processing method comprises the following steps: acquiring at least one group of training input parameters; inputting at least one group of training input parameters into a neural network for operation processing to obtain at least one group of first output parameters corresponding to the at least one group of training input parameters; acquiring at least one group of second output parameters corresponding to the at least one group of first output parameters respectively, wherein the at least one group of second output parameters are obtained by adding boundary condition constraints and initial condition constraints to the at least one group of first output parameters respectively; obtaining loss values of the neural network, which are calculated through a loss function corresponding to the neural network, for the at least one group of first output parameters based on the at least one group of second output parameters; parameters of the neural network are adjusted based on the loss values. The training method reduces the search space of the optimizer, does not need the optimizer to search for an optimal solution to meet initial and boundary conditions, improves the training efficiency and accelerates the training convergence.

Description

Training method and device, data processing method, electronic device and storage medium
Technical Field
Embodiments of the present disclosure relate to a training method of a neural network, a data processing method, a training apparatus of a neural network, a data processing apparatus, an electronic device, and a non-transitory computer-readable storage medium.
Background
With the improvement of GPU capability, the software and hardware ecosystem supporting deep learning has developed rapidly. Solving scientific computing problems with deep learning has become a trend, and methods for solving differential equations with deep learning are gradually emerging. Of particular interest is the physics-guided neural network (PINN) method, which has injected new vitality into the field of scientific computing. The core idea of the PINN method is to use a neural network to represent the solution of a differential equation and to continuously approximate the numerical solution by training the neural network, which brings advantages that differ from those of traditional numerical methods.
Disclosure of Invention
At least one embodiment of the present disclosure provides a method for training a neural network, including: acquiring at least one group of training input parameters; inputting the at least one group of training input parameters into the neural network for operation processing to obtain at least one group of first output parameters corresponding to the at least one group of training input parameters; acquiring at least one group of second output parameters corresponding to the at least one group of first output parameters respectively, wherein the at least one group of second output parameters are obtained by adding boundary condition constraints and initial condition constraints to the at least one group of first output parameters respectively; obtaining loss values of the neural network for the at least one group of first output parameters, which are calculated through a loss function corresponding to the neural network, based on the at least one group of second output parameters; adjusting parameters of the neural network based on the loss values.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, the neural network is trained to solve differential equations describing the motion and/or state of a target system.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, acquiring at least one set of second output parameters respectively corresponding to the at least one set of first output parameters includes: acquiring boundary condition constraint and initial condition constraint corresponding to the target system; for any group of first output parameters, determining a group of intermediate output parameters corresponding to the any group of first output parameters, wherein when a group of training input parameters corresponding to the any group of first output parameters indicates that the group of training input parameters is located at a preset region boundary, the corresponding group of intermediate output parameters is set to a value specified by the boundary condition constraint; and determining a group of second output parameters corresponding to the any group of first output parameters according to the corresponding group of intermediate output parameters and the initial condition constraint.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, for any set of first output parameters, determining a set of intermediate output parameters corresponding to that set of first output parameters includes: obtaining the corresponding set of intermediate output parameters according to the following formula:

$$\hat{x}(p,\tau;\theta) = G(p) + Q(p) \odot x(p,\tau;\theta)$$

wherein $\hat{x}(p,\tau;\theta)$ represents the corresponding set of intermediate output parameters, $(p,\tau)$ represents any one set of training input parameters, $p$ is in vector form, and $\tau$ represents the time corresponding to $p$; $G(p)$ represents the boundary condition constraint; $Q(p)$ is positively correlated with the distance between $p$ and the preset region boundary, and $Q(p)=0$ when $p$ is located at the preset region boundary; $\odot$ indicates the Hadamard product operation; and $x(p,\tau;\theta)$ represents the set of first output parameters corresponding to that set of training input parameters, where $\theta$ denotes the parameters of the neural network.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, each set of training input parameters includes a time sub-parameter, and determining a set of second output parameters corresponding to any set of first output parameters according to the corresponding set of intermediate output parameters and the initial condition constraint includes: when the time sub-parameter in that set of training input parameters indicates the initial time, setting the corresponding set of second output parameters to the value specified by the initial condition; when the time sub-parameter in that set of training input parameters indicates a time within a preset time range from the initial time, determining the corresponding set of second output parameters according to the corresponding set of intermediate output parameters and the initial condition constraint; and when the time sub-parameter in that set of training input parameters indicates a time beyond the preset time range, taking the corresponding set of intermediate output parameters as the corresponding set of second output parameters.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, the corresponding set of second output parameters is obtained according to the following formula:

$$\tilde{x}(p,\tau;\theta) = \big(1 - T(\tau;k)\big)\, x_0(p) + T(\tau;k)\, \hat{x}(p,\tau;\theta)$$

wherein $\tilde{x}(p,\tau;\theta)$ represents the corresponding set of second output parameters; $T(\tau;k)$ is a transition function used to adjust the preset time range through a transition parameter $k$, and $T(\tau;k)$ satisfies the following two conditions: $T(0;k) = 0$, and $T(\tau;k) \to 1$ as $\tau$ increases beyond the preset time range; $x_0(p)$ represents the initial condition constraint.
For example, in a training method of a neural network provided by at least one embodiment of the present disclosure, the transition parameter k is in a positive correlation with the length of the preset time range.
For example, in a training method for a neural network provided by at least one embodiment of the present disclosure, obtaining a loss value of the neural network for the at least one set of first output parameters, which is calculated by a loss function corresponding to the neural network, based on the at least one set of second output parameters includes: obtaining a residual error result corresponding to the at least one group of second output parameters according to the at least one group of second output parameters and the differential equation; and obtaining a loss value of the neural network according to the residual error result corresponding to the at least one group of second output parameters.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, the loss function corresponding to the neural network is expressed as:

$$\mathcal{L}(\theta) = \sum_{i=1}^{M} \lambda_i \, R_i\!\left(\tilde{x}(p,\tau;\theta)\right)$$

wherein $\mathcal{L}$ represents the loss function, $\theta$ represents the parameters of the neural network, $R(\cdot)$ represents a residual calculation, $(p,\tau)$ represents any of the at least one set of training input parameters, $\tilde{x}(p,\tau;\theta)$ represents the set of second output parameters corresponding to that set of training input parameters, $M$ represents the number of differential equations corresponding to the target system, $i$ is a positive integer, $\sum$ represents an accumulation operation, $\lambda_i$ represents the residual weight coefficient corresponding to the $i$th differential equation, and $R_i\!\left(\tilde{x}(p,\tau;\theta)\right)$ represents the residual result calculated from the $i$th differential equation.
For example, in a training method for a neural network provided by at least one embodiment of the present disclosure, obtaining a loss value of the neural network for the at least one set of first output parameters, which is calculated by a loss function corresponding to the neural network, based on the at least one set of second output parameters includes: calculating a residual error result corresponding to the at least one group of second output parameters according to the at least one group of second output parameters and the differential equation; calculating a data fitting loss value according to output data and label data, wherein the output data is output obtained by inputting training data into the neural network, and the label data is a standard value corresponding to the training data; and obtaining a loss value of the neural network according to the residual error result corresponding to the at least one group of second output parameters and the data fitting loss value.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, the neural network is a physical guiding neural network.
For example, in a training method for a neural network provided in at least one embodiment of the present disclosure, the target system is a fluid dynamic system, the differential equation is obtained by converting a partial differential equation under an euler view, and the differential equation describes the fluid dynamic system under a lagrange view.
For example, in a training method of a neural network provided in at least one embodiment of the present disclosure, acquiring the at least one set of training input parameters includes: acquiring at least one set of training input parameters under the Lagrangian view.
At least one embodiment of the present disclosure provides a data processing method, including: acquiring at least one group of input parameters corresponding to a target system; inputting the at least one set of input parameters into a neural network to obtain at least one set of intermediate output results; performing conversion processing on the at least one group of intermediate output results to obtain at least one group of final output results corresponding to the at least one group of intermediate output results one to one, wherein the at least one group of final output results are obtained by respectively adding boundary condition constraints and initial condition constraints to the at least one group of intermediate output results; wherein the neural network is at least partially trained according to the training method of any embodiment of the present disclosure.
At least one embodiment of the present disclosure provides a training apparatus for a neural network, including: an acquisition unit configured to acquire at least one set of training input parameters; the operation processing unit is configured to input the at least one group of training input parameters into the neural network for operation processing to obtain at least one group of first output parameters corresponding to the at least one group of training input parameters; the constraint processing unit is configured to acquire at least one group of second output parameters corresponding to the at least one group of first output parameters, wherein the at least one group of second output parameters are obtained by adding boundary condition constraints and initial condition constraints to the at least one group of first output parameters respectively; a loss value calculation unit configured to obtain, based on the at least one set of second output parameters, a loss value of the neural network for the at least one set of first output parameters, which is calculated by a loss function corresponding to the neural network; an adjusting unit configured to adjust a parameter of the neural network based on the loss value.
At least one embodiment of the present disclosure provides a data processing apparatus, including: the input acquisition unit is configured to acquire at least one group of input parameters corresponding to a target system; the processing unit inputs the at least one group of input parameters into the neural network to obtain at least one group of output parameters; wherein the neural network is trained at least in part according to the training method of any embodiment of the present disclosure.
At least one embodiment of the present disclosure provides an electronic device, including: a memory non-transiently storing computer-executable instructions; a processor configured to execute the computer-executable instructions, wherein the computer-executable instructions, when executed by the processor, implement the neural network training method according to any embodiment of the present disclosure or the data processing method according to any embodiment of the present disclosure.
At least one embodiment of the present disclosure provides a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer-executable instructions, and when executed by a processor, the computer-executable instructions implement a training method for a neural network according to any one embodiment of the present disclosure or a data processing method according to any one embodiment of the present disclosure.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description only relate to some embodiments of the present disclosure and do not limit the present disclosure.
Fig. 1 is a schematic flow chart of a training method of a neural network according to at least one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a two-dimensional fluid simulation space provided by an embodiment of the present disclosure;
FIG. 3 is a block diagram of a neural network provided by an embodiment of the present disclosure;
FIGS. 4A and 4B are graphs comparing simulation results of a free liquid level simulation provided by an embodiment of the present disclosure;
fig. 5A to 5D are comparison diagrams of simulation results of free liquid level simulation provided by another embodiment of the present disclosure;
fig. 6 is a schematic flow chart of a data processing method according to at least one embodiment of the disclosure;
fig. 7 is a schematic block diagram of a training apparatus provided in at least one embodiment of the present disclosure;
fig. 8 is a schematic block diagram of a data processing apparatus according to at least one embodiment of the present disclosure;
fig. 9 is a schematic diagram of an electronic device according to at least one embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a non-transitory computer-readable storage medium provided in at least one embodiment of the present disclosure;
fig. 11 is a schematic diagram of a hardware environment according to at least one embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used only to indicate relative positional relationships, and when the absolute position of the object being described changes, the relative positional relationships may also change accordingly. To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of some known functions and components have been omitted from the present disclosure.
With the development of science and technology, differential equations have developed into an independent discipline, and the laws of physics are continually expressed in the form of differential equations, for example the transport of matter, the propagation of sound in air, the diffusion of heat, the propagation of electromagnetic waves, the flow of water and air, and the matter wave of quantum mechanics described by the Schrödinger equation. The success of physics has extended the theory and techniques of differential equations to a wide range of scientific fields, such as chemistry, biology, and economics. Therefore, it can be said without doubt that differential equations play an extremely important role in the fields of science and engineering.
The conventional approach to solving a problem with differential equations is to analyze the problem, model it with differential equations, collect observation data, and configure the key parameters and conditions of the model (e.g., the initial conditions and boundary conditions of the differential equations). The differential equation is then solved by mathematical means; usually, the differential equation needs to be discretized in some sense and converted into a form that a computer can calculate, and finally the solution is obtained.
At present, solving the differential equation has been developed into a huge field, and many numerical calculation methods such as a difference method, a finite volume method, a finite element method, and the like have been developed.
The development of deep learning has injected new vitality into the traditional methods. Compared with the traditional numerical method, the physical guide neural network method is a meshless method and has the advantages that:
1) better adaptability, and can fully utilize the collected data;
2) the method is easy to implement through the existing deep-learning tool chain, and makes better use of a GPU (Graphics Processing Unit) to complete parallel computation;
3) the method is more readily applicable to integro-differential equations, stochastic partial differential equations, and fractional partial differential equations, and can even solve some inverse problems that are mathematically ill-posed.
For example, consider the simulation of a fluid level, also called a free surface. A fluid level is the interface between different fluids; a common example is the water level, i.e., the interface between water and air. Common liquid level problems include the sloshing of fluids in containers during transport, the motion of ocean waves, dam breaks, and so on.
Conventional fluid mechanics numerical simulation methods can be broadly divided into two types, mesh-based and mesh-free; specifically, mesh-based algorithms include finite difference and finite element methods, and mesh-free algorithms include smoothed particle methods. Mesh-based methods are suitable for computations from the Eulerian viewpoint, while mesh-free methods are better suited to computations from the Lagrangian viewpoint, because the motion of fluid particles can severely distort the mesh. In general, mesh-based methods can more accurately resolve multi-scale phenomena of the flow, such as turbulence; mesh-free methods can better compute positional information of the particles, such as gas-liquid interfaces and shock waves.
Owing to the rapid development of deep learning in recent years, fluid simulation methods based on neural networks have also gradually emerged. These methods use the universal approximation theorem of neural networks (a neural network can approximate arbitrarily complex functions to arbitrary accuracy) to fit fluid observation data and learn the physics from the data in order to perform simulation and prediction. The physics-guided neural network, which is based on physical prior knowledge, is currently a novel fluid simulation algorithm. Unlike other neural network algorithms, the physics-guided neural network approach can model the fluid and make predictions without observation data, by constraining the neural network with the system of partial differential equations describing the fluid physics.
The ability to fit the physics-describing equations directly, without the use of observation data, makes it a popular meshless method for computing fluids. Compared with the iterative algorithms of traditional numerical methods, the physics-guided neural network directly fits the whole spatial and temporal domain with a neural network. Unlike conventional methods, which can only use approximate derivatives, physics-guided neural networks use exact derivatives within an automatic differentiation framework, so that calculations, especially simulations of nonlinear terms, can be performed more accurately. In addition, the final result obtained by the physics-guided neural network is a continuous function, so the value at any position can be obtained without interpolation error. One advantage of this is that the simulation resolution can be modified arbitrarily without recalculation.
The basic idea of the physics-guided neural network method is to represent the solution to be found with a neural network, convert the initial conditions, boundary conditions, and partial differential equation (system) constraints into the loss function of the neural network, and optimize the network by gradient descent, so that the represented solution meets the physical constraint requirements.
However, the boundary conditions and initial conditions of the current physical guide neural network method cannot be well satisfied, which causes huge calculation errors. For example, the loss function of a physical guide neural network can be expressed as:
$$L = R_1 + \alpha R_2 + \beta R_3 \qquad \text{(Formula 1)}$$

Here, $L$ denotes the loss function, $R_1$ denotes the residual of the partial differential equation(s), $R_2$ denotes the residual corresponding to the boundary conditions, $R_3$ denotes the residual corresponding to the initial conditions, and $\alpha$ and $\beta$ are weight coefficients used to help the loss function converge during optimization.
$R_2$ and $R_3$ correspond to data fitting, while $R_1$ corresponds to a regularization of the neural network; as a result, the gradient of the loss function decreases unevenly and changes too quickly (stiffness), which in turn affects the convergence of the whole optimization process and leads to poor final results.
For example, when a fluid level simulation is performed using a physics-guided neural network, the method cannot satisfy the free surface condition well (for example, that the pressure on the free surface is zero), so its simulation effect on fluids with a free liquid level is poor.
In addition, the current physics-guided neural network method usually simulates the fluid from the Eulerian viewpoint, under which only velocity field information can be obtained; if positions need to be computed, extra calculation is required, and such position calculation currently cannot be realized directly within a physics-guided neural network framework. Moreover, when the current physics-guided neural network method is used for fluid simulation, a number of additional initial points and boundary points must be arranged to enforce the initial conditions and boundary conditions, where the initial points are used to calculate $R_3$ and the boundary points are used to calculate $R_2$. Such calculations occupy more resources and result in models that do not scale well.
At least one embodiment of the disclosure provides a training method and device for a neural network, a data processing method and device, an electronic device, and a storage medium. The training method of the neural network comprises the following steps: acquiring at least one group of training input parameters; inputting at least one group of training input parameters into a neural network for operation processing to obtain at least one group of first output parameters corresponding to the at least one group of training input parameters; acquiring at least one group of second output parameters corresponding to the at least one group of first output parameters respectively, wherein the at least one group of second output parameters are obtained by adding boundary condition constraints and initial condition constraints to the at least one group of first output parameters respectively; obtaining loss values of the neural network, which are calculated through a loss function corresponding to the neural network, for the at least one group of first output parameters based on the at least one group of second output parameters; parameters of the neural network are adjusted based on the loss values.
The training method is characterized in that initial condition constraints and boundary condition constraints are added to the output of the neural network in a forced mode in a training stage, so that the final output automatically meets the initial conditions and the boundary conditions, loss values are calculated by using output results meeting the initial conditions and the boundary conditions to optimize the neural network, the search space of an optimizer is reduced, the optimizer is not required to search an optimal solution to meet the initial conditions and the boundary conditions, the accuracy of the neural network results is improved, the training speed is accelerated, and the convergence of a loss function is accelerated; in addition, a plurality of initial points and boundary points do not need to be additionally arranged, the operation efficiency is improved, the resource consumption is reduced, and the expandability of the model is improved.
The training method of the neural network provided in at least one embodiment of the present disclosure may be applied to a training apparatus of the neural network provided in at least one embodiment of the present disclosure, and the training apparatus of the neural network may be configured on an electronic device. The electronic device may be a personal computer, a mobile terminal, and the like, and the mobile terminal may be a hardware device such as a mobile phone and a tablet computer.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings, but the present disclosure is not limited to these specific embodiments.
Fig. 1 is a schematic flow chart of a training method of a neural network according to at least one embodiment of the present disclosure.
As shown in fig. 1, a training method of a neural network according to at least one embodiment of the present disclosure includes steps S10 to S50.
At step S10, at least one set of training input parameters is obtained.
In step S20, the at least one set of training input parameters is input into the neural network for operation processing, and at least one set of first output parameters corresponding to the at least one set of training input parameters is obtained.
In step S30, at least one set of second output parameters corresponding to the at least one set of first output parameters is obtained.
For example, the at least one set of second output parameters is obtained by adding boundary condition constraints and initial condition constraints to the at least one set of first output parameters, respectively.
In step S40, based on the at least one set of second output parameters, the loss values of the neural network calculated by the loss function corresponding to the neural network for the at least one set of first output parameters are obtained.
In step S50, parameters of the neural network are adjusted based on the loss values.
For example, the steps S10-S50 are repeatedly performed until the loss value satisfies a predetermined convergence condition, at which time a trained neural network is obtained.
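The following is a minimal PyTorch sketch of the loop formed by steps S10-S50; `sample_inputs`, `apply_constraints`, and `pde_residuals` are assumed placeholder functions for the sampling, constraint, and residual computations described later in this disclosure, and the mean-squared residual and hyperparameter values are illustrative choices rather than values specified here.

```python
import torch

def train(model, sample_inputs, apply_constraints, pde_residuals,
          weights, epochs=10000, lr=1e-3, tol=1e-5):
    """Sketch of steps S10-S50: sample inputs, forward pass, hard-constrain
    the outputs, compute the residual loss, and update the parameters."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        p, tau = sample_inputs()                       # step S10: training input parameters
        p.requires_grad_(True)
        tau.requires_grad_(True)
        x_first = model(p, tau)                        # step S20: first output parameters
        x_second = apply_constraints(x_first, p, tau)  # step S30: add BC/IC constraints
        residuals = pde_residuals(x_second, p, tau)    # residual of each differential equation
        # step S40: weighted accumulation of residuals (mean squared residual as one choice)
        loss = sum(w * r.pow(2).mean() for w, r in zip(weights, residuals))
        optimizer.zero_grad()
        loss.backward()                                # step S50: adjust network parameters
        optimizer.step()
        if loss.item() < tol:                          # predetermined convergence condition
            break
    return model
```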
For example, the neural network is trained to solve the differential equation; that is, the trained neural network can represent the solution of the differential equation and directly output a solution that meets the physical constraint requirements.
For example, the differential equation may be a single differential equation or a system of differential equations, and may include integro-differential equations, stochastic partial differential equations, fractional partial differential equations, etc.; for example, a trained neural network can characterize a system of partial differential equations (PDEs), etc., which is not limited in this disclosure.
For example, the differential equation can describe the motion or state of the target system. For example, the target system may be a physical system, a chemical system, a biological system, and the like. For example, the differential equation may characterize the motion of a substance in the target system, the state change of the target system, etc., for example, referring to the foregoing, the differential equation may be used to describe the propagation of sound in air, the diffusion of heat, the flow process of fluid, etc., the present disclosure does not specifically limit the differential equation and the target system, and the training method of the neural network provided in at least one embodiment of the present disclosure may be applied to any phenomenon that can be simulated using the differential equation.
For example, the neural network may be a physics-guided neural network; the neural network may also adopt other neural network structures based on prior knowledge of physics, etc., and the type of the neural network is not particularly limited by the present disclosure. For example, the network structure of the physics-guided neural network may be a fully connected network; of course, any other feasible network structure may be adopted, and the disclosure does not specifically limit this.
For example, in step S10, a set of training input parameters may include one or more input sub-parameters. The input sub-parameters may be determined from the independent variables of the differential equation.
For example, when the target system is a two-dimensional fluid dynamics system, if the set of training input parameters is training input parameters under euler viewing angle, the set of training input parameters can be expressed as: (i, j, t), where input sub-parameter i represents the abscissa of the fluid particle, input sub-parameter j represents the ordinate of the fluid particle, and input sub-parameter t represents time, i.e. a time sub-parameter.
For example, when the target system is a two-dimensional fluid dynamics system, if the set of training input parameters is the training input parameters at the lagrangian perspective, the set of training input parameters may be represented as: (a, b, t), where the input sub-parameter a represents the label abscissa of the fluid particle, the input sub-parameter b represents the label ordinate of the fluid particle, and the input sub-parameter t represents the time, i.e. the time sub-parameter.
For example, the set of first output parameters and the set of second output parameters comprise a plurality of output sub-parameters, which are determined according to the dependent variable of the differential equation, i.e. the numerical solution to be solved by the differential equation.
For example, when the target system is a two-dimensional fluid dynamics system, if a set of training input parameters is (i, j, t), the output sub-parameters can be expressed as: (u, v, P), the output sub-parameter u representing the lateral velocity of the fluid particle, the output sub-parameter v representing the longitudinal velocity of the fluid particle, and the output sub-parameter P representing the pressure of the fluid particle.
For example, when the target system is a two-dimensional fluid dynamics system, if a set of training input parameters is (a, b, t), the output sub-parameters can be expressed as: (i, j, u, v, P), the output sub-parameter i represents the abscissa of the fluid particle, the output sub-parameter j represents the ordinate of the fluid particle, the output sub-parameter u represents the transverse velocity of the fluid particle, the output sub-parameter v represents the longitudinal velocity of the fluid particle, and the output sub-parameter P represents the pressure of the fluid particle.
That is, the set of training input parameters may be selected independent variables in the differential equation, and the set of first output parameters and the set of second output parameters may be derived dependent variables in the differential equation. In the training stage, the set of first output parameters corresponding to the set of training input parameters obtained in step S20 is the values of the dependent variables estimated based on the parameters of the neural network, and the set of second output parameters corresponding to the set of first output parameters obtained in step S30 is a set (ansatz) that can satisfy the boundary conditions and the initial conditions.
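As an illustration of the Lagrangian input/output structure described above, a minimal fully connected network could be sketched as follows; the class name, width, depth, and Tanh activation are assumptions for illustration, not values specified by the disclosure.

```python
import torch
import torch.nn as nn

class FluidPINN(nn.Module):
    """Fully connected network mapping Lagrangian inputs (a, b, t)
    to outputs (i, j, u, v, P). A minimal sketch only."""
    def __init__(self, hidden: int = 64, depth: int = 4):
        super().__init__()
        dims = [3] + [hidden] * depth + [5]   # 3 input sub-parameters, 5 output sub-parameters
        layers = []
        for k in range(len(dims) - 1):
            layers.append(nn.Linear(dims[k], dims[k + 1]))
            if k < len(dims) - 2:
                layers.append(nn.Tanh())      # smooth activation, convenient for differentiation
        self.net = nn.Sequential(*layers)

    def forward(self, a, b, t):
        return self.net(torch.stack([a, b, t], dim=-1))
```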
For example, when the target system is a fluid dynamics system, the training neural network may simultaneously set a plurality of fluid particles in a space in which a fluid motion is to be simulated, where each fluid particle corresponds to a set of training input parameters, and train the neural network based on a plurality of sets of training input parameters corresponding to the plurality of fluid particles.
It should be noted that the number of groups of the training input parameters, the type of the sub-parameters included in the training input parameters, and the like may be specifically set according to needs, and the disclosure does not specifically limit this.
For example, after obtaining at least one set of first output parameters of the neural network output through step S20, initial conditional constraints and boundary conditional constraints are added to the first output parameters, so that the final output of the neural network already satisfies the initial conditional constraints and the boundary conditional constraints.
For example, step S30 may include: acquiring boundary condition constraint and initial condition constraint corresponding to a target system; determining a group of intermediate output parameters corresponding to any group of first output parameters aiming at any group of first output parameters, wherein when a group of training input parameters corresponding to any group of first output parameters indicate that a group of training input parameters are positioned at the boundary of a preset area, the corresponding group of intermediate output parameters are set as values specified by boundary condition constraint; and determining a group of second output parameters corresponding to any group of first output parameters according to the corresponding group of intermediate output parameters and the initial condition constraint.
For example, the boundary condition refers to a change rule of a variable or a derivative thereof solved on the boundary of the preset area along with time and place, and the boundary condition constraint indicates a constraint condition of the target system on the boundary of the preset area. For example, the initial conditional constraints indicate the initial state of the entire target system. For differential equations, the initial condition constraints and the boundary condition constraints determined are a prerequisite for the adaptation (well-posed) of the overall problem, so that differential equations can exist and have a unique solution.
The boundary condition constraints and initial condition constraints may be self-specified according to the target system and the specific phenomena to be simulated, and may be expressed in any known form, which is not limited by the present disclosure.
For example, in step S10, a plurality of sets of training input parameters are obtained, wherein any one of the sets of training input parameters is represented by (p, τ), where τ represents a temporal sub-parameter and p is in the form of a vector, representing other independent variables in the differential equation except for the temporal sub-parameter. For example, p represents an input coordinate space or feature of a differential equation.
For example, in step S20, a set of training parameters (p, τ) is input to the neural network for computation, resulting in a set of first output parameters x (p, τ; θ) corresponding to the set of training parameters (p, τ), where θ represents a parameter of the neural network.
For example, in a differential equation the dependent variable can be expressed as $x(p,\tau)$, where $x(p,\tau)$ can be understood as the true numerical solution corresponding to a set of training parameters $(p,\tau)$, and $x(p,\tau;\theta)$ represents the estimate of $x(p,\tau)$ obtained through the neural network. For example, for the set of first output parameters $x(p,\tau;\theta)$, when the boundary condition constraint is applied, a set of intermediate output parameters $\hat{x}(p,\tau;\theta)$ corresponding to the set of first output parameters is determined according to the boundary condition constraint.
For example, the set of intermediate output parameters is determined according to the following formula 2:

$$\hat{x}(p,\tau;\theta) = G(p) + Q(p) \odot x(p,\tau;\theta) \qquad \text{(Formula 2)}$$

Here, $G(p)$ represents the boundary condition constraint; $Q(p)$ is positively correlated with the distance between $p$ and the preset region boundary, that is, the closer $p$ is to the preset region boundary, the smaller $Q(p)$ is, and $Q(p)=0$ when $p$ is located at the preset region boundary; $\odot$ indicates the Hadamard product operation.
For example, $G(p)$ is in the form of a vector group, and $G(p)$ may include a plurality of conditions so as to set different boundary values for a plurality of variables in the target system.
For example, as shown in formula 2, when a set of training input parameters $(p,\tau)$ is located at the preset region boundary, i.e., $p$ is located at the preset region boundary, $Q(p)=0$ and therefore $\hat{x}(p,\tau;\theta) = G(p)$; that is, the corresponding set of intermediate output parameters is set to the value $G(p)$ specified by the boundary condition constraint.
Certainly, formula 2 shows a possible implementation manner, and a person skilled in the art may set other expression forms as long as "when a set of training input parameters corresponding to any set of first output parameters indicates that a set of training input parameters is located at a preset region boundary, a corresponding set of intermediate output parameters is set to a value specified by boundary condition constraints", so that the intermediate output parameters corresponding to all sets of training input parameters all satisfy the boundary condition constraints, no boundary point needs to be additionally set, resource occupation is reduced, a search space of an optimizer is reduced, training efficiency is improved, and model convergence is accelerated.
Thus, the corresponding set of intermediate output parameters $\hat{x}(p,\tau;\theta)$ obtained through the above steps is constructed so as to satisfy the boundary conditions.
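As an illustration, the hard boundary constraint of formula 2 could be applied in code roughly as follows; the unit-square domain and the particular choices of G(p) and Q(p) are illustrative assumptions, not choices made by the disclosure.

```python
import torch

def G(p):
    """Boundary condition constraint: the value the solution must take on the boundary.
    Illustrative choice: zero on the whole boundary."""
    return torch.zeros(p.shape[0], 5)      # 5 output sub-parameters, as an example

def Q(p):
    """Vanishes on the boundary of the preset region and grows with the distance to it.
    Illustrative choice for the unit square [0, 1] x [0, 1]."""
    a, b = p[:, 0], p[:, 1]
    q = a * (1 - a) * b * (1 - b)
    return q.unsqueeze(-1)                 # broadcast over the output sub-parameters

def intermediate_output(x_first, p):
    """Formula 2: x_hat = G(p) + Q(p) ⊙ x(p, tau; theta)."""
    return G(p) + Q(p) * x_first           # elementwise (Hadamard) product
```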
For example, for the set of first output parameters $x(p,\tau;\theta)$, the initial condition constraint is applied next: based on the set of intermediate output parameters $\hat{x}(p,\tau;\theta)$ and the initial condition constraint, a set of second output parameters $\tilde{x}(p,\tau;\theta)$ corresponding to the set of first output parameters $x(p,\tau;\theta)$ is determined.
For example, determining the set of second output parameters corresponding to any set of first output parameters according to the corresponding set of intermediate output parameters and the initial condition constraint may include: when the time sub-parameter $\tau$ in a set of training input parameters $(p,\tau)$ indicates the initial time, setting the corresponding set of second output parameters $\tilde{x}(p,\tau;\theta)$ to the value specified by the initial condition; when the time sub-parameter $\tau$ indicates a time within a preset time range from the initial time, determining the corresponding set of second output parameters $\tilde{x}(p,\tau;\theta)$ from the corresponding set of intermediate output parameters $\hat{x}(p,\tau;\theta)$ and the initial condition constraint; and after the time sub-parameter $\tau$ indicates a time beyond the preset time range, taking the corresponding set of intermediate output parameters $\hat{x}(p,\tau;\theta)$ as the corresponding set of second output parameters $\tilde{x}(p,\tau;\theta)$.
For example, the initial condition constraint typically specifies the initial state of the target system. For example, when the time sub-parameter $\tau$ indicates the initial time, e.g., $\tau = 0$, the corresponding set of second output parameters $\tilde{x}(p,0;\theta)$ is set to the value specified by the initial condition, so that the second output parameters satisfy the initial condition constraint at the initial time.
For example, a transition function $T(\tau;k)$ may be introduced so that the second output parameter $\tilde{x}(p,\tau;\theta)$ equals the value specified by the initial condition constraint at the initial time and, as time passes, the influence of the initial condition constraint decreases; that is, within the preset time range the second output parameter $\tilde{x}(p,\tau;\theta)$ gradually transitions from the initial condition to the intermediate output parameters $\hat{x}(p,\tau;\theta)$.
For example, the set of second output parameters $\tilde{x}(p,\tau;\theta)$ is determined according to the following formula 3:

$$\tilde{x}(p,\tau;\theta) = \big(1 - T(\tau;k)\big)\, x_0(p) + T(\tau;k)\, \hat{x}(p,\tau;\theta) \qquad \text{(Formula 3)}$$

Here, $T(\tau;k)$ is a transition function used to adjust the length of the preset time range through a transition parameter $k$, and $T(\tau;k)$ satisfies the following two conditions: $T(0;k) = 0$, and $T(\tau;k) \to 1$ as $\tau$ increases; $x_0(p)$ represents the initial condition constraint.
For example, the transition parameter $k$ is positively correlated with the length of the preset time range: the larger the transition parameter $k$, the more slowly the second output parameter $\tilde{x}(p,\tau;\theta)$ transitions away from the initial condition and the longer the preset time range; conversely, the smaller the transition parameter $k$, the faster the second output parameter $\tilde{x}(p,\tau;\theta)$ transitions away from the initial condition and the shorter the preset time range.
For example, when the time sub-parameter $\tau$ in a set of training input parameters $(p,\tau)$ indicates the initial time, i.e., $\tau = 0$, then $T(0;k) = 0$ and $\tilde{x}(p,0;\theta) = x_0(p)$; thus the set of second output parameters at the initial time is set to the value specified by the initial condition $x_0(p)$.
For example, when the time sub-parameter $\tau$ in the set of training input parameters $(p,\tau)$ indicates a time within the preset time range from the initial time, $0 < T(\tau;k) < 1$, and the set of second output parameters is determined according to formula 3.
For example, when the time sub-parameter $\tau$ of a set of training input parameters $(p,\tau)$ indicates a time beyond the preset time range from the initial time, $T(\tau;k) = 1$ and $\tilde{x}(p,\tau;\theta) = \hat{x}(p,\tau;\theta)$; that is, the set of intermediate output parameters is taken as the set of second output parameters.
For example, T (τ; k) is a known form of transition function, and the transition parameter k is preset and remains constant during training of the neural network.
For example, the transition function $T(\tau;k)$ and the transition parameter $k$ may both be scalars, and in one embodiment the transition function $T(\tau;k)$ may be expressed as formula 4 below:
(Formula 4: a specific scalar transition function satisfying $T(0;k)=0$ and $T(\tau;k)\to 1$; the exact expression appears as an image in the original publication.)
Of course, the form of the transition function and the specific value of the transition parameter may be set according to actual needs, and the present disclosure does not limit this.
Certainly, formula 3 shows a possible implementation manner, and those skilled in the art may set other expression forms, and it is sufficient to set a group of corresponding second output parameters at an initial time as a value specified by an initial condition, so that the second output parameters corresponding to all groups of training input parameters all satisfy initial condition constraints, an initial point does not need to be additionally set, resource occupation is reduced, a search space of an optimizer is reduced, training efficiency is improved, and model convergence is accelerated.
Thus, the set of second output parameters $\tilde{x}(p,\tau;\theta)$ obtained through the above steps is an ansatz that simultaneously satisfies the boundary conditions and the initial conditions.
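Continuing the sketch above, formulas 2 and 3 can be combined into one constraint step. The exponential transition function, the zero initial state, and the reuse of the illustrative `intermediate_output` helper from the earlier example are all assumptions for illustration, since the disclosure leaves the concrete form of T(τ; k) and of the initial condition open.

```python
import torch

def T(tau, k=1.0):
    """Assumed transition function: T(0) = 0 and T -> 1 as tau grows.
    Larger k stretches the transition (longer preset time range)."""
    return 1.0 - torch.exp(-tau / k)

def x0(p):
    """Initial condition constraint: the prescribed state at tau = 0.
    Illustrative choice: all output sub-parameters start at zero."""
    return torch.zeros(p.shape[0], 5)

def second_output(x_first, p, tau, k=1.0):
    """Formula 3: blend the initial condition with the boundary-constrained
    intermediate output, so BC and IC are satisfied by construction."""
    x_hat = intermediate_output(x_first, p)   # formula 2 (boundary conditions)
    t = T(tau, k).unsqueeze(-1)
    return (1.0 - t) * x0(p) + t * x_hat      # formula 3 (initial conditions)
```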
After at least one set of second output parameters is obtained, loss values are obtained according to the second output parameters.
For example, in some embodiments, the neural network may be trained directly without the need for observation data. For example, step S40 may include: obtaining a residual error result corresponding to at least one group of second output parameters according to at least one group of second output parameters and a differential equation; and obtaining a loss value of the neural network according to the residual error result corresponding to the at least one group of second output parameters.
The loss function corresponding to the neural network is expressed as the following formula 5:

$$\mathcal{L}(\theta) = \sum_{i=1}^{M} \lambda_i \, R_i\!\left(\tilde{x}(p,\tau;\theta)\right) \qquad \text{(Formula 5)}$$

wherein $\mathcal{L}$ represents the loss function, $\theta$ represents the parameters of the neural network, $R(\cdot)$ represents a residual calculation, $(p,\tau)$ represents any of the at least one set of training input parameters, $\tilde{x}(p,\tau;\theta)$ represents the set of second output parameters corresponding to that set of training input parameters, $M$ represents the number of differential equations corresponding to the target system, $i$ is a positive integer, $\sum$ represents an accumulation operation, $\lambda_i$ represents the residual weight coefficient corresponding to the $i$th differential equation, and $R_i\!\left(\tilde{x}(p,\tau;\theta)\right)$ represents the residual result calculated from the $i$th differential equation.
For example, a set of second output parameters $\tilde{x}(p,\tau;\theta)$ corresponding to a set of training input parameters $(p,\tau)$ is obtained according to formula 2 and formula 3 above, $x(p,\tau)$ in the differential equation is replaced by $\tilde{x}(p,\tau;\theta)$ to obtain the differential equation to be fitted, and a loss value is calculated with formula 5 in combination with the differential equation to be fitted so as to adjust the parameters of the neural network.
Specifically, the dependent variable in the differential equation is expressed as $x(p,\tau)$. Referring to formula 2 and formula 3 gives

$$\tilde{x}(p,\tau;\theta) = \big(1 - T(\tau;k)\big)\, x_0(p) + T(\tau;k)\,\big(G(p) + Q(p)\odot x(p,\tau;\theta)\big)$$

(the meanings of the individual parameters are as given for formula 2 and formula 3 and are not repeated here), and $x(p,\tau)$ in the original differential equation is replaced by $\tilde{x}(p,\tau;\theta)$, from which the differential equation to be fitted is derived.
That is, the differential equation to be fitted looks different in structure from the original differential equation, but the two express the same thing, i.e., both describe the same state and/or motion of the same target system. Accordingly, the quantities appearing in the original differential equation, such as $x(p,\tau)$, become the corresponding estimates obtained through the neural network, such as $\tilde{x}(p,\tau;\theta)$.
Each set of training input parameters $(p,\tau)$ and the corresponding set of second output parameters $\tilde{x}(p,\tau;\theta)$ are substituted into the differential equation to be fitted, the residual result corresponding to each set of second output parameters is calculated, and the residual results corresponding to all sets of second output parameters are accumulated to obtain the loss value of the neural network.
For example, the residual result corresponding to each group of second output parameters may be calculated according to equation 5.
Referring to formula 5 and formula 1, the training method for the neural network provided in at least one embodiment of the present disclosure does not need to perform additional data fitting on the initial condition and the boundary condition, reduces resource occupation caused by additional setting of boundary points and the initial point, avoids a phenomenon that convergence of the whole optimization process is affected and a final effect is poor due to uneven gradient descent and too fast change (rigidity) of a loss function in an existing training mode, improves training efficiency, and accelerates model convergence.
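As a sketch of how such residual results and the formula 5 loss might be computed in practice with automatic differentiation; the 1-D advection equation used here, the assumption of a scalar solution field, and the helper names are purely illustrative, not equations or functions specified by the disclosure.

```python
import torch

def pde_residual(constrained_output_fn, p, tau, c=1.0):
    """Residual R(x~) of an illustrative advection equation dx/dtau + c * dx/dp = 0,
    evaluated on the constrained (second) output via automatic differentiation.
    Assumes a scalar solution value per sample point."""
    p = p.clone().requires_grad_(True)
    tau = tau.clone().requires_grad_(True)
    x_tilde = constrained_output_fn(p, tau)          # second output parameters
    dx_dp, dx_dtau = torch.autograd.grad(x_tilde.sum(), (p, tau), create_graph=True)
    return dx_dtau + c * dx_dp                       # residual of the illustrative PDE

def loss_formula_5(residuals, lambdas):
    """Formula 5: weighted accumulation of the per-equation residual results
    (the mean squared residual is used here as one concrete choice)."""
    return sum(lam * r.pow(2).mean() for lam, r in zip(lambdas, residuals))
```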
For example, in some embodiments, in addition to the loss of physical constraints, a loss value may be calculated to train the neural network in conjunction with a small amount of observed data during the training process.
For example, step S40 may include: calculating a residual error result corresponding to the at least one group of second output parameters according to the at least one group of second output parameters and the differential equation; calculating a data fitting loss value according to output data and label data, wherein the output data is output obtained by inputting training data into the neural network, and the label data is a standard value corresponding to the training data; and obtaining a loss value of the neural network according to the residual error result corresponding to the at least one group of second output parameters and the data fitting loss value.
For example, at this time, the residual result corresponding to at least one set of second output parameters may be calculated according to formula 5; calculating a data fitting loss value according to a small amount of observation data with labels (namely training data with corresponding label data), for example, inputting the training data into a neural network to obtain output data, and calculating the data fitting loss value according to the output data and the label data; and calculating the sum or weighted sum of the residual result and the data fitting loss value corresponding to the at least one group of second output parameters, and taking the sum or weighted sum as the loss value of the neural network.
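A minimal sketch of combining the two terms, reusing the illustrative `loss_formula_5` helper from the previous sketch; the weighting of the data-fitting term is an assumed choice.

```python
import torch.nn.functional as F

def total_loss(residuals, lambdas, output_data, label_data, data_weight=1.0):
    """Physics residual loss (formula 5) plus a data-fitting loss on labeled observations."""
    physics = loss_formula_5(residuals, lambdas)     # residual term from formula 5
    data_fit = F.mse_loss(output_data, label_data)   # data-fitting term on labeled samples
    return physics + data_weight * data_fit
```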
Thereafter, in step S50, the loss function is optimized using an optimizer. For example, a first-order optimizer such as Adam (Adaptive Moment Estimation) may be used, or a second-order (quasi-Newton) optimizer such as L-BFGS-B (Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Bound constraints) may be used. The two may also be combined: for example, when the first-order optimizer can no longer make progress, the second-order optimizer takes over, so as to obtain a better optimization result and find the optimal parameter values θ of the neural network.
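As an illustration only, a minimal PyTorch sketch of this two-stage optimization is given below; the learning rate and step counts are placeholders, and PyTorch ships plain L-BFGS rather than the bound-constrained L-BFGS-B variant, so the second stage is only an approximation of the scheme described above.

import torch

def train_two_stage(model, loss_fn, adam_steps=5000, lbfgs_steps=500):
    # Stage 1: first-order optimization with Adam.
    adam = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(adam_steps):
        adam.zero_grad()
        loss = loss_fn(model)
        loss.backward()
        adam.step()

    # Stage 2: quasi-Newton refinement once the first-order optimizer stalls.
    lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=lbfgs_steps,
                              line_search_fn="strong_wolfe")

    def closure():
        lbfgs.zero_grad()
        loss = loss_fn(model)
        loss.backward()
        return loss

    lbfgs.step(closure)
    return model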
For example, when the target system is a fluid dynamics system, current fluid dynamics simulation, especially free-surface simulation, generally uses a differential equation (or equation set) under the Euler view. The independent variables of the differential equation under the Euler view include the position information of the fluid particles (such as the abscissa i and the ordinate j) and the time t, and the dependent variables include the velocity information of the fluid particles (such as the transverse velocity u and the longitudinal velocity v), the density information ρ, and the pressure P. The Euler view is suitable for problems that require observation of the fluid information at a given position. However, when a free liquid surface is simulated, the positions of the fluid particles under the Euler view must be computed separately, and tracking or capturing the liquid surface is not easy to realize within current neural network frameworks.
Therefore, in at least one embodiment of the present disclosure, the partial differential equation under the Euler view is converted into a differential equation under the Lagrangian view, so that the differential equation describes the fluid dynamics system under the Lagrangian view. The trained neural network can then directly output the position information of the fluid particles, the free-surface condition can easily be imposed as a boundary condition, and the simulation of fluids with a free liquid surface by the neural network is improved.
The following describes, taking the liquid-level simulation of a liquid satisfying the incompressible, inviscid assumption as an example, an implementation of the training method for a neural network according to at least one embodiment of the present disclosure.
It should be noted that, for ease of understanding and description, the following embodiments are described using a schematic simulation scenario based on the Euler equations in two or three dimensions. However, those skilled in the art may set up the simulation scenario with the Navier-Stokes equations instead, for example when the viscosity of the liquid is considered, and the initial conditions and boundary conditions may likewise be changed according to actual needs, which is not limited in the present disclosure.
For example, for liquids like water, the assumption of incompressibility and zero viscosity can be made physically, and thus the motion of such liquids can be described by the incompressible Euler equations. In the case of gravity alone, the two-dimensional partial differential equation set (Euler equations) describing the fluid dynamics system is shown in formula 6:
∂u/∂t + u·∂u/∂i + v·∂u/∂j = -(1/ρ)·∂P/∂i
∂v/∂t + u·∂v/∂i + v·∂v/∂j = -(1/ρ)·∂P/∂j - g
∂u/∂i + ∂v/∂j = 0    (formula 6)
Here, ∂ denotes the partial derivative operator, i represents the abscissa of the fluid particle, j represents the ordinate of the fluid particle, t represents time, u represents the transverse velocity of the fluid particle, v represents the longitudinal velocity of the fluid particle, g represents gravity, P represents the pressure of the fluid particle, and ρ represents the density of the fluid particle. Under the Euler view, for example, the independent variables may be the position coordinates (i, j) and time t of the fluid particles, and the dependent variables may be the velocity (u, v) and pressure P of the fluid particles.
This equation set describes the fluid from the Euler perspective and contains only information about the fluid velocity field. Thus the specific locations of the fluid particles, including the free surface, require additional calculation to obtain, whereas this information is directly available under the Lagrangian view. Therefore, the Euler equations under the Euler view in formula 6 can be converted into differential equations under the Lagrangian view, as shown in formula 7 below:
[Formula 7: the equation set of formula 6 rewritten under the Lagrangian view, with the position (i, j), velocity (u, v), and pressure P of each fluid particle expressed as functions of the particle label (a, b) and time τ.]
Here, (a, b) denotes the label of the fluid particle, (i, j) denotes the position coordinates of the fluid particle, (u, v) denotes the velocity of the fluid particle, τ denotes time (a different symbol is used to distinguish the two views), P denotes the pressure of the fluid particle, and subscript letters denote partial derivatives, e.g., i_τ represents the partial derivative of the position abscissa i of the fluid particle with respect to time τ.
It should be noted that the above conversion is one possible mapping manner; those skilled in the art may also adopt other forms of converted differential equations according to actual needs, and the disclosure is not limited in this respect.
For example, under the Lagrangian view, the independent variables are the label (a, b) and time τ of the fluid particle, and the dependent variables are the position (i, j), the velocity (u, v), and the pressure P of the fluid particle.
For example, in step S10, a plurality of sets of training input parameters corresponding to a plurality of fluid particles are obtained, each set of training input parameters including the label sub-parameter (a, b) and the time sub-parameter τ of the corresponding fluid particle.
Then, in step S20, the plurality of sets of training input parameters are input into the neural network and subjected to arithmetic processing, so that a plurality of sets of first output parameters x(p, τ; θ) corresponding to the plurality of sets of training input parameters are obtained, where p = (a, b). For example, a set of first output parameters x(p, τ; θ) = {i(p, τ; θ), j(p, τ; θ), u(p, τ; θ), v(p, τ; θ), P(p, τ; θ)}.
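As an illustration only, a minimal PyTorch sketch of such a fully-connected network is given below; the depth, width, and tanh activation are assumptions for illustration and are not prescribed by this disclosure.

import torch

class FluidNet(torch.nn.Module):
    """Maps a set of training input parameters (a, b, tau) to the
    first output parameters x(p, tau; theta) = (i, j, u, v, P)."""

    def __init__(self, hidden=64, layers=4):
        super().__init__()
        dims = [3] + [hidden] * layers + [5]
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [torch.nn.Linear(d_in, d_out), torch.nn.Tanh()]
        self.body = torch.nn.Sequential(*blocks[:-1])  # no activation on the output

    def forward(self, p_tau):
        return self.body(p_tau)

# Usage: a batch of label/time triples -> predicted positions, velocities, pressure.
x = FluidNet()(torch.rand(16, 3))
print(x.shape)  # torch.Size([16, 5])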
Thereafter, in order to make the whole problem well-posed, in step S30, the boundary condition constraint and the initial condition constraint of the fluid dynamics system are acquired. For example, assuming that the motion of a free liquid surface is simulated in an unsealed rectangular container (with no motion of the fluid in the front-back direction, so that the target system can be described by two-dimensional equations), the initial condition constraint x0(p) can be expressed as:
i(a, b, 0) = i0(a, b), j(a, b, 0) = j0(a, b), u(a, b, 0) = u0(a, b), v(a, b, 0) = v0(a, b), P(a, b, 0) = P0(a, b)    (formula 8)
Here, i(a, b, 0) = i0(a, b) is used as an example to illustrate the meaning of the parameters: it indicates that the position abscissa of the fluid particle labelled (a, b) at the initial time (τ = 0) is i0(a, b), where i0(a, b) has some known form.
For example, as shown in fig. 2, the rectangular container has a length L and a height H, i(a, b, τ) lies along the length direction and j(a, b, τ) along the height direction, and the curve drawn in the physical space shows a possible state of the liquid level at a certain time.
For example, the velocity of the fluid at the boundary of the preset region is 0, the boundary positions of the fluid particles are constrained, and for the free liquid surface the pressure of the fluid particles on the surface is 0, so that the resulting boundary condition constraint G(p) can be expressed as:
[Formula 9: the boundary condition constraint G(p), which fixes the positions of the fluid particles at the container walls, sets the fluid velocity at the walls to 0, and sets the pressure P to 0 on the free liquid surface.]
Then, in step S30, a set of second output parameters x̂(p, τ; θ) corresponding to each set of first output parameters x(p, τ; θ) is calculated according to the above formula 2 and formula 3.
For example, given Q(p), in one embodiment the mapping relationship between the dependent variable x(p, τ) of the differential equation and the corresponding intermediate output parameters, denoted x̃(p, τ; θ), is shown in formula 10:
[Formula 10: the component-wise mapping built from G(p) and Q(p) according to formula 2; for example, its first component is constructed so that whenever a = L the corresponding boundary condition constraint is satisfied automatically.]
For example, the mapping relationship between the intermediate output parameters x̃(p, τ; θ) and the second output parameters x̂(p, τ; θ) is shown in formula 11:
[Formula 11: the component-wise mapping from x̃(p, τ; θ) to x̂(p, τ; θ) according to formula 3, which blends the initial condition constraint x0(p) into the output near the initial time.]
Thereafter, in step S40, i in formula 7 is replaced with the corresponding component î given by formula 11, j in formula 7 is replaced with ĵ given by formula 11, and so on, thereby obtaining the differential equation set used for calculating the residuals, i.e., the differential equation set to be fitted. The loss value is then obtained from the plurality of sets of second output parameters x̂(p, τ; θ) and this differential equation set, in combination with formula 5.
Thereafter, in step S50, parameters of the neural network are adjusted based on the loss value.
The above process is repeatedly performed until a neural network satisfying a predetermined convergence condition is obtained.
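As an illustration only, a compact sketch of one pass through steps S10 to S50 is given below; sample_inputs, constrain, and residual_loss are placeholders standing in for the input sampling, formulas 2 and 3, and formula 5 respectively, and the convergence criterion is an assumption for illustration.

import torch

def train(model, sample_inputs, constrain, residual_loss, steps=10000, tol=1e-5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(steps):
        p_tau = sample_inputs()               # step S10: sets of training input parameters (p, tau)
        x = model(p_tau)                      # step S20: first output parameters x(p, tau; theta)
        x_hat = constrain(p_tau, x)           # step S30: add boundary/initial condition constraints
        loss = residual_loss(p_tau, x_hat)    # step S40: residual-based loss value
        opt.zero_grad()
        loss.backward()                       # step S50: adjust the parameters of the neural network
        opt.step()
        if loss.item() < tol:                 # predetermined convergence condition
            break
    return model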
Fig. 3 is a block diagram of a neural network according to an embodiment of the present disclosure.
As shown in fig. 3, the neural network includes a fully-connected network. A set of training input parameters (p, τ) = (a, b, τ) is input into the fully-connected network to obtain a corresponding set of first output parameters x(p, τ; θ) = (i, j, u, v, P).
Then, with reference to formula 2 and formula 3 in step S30 above, a set of second output parameters x̂(p, τ; θ) corresponding to the set of first output parameters x(p, τ; θ) is calculated, where (i0, j0, u0, v0, P0) denotes the initial condition x0(p), and the partial derivative operations in the formula are implemented by an automatic differentiation framework.
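As an illustration only, a minimal PyTorch sketch of obtaining such a partial derivative with an automatic differentiation framework is given below; the small network and the particular derivative taken (the derivative of i with respect to τ) are assumptions for illustration.

import torch

# Hypothetical fully-connected network: inputs (a, b, tau), outputs (i, j, u, v, P).
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 5),
)

p_tau = torch.rand(128, 3, requires_grad=True)   # sets of (a, b, tau)
out = net(p_tau)
i = out[:, 0]                                    # position abscissa i(a, b, tau)

# Partial derivative of i with respect to tau via automatic differentiation;
# tau is the third input column, and create_graph keeps the result differentiable.
grads = torch.autograd.grad(i.sum(), p_tau, create_graph=True)[0]
i_tau = grads[:, 2]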
Thereafter, in step S40, based on the plurality of sets of second output parameters
Figure BDA0003557149620000199
The loss value is calculated according to the loss function shown in formula 5, and the specific process is as described in step S40, which is not described herein again.
And then, adjusting parameters of the neural network according to the loss values, wherein the specific process is as described in step S50, and is not described herein again.
For example, for a liquid for which the incompressible, inviscid assumption can be made physically, the three-dimensional partial differential equation set (Euler equations) describing the fluid dynamics system under gravity alone is shown in formula 12:
∂u/∂t + u·∂u/∂i + v·∂u/∂j + w·∂u/∂z = -(1/ρ)·∂P/∂i
∂v/∂t + u·∂v/∂i + v·∂v/∂j + w·∂v/∂z = -(1/ρ)·∂P/∂j - g
∂w/∂t + u·∂w/∂i + v·∂w/∂j + w·∂w/∂z = -(1/ρ)·∂P/∂z
∂u/∂i + ∂v/∂j + ∂w/∂z = 0    (formula 12)
Similarly to the parameter definitions of formula 6, (i, j, z) represents the three-dimensional position coordinates of the fluid particle and (u, v, w) represents its three-dimensional velocity. Under the Euler view, for example, the independent variables may be the three-dimensional position coordinates (i, j, z) and time t of the fluid particle, and the dependent variables may be the three-dimensional velocity (u, v, w) and pressure P of the fluid particle.
The Euler equations under the Euler view in formula 12 are converted into differential equations under the Lagrangian view, and the converted differential equation set is shown in formula 13:
[Formula 13: the equation set of formula 12 rewritten under the Lagrangian view, with the position (i, j, z), velocity (u, v, w), and pressure P of each fluid particle expressed as functions of the particle label (a, b, c) and time τ.]
Here, (a, b, c) denotes the label of the fluid particle, (i, j, z) denotes the position of the fluid particle, (u, v, w) denotes the velocity of the fluid particle, τ denotes time, P denotes the pressure of the fluid particle, and subscript letters denote partial derivatives, e.g., i_τ represents the partial derivative of the position abscissa i of the fluid particle with respect to time τ.
It should be noted that the above conversion is one possible mapping manner; those skilled in the art may also adopt other forms of converted differential equations according to actual needs, and the disclosure is not limited in this respect.
For example, under the Lagrangian view, the independent variables are the label (a, b, c) and time τ of the fluid particle, and the dependent variables are the position (i, j, z), the velocity (u, v, w), and the pressure P of the fluid particle.
For example, assume that the motion of a free liquid surface in an unsealed rectangular container is simulated; the initial condition constraint x0(p) can be expressed as:
i(a, b, c, 0) = i0(a, b, c), j(a, b, c, 0) = j0(a, b, c), z(a, b, c, 0) = z0(a, b, c), u(a, b, c, 0) = u0(a, b, c), v(a, b, c, 0) = v0(a, b, c), w(a, b, c, 0) = w0(a, b, c), P(a, b, c, 0) = P0(a, b, c)    (formula 14)
The definition of the initial condition constraint x0(p) is the same as for the two-dimensional system and is not described here in detail.
For example, the rectangular container has a length L, a height H, and a depth D; the velocity of the fluid at the boundary of the preset region is 0, the boundary positions of the fluid particles are constrained, and for the free liquid surface the pressure of the fluid particles on the surface is 0, whereby the resulting boundary condition constraint G(p) can be expressed as:
[Formula 15: the three-dimensional boundary condition constraint G(p), which fixes the positions of the fluid particles at the container walls, sets the fluid velocity at the walls to 0, and sets the pressure P to 0 on the free liquid surface.]
Then, in step S30, a set of second output parameters x̂(p, τ; θ) corresponding to each set of first output parameters x(p, τ; θ) is calculated according to the above formula 2 and formula 3.
For example, given Q(p), in one embodiment the mapping relationship between the dependent variable x(p, τ) of the differential equation and the corresponding intermediate output parameters x̃(p, τ; θ) is shown in formula 16:
[Formula 16: the three-dimensional counterpart of formula 10, i.e., the component-wise mapping built from G(p) and Q(p) according to formula 2 so that the boundary condition constraint is satisfied automatically.]
for example,
Figure BDA0003557149620000215
and
Figure BDA0003557149620000216
the mapping relationship of (a) is shown in equation 17:
Figure BDA0003557149620000221
Thereafter, in step S40, i in formula 13 is replaced with the corresponding component î given by formula 17, j in formula 13 is replaced with ĵ given by formula 17, and so on, thereby obtaining the differential equation set used for calculating the residuals. The loss value is then obtained from the plurality of sets of second output parameters x̂(p, τ; θ) and this differential equation set, in combination with formula 5.
Thereafter, in step S50, parameters of the neural network are adjusted based on the loss value.
The above process is repeatedly performed until a neural network satisfying a predetermined convergence condition is obtained.
With the training method according to at least one embodiment of the present disclosure, because the differential equations under the Lagrangian view are characterized, the trained neural network can effectively handle fluid simulation problems containing a free surface and can accurately compute the position and velocity of the liquid surface. In addition, under the same parameters, since the physics-guided neural network trained by the training method provided in at least one embodiment of the present disclosure does not need to optimize the fit at initial points and boundary points, the network training has a better convergence rate, higher computational efficiency, and lower memory occupation.
Fig. 4A and 4B are comparison diagrams of simulation results of free liquid level simulation provided by an embodiment of the present disclosure.
For example, the simulated target system is a fluid dynamics system, and the simulated physical phenomenon is the medium-to-high-amplitude sloshing of the liquid level of a fluid in an unsealed container.
For example, training the neural network using the contents described in the foregoing equations 6 to 11 results in the simulation results labeled "this application" on the right side in fig. 4A and 4B. Also, in fig. 4A and 4B, the simulation result marked with "conventional physical guidance network" on the left side represents the simulation result of the neural network obtained in the conventional training manner using the physical guidance network.
As shown in fig. 4A and 4B, because the training method provided in at least one embodiment of the present disclosure adds the initial condition constraint and the boundary condition constraint to the output of the neural network, the simulation result of the present application satisfies the initial and boundary conditions well and the simulated liquid-level sloshing is accurate and physically realistic, whereas the simulation result obtained in the conventional manner fails to simulate the free liquid surface.
Fig. 5A to 5D are comparison diagrams of simulation results of free liquid level simulation provided by another embodiment of the present disclosure.
In fig. 5A to 5D, the simulated target system is a hydrodynamic system, and the simulated physical phenomenon is a "dam break" phenomenon, such as a change in water flow when a dam is opened and water is discharged.
In fig. 5A to 5D, the simulation result marked with "conventional physical guidance network" on the left side represents the simulation result of the neural network obtained by using the physical guidance network according to the conventional training mode, and the simulation result marked with "this application" on the right side represents the simulation result of the neural network obtained by using the training method provided by at least one embodiment of the present disclosure.
Fig. 5A illustrates the simulation results at the initial time (t = 0). As shown in fig. 5A, since the initial condition constraint is added to the output of the neural network in the training method provided in at least one embodiment of the present disclosure, the simulation result of the present application satisfies the initial condition at the initial time, whereas the simulation result obtained in the conventional manner does not satisfy the initial condition well at the initial time.
Fig. 5B to 5D respectively show the simulation results at t = 0.05 s, t = 0.3 s, and t = 0.7 s. As is apparent from fig. 5B to 5D, because the training method provided in at least one embodiment of the present disclosure adds the boundary condition constraint to the output of the neural network, the boundary conditions and initial conditions of the simulation result of the present application are satisfied well and the simulated "dam break" phenomenon is accurate and physically realistic, whereas the simulation result obtained in the conventional manner does not satisfy the boundary conditions well; as time goes on, its error grows larger and larger, and finally it even fails to simulate the free liquid surface.
Practical tests show that the neural network trained by the training method provided in at least one embodiment of the present disclosure achieves a good liquid-level simulation effect and can capture phenomena such as nonlinear sloshing, impact, and rebound, whereas the traditional physics-guided neural network cannot even simulate the correct fluid phenomena.
For example, at least one embodiment of the present disclosure further provides a data processing method. Fig. 6 is a schematic flow chart of a data processing method according to at least one embodiment of the present disclosure.
As shown in FIG. 6, the data processing method includes steps S60 to S80.
At step S60, at least one set of input parameters corresponding to the target system is obtained.
At step S70, at least one set of input parameters is input into the neural network to obtain at least one set of intermediate output results.
In step S80, the at least one group of intermediate output results is transformed to obtain at least one group of final output results corresponding to the at least one group of intermediate output results, wherein the at least one group of final output results is obtained by adding boundary condition constraints and initial condition constraints to the at least one group of intermediate output results, respectively.
For example, the definition of the input parameters is the same as that of the training input parameters; if the training input parameters are expressed as (i, j, t), the input parameters likewise include the abscissa of the fluid particle, the ordinate of the fluid particle, and the time t, which is not repeated here.
For example, the neural network is at least partially trained according to the training method of the neural network described in at least one embodiment of the present disclosure.
The training method for neural networks is as described above and will not be described here.
For example, the trained neural network can characterize the corresponding differential equation of the target system.
For example, a set of input parameters may be represented as (q, τ), where τ represents the time sub-parameter and q is a vector representing the independent variables of the differential equation other than the time sub-parameter; for example, a set of input parameters may be preset values.
For example, a set of intermediate output results may be represented as x(q, τ), i.e., the dependent variables of the differential equation.
For example, a set of final output results may be represented as x̂(q, τ), namely the numerical solution of the differential equation to be fitted that corresponds to the target system, and this numerical solution satisfies the boundary condition constraint and the initial condition constraint. As previously mentioned, the differential equation to be fitted is obtained by replacing x(p, τ) in the original differential equation with x̂(p, τ; θ).
For example, the conversion process can be implemented by the following equations 18 and 19:
[Formula 18: the counterpart of formula 2, applied to the input parameters (q, τ) to obtain the intermediate output results.]
[Formula 19: the counterpart of formula 3, applied to the intermediate output results to obtain the final output results x̂(q, τ).]
for the meaning of the parameters in equation 18, equation 2 can be referred to, and is not described herein. For the meaning of the parameters of equation 19, refer to equation 3, which is not described herein.
Therefore, in this data processing method, the differential equation that is characterized differs from the original differential equation(s) characterized by a conventional physics-guided neural network: what is characterized here is the differential equation to be fitted, which, as described above, has a different structure from the original differential equation but expresses the same meaning.
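As an illustration only, a minimal PyTorch sketch of steps S60 to S80 at inference time is given below; constrain is a placeholder standing in for formulas 18 and 19, and the network is assumed to have been trained as described above.

import torch

def infer(model, q_tau, constrain):
    """Steps S60-S80: run the trained network on a batch of input parameters
    (q, tau) and convert the intermediate outputs into final outputs that
    satisfy the boundary and initial condition constraints."""
    model.eval()
    with torch.no_grad():
        x = model(q_tau)             # step S70: intermediate output results x(q, tau)
        x_hat = constrain(q_tau, x)  # step S80: final output results with constraints applied
    return x_hat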
For example, due to the high degree of parallelism of the neural network and the relaxed requirement on computational accuracy, the data processing method can be executed in parallel on a graphics processor, a data processor, or a neural network processor, and can be accelerated under mixed precision.
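As an illustration only, a minimal sketch of running that forward pass under mixed precision with PyTorch's autocast is given below; the CUDA device and float16 data type are assumptions for illustration.

import torch

def infer_mixed_precision(model, q_tau):
    model = model.cuda().eval()
    q_tau = q_tau.cuda()
    # Run the forward pass in float16 where safe, keeping float32 elsewhere.
    with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
        return model(q_tau)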
In addition, when the neural network is a physics-guided neural network, the at least one set of output parameters obtained by the data processing method also inherits the advantages of the physics-guided neural network, such as more accurate calculation and, in particular, better simulation of nonlinear terms.
At least one embodiment of the present disclosure further provides a training apparatus for a neural network, and fig. 7 is a schematic block diagram of the training apparatus provided in at least one embodiment of the present disclosure.
As shown in fig. 7, the training apparatus 100 for a neural network may include an acquisition unit 101, an arithmetic processing unit 102, a constraint processing unit 103, a loss value calculation unit 104, and an adjustment unit 105. These components are interconnected by a bus system and/or another form of connection mechanism (not shown). For example, these modules may be implemented as hardware (e.g., circuit) modules, software modules, or any combination of the two; the same applies to the following embodiments and is not repeated. These units may be implemented, for example, by a central processing unit (CPU), a graphics processor (GPU), a tensor processor (TPU), a neural network processor (NPU), a data processor (DPU), an AI accelerator, a field programmable gate array (FPGA), or another form of processing unit having data processing capability and/or instruction execution capability, together with corresponding computer instructions. It should be noted that the components and configuration of the training apparatus 100 shown in FIG. 7 are exemplary only, not limiting, and the training apparatus 100 may have other components and configurations as needed.
For example, the obtaining unit 101 is configured to obtain at least one set of training input parameters.
For example, the arithmetic processing unit 102 is configured to input the at least one set of training input parameters into the neural network for arithmetic processing, so as to obtain at least one set of first output parameters corresponding to the at least one set of training input parameters.
For example, the constraint processing unit 103 is configured to obtain at least one set of second output parameters corresponding to the at least one set of first output parameters, respectively, where the at least one set of second output parameters is obtained by adding a boundary condition constraint and an initial condition constraint to the at least one set of first output parameters, respectively.
For example, the loss value calculating unit 104 is configured to obtain, based on the at least one set of second output parameters, the loss value of the neural network for the at least one set of first output parameters, which is calculated by a loss function corresponding to the neural network.
For example, the adjusting unit 105 is configured to adjust a parameter of the neural network based on the loss value.
For example, the training apparatus of the neural network is used to train a training network, where the training network includes the neural network, the loss function, and the like (not shown), and the adjustment unit 105 is used to train the neural network to be trained so as to obtain a trained neural network.
It should be noted that the structure and function of the training network are the same as those of the neural network in the embodiment of the neural network training method described above, and are not described herein again.
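As an illustration only, the five units could be organized as follows in Python; the class, method names, and bodies are assumptions for illustration and do not correspond to the actual implementation of the apparatus.

class TrainingApparatus:
    """Sketch of the training apparatus 100 with its five units."""

    def __init__(self, network, loss_fn, optimizer):
        self.network = network
        self.loss_fn = loss_fn
        self.optimizer = optimizer

    def acquire(self):                      # acquisition unit 101
        raise NotImplementedError           # sample sets of training input parameters

    def process(self, p_tau):               # arithmetic processing unit 102
        return self.network(p_tau)          # first output parameters

    def constrain(self, p_tau, x):          # constraint processing unit 103
        raise NotImplementedError           # add boundary/initial condition constraints

    def compute_loss(self, p_tau, x_hat):   # loss value calculation unit 104
        return self.loss_fn(p_tau, x_hat)

    def adjust(self, loss):                 # adjustment unit 105
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()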
For example, the acquisition unit 101, the arithmetic processing unit 102, the constraint processing unit 103, the loss value calculation unit 104, and the adjustment unit 105 may include codes and programs stored in a memory; the processor may execute the code and the program to realize some or all of the functions of the acquisition unit 101, the arithmetic processing unit 102, the constraint processing unit 103, the loss value calculation unit 104, and the adjustment unit 105 as described above.
For example, the acquisition unit 101, the arithmetic processing unit 102, the constraint processing unit 103, the loss value calculation unit 104, and the adjustment unit 105 may be dedicated hardware devices for implementing some or all of the functions of the acquisition unit 101, the arithmetic processing unit 102, the constraint processing unit 103, the loss value calculation unit 104, and the adjustment unit 105 described above. For example, the acquisition unit 101, the arithmetic processing unit 102, the constraint processing unit 103, the loss value calculation unit 104, and the adjustment unit 105 may be one circuit board or a combination of a plurality of circuit boards for realizing the functions as described above.
In the embodiment of the present application, the one or a combination of a plurality of circuit boards may include: (1) one or more processors; (2) one or more non-transitory memories connected to the processor; and (3) firmware stored in the memory executable by the processor.
It should be noted that the acquisition unit 101 may be configured to implement step S10 shown in fig. 1, the arithmetic processing unit 102 may be configured to implement step S20 shown in fig. 1, the constraint processing unit 103 may be configured to implement step S30 shown in fig. 1, the loss value calculation unit 104 may be configured to implement step S40 shown in fig. 1, and the adjustment unit 105 may be configured to implement step S50 shown in fig. 1. Therefore, for specific descriptions of the functions that can be realized by the acquisition unit 101, the arithmetic processing unit 102, the constraint processing unit 103, the loss value calculation unit 104, and the adjustment unit 105, reference may be made to the related descriptions of step S10 to step S50 in the above embodiments of the training method, and repeated descriptions are omitted. In addition, the training apparatus 100 can achieve technical effects similar to those of the foregoing training method, which are not repeated here.
It should be noted that, in the embodiments of the present disclosure, the training apparatus 100 may include more or fewer circuits or units, and the connection relationships among the circuits or units are not limited and may be determined according to actual requirements. The specific construction of each circuit or unit is not limited and may be composed of analog devices according to circuit principles, of digital chips, or in another suitable manner.
Fig. 8 is a schematic block diagram of a data processing apparatus according to at least one embodiment of the present disclosure.
As shown in fig. 8, the data processing apparatus 200 may include an input acquisition unit 201, a processing unit 202. These components are interconnected by a bus system and/or other form of connection mechanism (not shown). It should be noted that the components and configuration of data processing device 200 shown in FIG. 8 are exemplary only, and not limiting, and that data processing device 200 may have other components and configurations as desired.
For example, the input obtaining unit 201 is configured to obtain at least one set of input parameters corresponding to the target system.
For example, the processing unit 202 is configured to input at least one set of input parameters into the neural network to obtain at least one set of output parameters.
For example, at least part of the neural network is obtained by training according to the training method described in any embodiment of the present disclosure, and the specific process of training and the like may refer to the related description in the embodiment of the training method of the neural network, and repeated parts are not repeated.
At least one embodiment of the present disclosure further provides an electronic device, and fig. 9 is a schematic diagram of an electronic device provided in at least one embodiment of the present disclosure.
For example, as shown in fig. 9, the electronic device includes a processor 701, a communication interface 702, a memory 703, and a communication bus 704. The processor 701, the communication interface 702, and the memory 703 communicate with one another via the communication bus 704, and components such as the processor 701, the communication interface 702, and the memory 703 may also communicate via a network connection. The present disclosure does not limit the type and function of the network here. It should be noted that the components of the electronic device shown in fig. 9 are only exemplary, not limiting, and the electronic device may have other components according to the actual application.
For example, memory 703 is used to store computer readable instructions non-transiently. The processor 701 is configured to execute computer readable instructions to implement the training method or the data processing method of the neural network according to any one of the above embodiments. For specific implementation and related explanation of each step of the training method or the data processing method of the neural network, reference may be made to the above embodiments of the training method or the data processing method of the neural network, which are not described herein again.
For example, other implementation manners of the training method of the neural network or the data processing method implemented by the processor 701 executing the computer readable instructions stored in the memory 703 are the same as the implementation manners mentioned in the foregoing method embodiment, and are not described herein again.
For example, the communication bus 704 may be a peripheral component interconnect standard (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
For example, communication interface 702 is used to enable communication between an electronic device and other devices.
For example, the processor 701 and the memory 703 may be located on a server side (or cloud side).
For example, the processor 701 may control other components in the electronic device to perform desired functions. The processor 701 may be a device having data processing capability and/or program execution capability, such as a Central Processing Unit (CPU), a Tensor Processor (TPU), a neural Network Processor (NPU), a Data Processor (DPU), an AI accelerator, or a Graphics Processing Unit (GPU); but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The Central Processing Unit (CPU) may be an X86 or ARM architecture, etc.
For example, memory 703 may include any combination of one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM), cache memory (or the like). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer-readable instructions may be stored on the computer-readable storage medium and executed by the processor 701 to implement various functions of the electronic device. Various application programs and various data and the like can also be stored in the storage medium.
For example, for the detailed description of the process of the electronic device executing the training method or the data processing method of the neural network, reference may be made to the related description in the embodiment of the training method or the data processing method of the neural network, and repeated descriptions are omitted.
Fig. 10 is a schematic diagram of a non-transitory computer-readable storage medium according to at least one embodiment of the disclosure. For example, as shown in fig. 10, the storage medium 800 may be a non-transitory computer-readable storage medium, on which one or more computer-readable instructions 801 may be non-temporarily stored on the storage medium 800. For example, the computer readable instructions 801, when executed by a processor, may perform one or more steps in a training method according to the data processing method or neural network described above.
For example, the storage medium 800 may be applied to the electronic device described above, and for example, the storage medium 800 may include a memory in the electronic device.
For example, the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a flash memory, or any combination of the above, as well as other suitable storage media.
For example, the description of the storage medium 800 may refer to the description of the memory in the embodiment of the electronic device, and repeated descriptions are omitted.
Fig. 11 is a schematic diagram of a hardware environment according to at least one embodiment of the present disclosure. The electronic equipment provided by the disclosure can be applied to an Internet system.
The functions of the data processing apparatus and/or the electronic device referred to in the present disclosure may be implemented using the computer system shown in fig. 11. Such computer systems may include personal computers, laptops, tablets, cell phones, personal digital assistants, smart glasses, smart watches, smart rings, smart helmets, and any other smart portable or wearable device. The particular system in this embodiment uses a functional block diagram to illustrate a hardware platform that contains a user interface. Such a computer device may be a general-purpose computer device or a special-purpose computer device, and both may be used to implement the data processing apparatus and/or the electronic device in this embodiment. The computer system may include any components needed to implement the data processing described herein. For example, the computer system can be realized by the computer device through its hardware devices, software programs, firmware, and combinations thereof. For convenience, only one computer device is depicted in fig. 11, but the computer functions needed for the data processing described in this embodiment can also be implemented in a distributed manner by a set of similar platforms, spreading the processing load of the computer system.
As shown in FIG. 11, the computer system may include a communication port 250 coupled to a network that enables data communication; for example, the computer system may send and receive information and data via the communication port 250, i.e., the communication port 250 enables the computer system to communicate wirelessly or by wire with other electronic devices to exchange data. The computer system may also include a processor group 220 (i.e., the processor described above) for executing program instructions; the processor group 220 may consist of at least one processor (e.g., a CPU). The computer system may include an internal communication bus 210. The computer system may include various forms of program storage units and data storage units (i.e., the memory or storage medium described above), such as a hard disk 270, a read-only memory (ROM) 230, and a random access memory (RAM) 240, which can be used to store various data files used in computer processing and/or communication, as well as the program instructions executed by the processor group 220. The computer system may also include an input/output component 260 for implementing input/output data flow between the computer system and other components (e.g., the user interface 280).
Generally, the following devices may be connected to the input/output assembly 260: input devices such as touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, gyroscopes, etc.; output devices such as displays (e.g., LCD, OLED display, etc.), speakers, vibrators, etc.; storage devices including, for example, magnetic tape, hard disk, etc.; and a communication interface.
While fig. 11 illustrates a computer system having various devices, it is to be understood that a computer system is not required to have all of the devices illustrated and that a computer system may alternatively have more or fewer devices.
For the present disclosure, there are also the following points to be explained:
(1) the drawings of the embodiments of the disclosure only relate to the structures related to the embodiments of the disclosure, and other structures can refer to the common design.
(2) In the drawings used to describe the embodiments of the present disclosure, the thicknesses and dimensions of layers or structures may be exaggerated for clarity. It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" or "under" another element, it can be "directly on" or "under" the other element, or intervening elements may be present.
(3) Without conflict, embodiments of the present disclosure and features of the embodiments may be combined with each other to arrive at new embodiments.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and the scope of the present disclosure should be subject to the scope of the claims.

Claims (18)

1. A method of training a neural network, comprising:
acquiring at least one group of training input parameters;
inputting the at least one group of training input parameters into the neural network for operation processing to obtain at least one group of first output parameters corresponding to the at least one group of training input parameters;
acquiring at least one group of second output parameters corresponding to the at least one group of first output parameters respectively, wherein the at least one group of second output parameters are obtained by adding boundary condition constraints and initial condition constraints to the at least one group of first output parameters respectively;
obtaining loss values of the neural network for the at least one group of first output parameters, which are calculated through a loss function corresponding to the neural network, based on the at least one group of second output parameters;
adjusting parameters of the neural network based on the loss values.
2. Training method according to claim 1, wherein the neural network is trained for solving differential equations describing the motion and/or state of the target system.
3. The training method according to claim 2, wherein obtaining at least one set of second output parameters corresponding to the at least one set of first output parameters respectively comprises:
acquiring boundary condition constraint and initial condition constraint corresponding to the target system;
for any group of first output parameters, determining a group of intermediate output parameters corresponding to the any group of first output parameters, wherein when a group of training input parameters corresponding to the any group of first output parameters indicates that the group of training input parameters is located at a preset region boundary, the corresponding group of intermediate output parameters is set to a value specified by the boundary condition constraint;
and determining a group of second output parameters corresponding to the any group of first output parameters according to the corresponding group of intermediate output parameters and the initial condition constraint.
4. The training method according to claim 3, wherein for any set of first output parameters, determining a set of intermediate output parameters corresponding to the any set of first output parameters comprises:
obtaining the corresponding set of intermediate output parameters according to the following formula:
x̃(p, τ; θ) = G(p) + Q(p) ⊙ x(p, τ; θ)
wherein x̃(p, τ; θ) represents the corresponding set of intermediate output parameters, (p, τ) represents the any one set of training input parameters, p is in vector form, and τ represents the time corresponding to p; G(p) represents the boundary condition constraint; Q(p) is positively correlated with the distance between p in the any one set of training input parameters and the preset region boundary, and Q(p) is 0 when p is located at the preset region boundary; ⊙ indicates a Hadamard product operation; and x(p, τ; θ) represents the set of first output parameters corresponding to the any one set of training input parameters.
5. The training method of claim 3, wherein each set of training input parameters comprises a temporal sub-parameter,
determining a set of second output parameters corresponding to the any set of first output parameters according to the corresponding set of intermediate output parameters and the initial condition constraints, including:
when the time sub-parameter in any one group of training input parameters is indicated as the initial time, setting the corresponding group of second output parameters as the values specified by the initial conditions;
determining the corresponding group of second output parameters according to the corresponding group of intermediate output parameters and the initial condition constraint when the time sub-parameter indication in any group of training input parameters is within a preset time range from an initial time;
and after the time sub-parameter indication in any one set of training input parameters is within the preset time range, taking the corresponding set of intermediate output parameters as the corresponding set of second output parameters.
6. The training method of claim 5, wherein the corresponding set of second output parameters is derived according to the following formula:
x̂(p, τ; θ) = x0(p) + T(τ; k) ⊙ (x̃(p, τ; θ) − x0(p))
wherein x̂(p, τ; θ) represents the corresponding set of second output parameters; T(τ; k) is a transition function used to adjust the preset time range through a transition parameter k, and T(τ; k) satisfies the following two conditions: T(0; k) = 0, and T(τ; k) tends to 1 as τ increases; and x0(p) represents the initial condition constraint.
7. Training method according to claim 6, wherein the transition parameter k is positively correlated with the length of the preset time range.
8. The training method according to claim 2, wherein obtaining the loss value of the neural network for the at least one set of first output parameters, which is calculated by the loss function corresponding to the neural network, based on the at least one set of second output parameters comprises:
obtaining a residual error result corresponding to the at least one group of second output parameters according to the at least one group of second output parameters and the differential equation;
and obtaining a loss value of the neural network according to the residual error result corresponding to the at least one group of second output parameters.
9. The training method of claim 8, wherein the loss function corresponding to the neural network is represented as:
L(θ) = Σ_{i=1}^{M} λ_i · R_i(x̂(p, τ; θ))
wherein L(θ) represents the loss function, θ represents the parameters of the neural network, R(·) represents a residual calculation, (p, τ) represents any one of the at least one set of training input parameters, x̂(p, τ; θ) represents the set of second output parameters corresponding to the any one set of training input parameters, M represents the number of differential equations corresponding to the target system, i is a positive integer, Σ represents an accumulation operation, λ_i represents the residual weight coefficient corresponding to the i-th differential equation, and R_i(x̂(p, τ; θ)) represents the residual result calculated from the i-th differential equation.
10. The training method according to claim 2, wherein obtaining the loss value of the neural network for the at least one set of first output parameters, which is calculated by the loss function corresponding to the neural network, based on the at least one set of second output parameters comprises:
calculating a residual error result corresponding to the at least one group of second output parameters according to the at least one group of second output parameters and the differential equation;
calculating a data fitting loss value according to output data and label data, wherein the output data is output obtained by inputting training data into the neural network, and the label data is a standard value corresponding to the training data;
and obtaining a loss value of the neural network according to the residual error result corresponding to the at least one group of second output parameters and the data fitting loss value.
11. The training method of any one of claims 1-10, wherein the neural network is a physical steering neural network.
12. Training method according to any of the claims 2-10, wherein the target system is a fluid dynamic system,
the differential equation is obtained by converting a partial differential equation under an Euler viewing angle, and the differential equation describes the fluid dynamics system under a Lagrangian viewing angle.
13. The training method of claim 12, wherein obtaining the at least one set of training input parameters comprises:
acquiring at least one set of training input parameters under the Lagrangian view.
14. A method of data processing, comprising:
acquiring at least one group of input parameters corresponding to a target system;
inputting the at least one set of input parameters into a neural network to obtain at least one set of intermediate output results;
performing conversion processing on the at least one group of intermediate output results to obtain at least one group of final output results corresponding to the at least one group of intermediate output results one to one, wherein the at least one group of final output results are obtained by respectively adding boundary condition constraints and initial condition constraints to the at least one group of intermediate output results;
wherein the neural network is trained at least in part according to the training method of any one of claims 1-13.
15. An apparatus for training a neural network, comprising:
an acquisition unit configured to acquire at least one set of training input parameters;
the operation processing unit is configured to input the at least one group of training input parameters into the neural network for operation processing to obtain at least one group of first output parameters corresponding to the at least one group of training input parameters;
the constraint processing unit is configured to acquire at least one group of second output parameters corresponding to the at least one group of first output parameters, wherein the at least one group of second output parameters are obtained by adding boundary condition constraints and initial condition constraints to the at least one group of first output parameters respectively;
a loss value calculation unit configured to obtain, based on the at least one set of second output parameters, a loss value of the neural network for the at least one set of first output parameters, which is calculated by a loss function corresponding to the neural network;
an adjusting unit configured to adjust a parameter of the neural network based on the loss value.
16. A data processing apparatus comprising:
the input acquisition unit is configured to acquire at least one group of input parameters corresponding to a target system;
a processing unit configured to input the at least one group of input parameters into the neural network to obtain at least one group of output parameters;
wherein the neural network is trained at least in part according to the training method of any one of claims 1-13.
17. An electronic device, comprising:
a memory non-transiently storing computer executable instructions;
a processor configured to execute the computer-executable instructions,
wherein the computer-executable instructions, when executed by the processor, implement a method of training a neural network according to any one of claims 1-13 or a method of data processing according to claim 14.
18. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer-executable instructions that, when executed by a processor, implement the training method of the neural network of any one of claims 1-13 or the data processing method of claim 14.
CN202210281615.0A 2022-03-21 2022-03-21 Training method and device, data processing method, electronic device and storage medium Pending CN114611678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210281615.0A CN114611678A (en) 2022-03-21 2022-03-21 Training method and device, data processing method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210281615.0A CN114611678A (en) 2022-03-21 2022-03-21 Training method and device, data processing method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN114611678A true CN114611678A (en) 2022-06-10

Family

ID=81865892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210281615.0A Pending CN114611678A (en) 2022-03-21 2022-03-21 Training method and device, data processing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114611678A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024031525A1 (en) * 2022-08-11 2024-02-15 Robert Bosch Gmbh Method and apparatus for bi-level physics-informed neural networks for pde constrained optimization


Similar Documents

Publication Publication Date Title
Wiewel et al. Latent space physics: Towards learning the temporal evolution of fluid flow
KR102127524B1 (en) Vector computation unit of neural network processor
Paul‐Dubois‐Taine et al. An adaptive and efficient greedy procedure for the optimal training of parametric reduced‐order models
Dodd et al. A fast pressure-correction method for incompressible two-fluid flows
Farhat et al. Dimensional reduction of nonlinear finite element dynamic models with finite rotations and energy‐based mesh sampling and weighting for computational efficiency
Rakhsha et al. Using a half-implicit integration scheme for the SPH-based solution of fluid–solid interaction problems
Yang et al. Realtime two‐way coupling of meshless fluids and nonlinear FEM
He et al. A deep learning energy-based method for classical elastoplasticity
CA3182690A1 (en) Method and system for real-time simulation of elastic body
JP2020119189A (en) Fluid analysis system, method for analyzing fluid, and fluid analysis program
Ahmad et al. Local radial basis function collocation method for stokes equations with interface conditions
Li et al. Plasticitynet: Learning to simulate metal, sand, and snow for optimization time integration
CN114611678A (en) Training method and device, data processing method, electronic device and storage medium
Wassing et al. Physics-informed neural networks for parametric compressible Euler equations
CN109960841B (en) Fluid surface tension simulation method, terminal equipment and storage medium
Zhu et al. Physics-informed neural networks for solving dynamic two-phase interface problems
CN114692529B (en) CFD high-dimensional response uncertainty quantification method and device, and computer equipment
CN114897146B (en) Model generation method and device and electronic equipment
Witman et al. Neural basis functions for accelerating solutions to high mach euler equations
US10810335B2 (en) Method and apparatus for explicit simulation
Lo et al. Learning based mesh generation for thermal simulation in handheld devices with variable power consumption
CN103970927A (en) Simulation Program, Simulation Method, And Simulation Device
Yu et al. Data-driven subspace enrichment for elastic deformations with collisions
US10504269B2 (en) Inertial damping for enhanced simulation of elastic bodies
Xiao et al. A Hybrid Multilayer Perceptron-Radial Basis Function (HMLP-RBF) Neural Network for Solving Hyperbolic Conservation Laws

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Country or region after: China

Address after: 201100 room 1302, 13 / F, building 16, No. 2388, Chenhang highway, Minhang District, Shanghai

Applicant after: Shanghai Bi Ren Technology Co.,Ltd.

Address before: 201100 room 1302, 13 / F, building 16, No. 2388, Chenhang highway, Minhang District, Shanghai

Applicant before: Shanghai Bilin Intelligent Technology Co.,Ltd.

Country or region before: China