WO2024104485A1 - Method and apparatus for constructing a multi-fidelity network for nuclear reactor simulation testing - Google Patents
Method and apparatus for constructing a multi-fidelity network for nuclear reactor simulation testing
- Publication number
- WO2024104485A1 (PCT/CN2023/132550)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- fidelity
- network
- data
- trained
- fidelity network
- Prior art date
Links
- 238000004088 simulation Methods 0.000 title claims abstract description 78
- 238000012360 testing method Methods 0.000 title claims abstract description 51
- 238000010276 construction Methods 0.000 title claims abstract description 16
- 238000000034 method Methods 0.000 claims abstract description 44
- 238000012549 training Methods 0.000 claims abstract description 33
- 238000004590 computer program Methods 0.000 claims description 49
- 238000003860 storage Methods 0.000 claims description 11
- 230000008878 coupling Effects 0.000 abstract description 4
- 238000010168 coupling process Methods 0.000 abstract description 4
- 238000005859 coupling reaction Methods 0.000 abstract description 4
- 230000008569 process Effects 0.000 description 15
- 238000013528 artificial neural network Methods 0.000 description 13
- 238000010586 diagram Methods 0.000 description 8
- 238000004364 calculation method Methods 0.000 description 6
- 230000008859 change Effects 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 238000004891 communication Methods 0.000 description 3
- 238000013461 design Methods 0.000 description 3
- 238000009826 distribution Methods 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 230000009257 reactivity Effects 0.000 description 3
- 230000004913 activation Effects 0.000 description 2
- 230000006399 behavior Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000009792 diffusion process Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 239000000446 fuel Substances 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 238000003062 neural network model Methods 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 230000007704 transition Effects 0.000 description 2
- 238000000342 Monte Carlo simulation Methods 0.000 description 1
- 229910052768 actinide Inorganic materials 0.000 description 1
- 150000001255 actinides Chemical class 0.000 description 1
- 230000009471 action Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000005253 cladding Methods 0.000 description 1
- 239000002826 coolant Substances 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000004992 fission Effects 0.000 description 1
- 239000012530 fluid Substances 0.000 description 1
- 229910021389 graphene Inorganic materials 0.000 description 1
- 238000000265 homogenisation Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 231100000572 poisoning Toxicity 0.000 description 1
- 230000000607 poisoning effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 229910052724 xenon Inorganic materials 0.000 description 1
- FHNFHKCVQCLJFQ-UHFFFAOYSA-N xenon atom Chemical compound [Xe] FHNFHKCVQCLJFQ-UHFFFAOYSA-N 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the present application relates to the technical field of nuclear power plant reactor core design and operation, and in particular to a multi-fidelity network construction method, apparatus, computer equipment, storage medium and computer program product for nuclear reactor simulation testing.
- simulations need to be performed for the actual or assumed operating conditions of the reactor to verify the safety of the designed or operated reactor.
- On the one hand, for the designed reactor, it is necessary to quantitatively evaluate the various operating boundaries and operating consequences of the reactor under various assumed accident conditions to ensure its safety; on the other hand, for the operating reactor, various simulation calculations need to be performed to ensure that the design calculation parameters are consistent with the actual operating parameters within a certain error range, thereby ensuring consistency between the designed reactor and the actual reactor and thus ensuring that the operating reactor has sufficient safety margins under various accident conditions.
- Some parameters directly related to the safety of reactor operation cannot be measured directly; they must be derived from measurable reactor fluid parameters (such as temperature, pressure, etc.) or neutron detector readings (such as those characterizing the fission reaction rate), combined with a simulation model of the reactor.
- The nuclear design software package PCM uses the equivalent homogenization assumption and the neutron diffusion approximation to realize the simulation of the three-dimensional core.
- Because point reactor equations have no spatial distribution, they are often used for inverse monitoring of reactivity based on power changes and xenon poisoning.
- State transition models represent the process by which the core state changes from the state at the current moment to the state at the next moment due to control actions.
- State transition models are used both to estimate the distribution of non-measurable state variables at the current moment and to predict the core state at subsequent moments, and they carry unknown errors.
- the present application provides a method for constructing a multi-fidelity network for nuclear reactor simulation testing.
- the method comprises:
- At least one trained second fidelity network is combined with a first fidelity network to obtain a multi-fidelity network, and the multi-fidelity network is trained using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- Before acquiring at least one second fidelity network according to the second fidelity data of the sample nuclear reactor, the method further includes:
- data other than the first fidelity data is used as the second fidelity data.
- obtaining at least one second fidelity network according to second fidelity data of a sample nuclear reactor includes:
- Using the second fidelity data to train at least one second fidelity network to obtain at least one trained second fidelity network includes:
- Each second fidelity network is trained using the sub-data corresponding to each fidelity level to obtain at least one trained second fidelity network.
- At least one trained second fidelity network is combined with a first fidelity network to obtain a multi-fidelity network, comprising:
- the input end of the first fidelity network and the input ends of each trained second fidelity network are used together as the input end of the multi-fidelity network, and the output end of the first fidelity network is used as the output end of the multi-fidelity network to obtain a multi-fidelity network.
- the multi-fidelity network is trained using the first fidelity data to obtain a trained multi-fidelity network, including:
- the method further comprises:
- the simulation test result of the target nuclear reactor is obtained based on the state parameters at the second moment.
- the present application also provides a multi-fidelity network construction device for nuclear reactor simulation testing.
- the device comprises:
- an acquisition module configured to acquire a first fidelity network according to first fidelity data of the sample nuclear reactor, and to acquire at least one second fidelity network according to second fidelity data of the sample nuclear reactor;
- a training module configured to train at least one second fidelity network using second fidelity data to obtain at least one trained second fidelity network
- a combination module configured to combine at least one trained second fidelity network with the first fidelity network to obtain a multi-fidelity network, and to train the multi-fidelity network using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- the present application further provides a computer device.
- the computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
- At least one trained second fidelity network is combined with a first fidelity network to obtain a multi-fidelity network, and the multi-fidelity network is trained using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- the present application further provides a computer-readable storage medium.
- the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the following steps are implemented:
- At least one trained second fidelity network is combined with a first fidelity network to obtain a multi-fidelity network, and the multi-fidelity network is trained using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- the present application further provides a computer program product.
- the computer program product includes a computer program, and when the computer program is executed by a processor, the following steps are implemented:
- At least one trained second fidelity network is combined with a first fidelity network to obtain a multi-fidelity network, and the multi-fidelity network is trained using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- The above-mentioned multi-fidelity network construction method, apparatus, computer device, storage medium and computer program product for nuclear reactor simulation testing obtain a first fidelity network according to the first fidelity data of the sample nuclear reactor and at least one second fidelity network according to the second fidelity data of the sample nuclear reactor; use the second fidelity data to train the at least one second fidelity network to obtain at least one trained second fidelity network; and combine the at least one trained second fidelity network with the first fidelity network to obtain a multi-fidelity network, which is trained using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to simulate the target nuclear reactor.
- the second fidelity network can first obtain a reference simulation result according to the input parameters, and input the input parameters and the simulation result into the first fidelity network together, and the first fidelity network can output the final simulation result according to the coupling between different fidelity data.
- FIG1 is a schematic diagram of a flow chart of a method for constructing a multi-fidelity network for nuclear reactor simulation testing in one embodiment
- FIG2 is a schematic diagram of the structure of a second fidelity network in one embodiment
- FIG3 is a schematic diagram of the structure of a multi-fidelity network in one embodiment
- FIG4 is a schematic diagram of a generative adversarial network training process in another embodiment
- FIG5 is a structural block diagram of a multi-fidelity network construction device for nuclear reactor simulation testing in one embodiment
- FIG. 6 is a diagram showing the internal structure of a computer device in one embodiment.
- a multi-fidelity network construction method for nuclear reactor simulation testing is provided.
- In this embodiment, the method is described by taking its application to a computer device as an example.
- the computer device may be a terminal or a server.
- the terminal may be, but is not limited to, various industrial computers.
- the server may be, for example, a computer.
- the method can be implemented by a separate server or a server cluster composed of multiple servers. In this embodiment, the method includes the following steps:
- Step 102: acquire a first fidelity network according to first fidelity data of a sample nuclear reactor, and acquire at least one second fidelity network according to second fidelity data of the sample nuclear reactor.
- the first fidelity data refers to high-fidelity data, which can be generated by high-fidelity software
- the second fidelity data refers to low-fidelity data with lower data accuracy than the first fidelity data, which can be quickly generated by low-fidelity software.
- Both the first fidelity data and the second fidelity data include multiple input and output control parameters and state parameters of the sample nuclear reactor, including: 1) reactivity parameters, such as control rod positions; 2) power parameters, such as power level and power distribution; 3) nuclide density parameters, such as the nuclide density at each axial height of each component (including fissile nuclides, minor actinide nuclides, light nuclides, etc.); 4) macroscopic or microscopic reaction cross sections; 5) thermal parameters, such as coolant temperature, pressure and flow, and fuel or material temperature.
- reactivity parameters and power parameters can be used as input parameters, and other state parameters as output parameters.
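- As a non-limiting illustration of how the parameters listed above can be organized for network training, the following Python sketch flattens the input parameters (reactivity and power parameters) and the remaining state parameters into input and output vectors; the field names and ordering are assumptions made here for illustration and are not specified by the application.

```python
import numpy as np

def build_io_vectors(sample: dict) -> tuple[np.ndarray, np.ndarray]:
    """Flatten one reactor data sample into network input/output vectors (illustrative layout)."""
    # Inputs: reactivity parameters (e.g. control rod positions) and power parameters.
    x = np.concatenate([
        np.atleast_1d(sample["rod_positions"]),
        np.atleast_1d(sample["power_level"]),
        np.ravel(sample["power_distribution"]),
    ])
    # Outputs: the remaining state parameters (nuclide densities, cross sections, thermal parameters).
    y = np.concatenate([
        np.ravel(sample["nuclide_densities"]),
        np.ravel(sample["cross_sections"]),
        np.atleast_1d(sample["coolant_temperature"]),
        np.atleast_1d(sample["fuel_temperature"]),
    ])
    return x, y
```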
- a suitable neural network is selected as the first fidelity network.
- another one or more suitable neural networks are selected as the second fidelity networks.
- Step 104: use the second fidelity data to train at least one second fidelity network to obtain at least one trained second fidelity network.
- all second fidelity data can be directly used to train a second fidelity network to obtain a trained second fidelity network.
- Alternatively, the second fidelity data can first be differentiated by accuracy: each group of data in the second fidelity data is scored for accuracy according to the state characteristics of the sample nuclear reactor, all data are then divided into multiple fidelity levels according to the accuracy score, the same number of second fidelity networks as fidelity levels are prepared, and each second fidelity network is trained using the data of one fidelity level.
- For example, the second fidelity data is divided into two parts according to accuracy: the data with higher accuracy is used as medium fidelity data, and the data with lower accuracy is used as low fidelity data.
- Two second fidelity networks are prepared: a medium fidelity network and a low fidelity network.
- the medium fidelity data is used to train the medium fidelity network to obtain a trained medium fidelity network
- the low fidelity data is used to train the low fidelity network to obtain a trained low fidelity network.
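- The following PyTorch sketch illustrates one possible realization of Step 104: the second fidelity data is split by an accuracy score into medium- and low-fidelity sub-data, and one second fidelity network is fitted per level. The network sizes, the score threshold of 0.5 and the training schedule are illustrative assumptions, not values prescribed by the application.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def make_mlp(n_in: int, n_out: int, width: int = 128, depth: int = 4) -> nn.Module:
    """Plain fully connected network used as a second fidelity network (sizes are illustrative)."""
    layers, d = [], n_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, n_out))
    return nn.Sequential(*layers)

def split_by_accuracy(x, y, score, threshold=0.5):
    """Divide the second fidelity data into (medium, low) sub-data by accuracy score."""
    hi = score >= threshold
    return (x[hi], y[hi]), (x[~hi], y[~hi])

def train_fidelity_net(net, x, y, epochs=200, lr=1e-3):
    """Fit one second fidelity network on the sub-data of its fidelity level."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(x, y), batch_size=256, shuffle=True)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(net(xb), yb).backward()
            opt.step()
    return net

# Usage sketch (x2, y2, score are hypothetical tensors of second fidelity data):
# (x_mid, y_mid), (x_low, y_low) = split_by_accuracy(x2, y2, score)
# mid_net = train_fidelity_net(make_mlp(x2.shape[1], y2.shape[1]), x_mid, y_mid)
# low_net = train_fidelity_net(make_mlp(x2.shape[1], y2.shape[1]), x_low, y_low)
```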
- Step 106: combine at least one trained second fidelity network with the first fidelity network to obtain a multi-fidelity network, and use the first fidelity data to train the multi-fidelity network to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on the target nuclear reactor.
- each trained second fidelity network is connected to the input end of the first fidelity network; the input end of the first fidelity network and the input end of each trained second fidelity network are used as the input end of the multi-fidelity network, and the output end of the first fidelity network is used as the output end of the multi-fidelity network to obtain a multi-fidelity network.
- the input data is first input into each trained second fidelity network, and each trained second fidelity network outputs its own simulation results. These simulation results are used as reference data and input into the first fidelity network together with the input data.
- The first fidelity network processes the input data in combination with the reference data and outputs simulation data; the simulation data is compared with the label data corresponding to the input data, and the weight parameters of the first fidelity network are adjusted accordingly.
- When training converges, a trained multi-fidelity network is obtained.
- During this training, the trained second fidelity networks are not adjusted; that is, the first fidelity data is used only to train the first fidelity network.
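- A minimal sketch of this combination, assuming PyTorch and the two second fidelity networks (low- and medium-fidelity) used in the examples below: the reference outputs of the frozen second fidelity networks are concatenated with the raw input and fed to the first fidelity (high-fidelity) part, which is the only part updated by the first fidelity data. Layer sizes are illustrative assumptions.

```python
import torch
from torch import nn

class MultiFidelityNet(nn.Module):
    """High-fidelity network fed with the raw input plus the reference outputs of frozen lower-fidelity networks."""

    def __init__(self, low_net: nn.Module, mid_net: nn.Module,
                 n_in: int, n_out: int, width: int = 256, depth: int = 5):
        super().__init__()
        self.low_net, self.mid_net = low_net, mid_net
        # Freeze the trained second fidelity networks: only the first fidelity part is trained.
        for p in self.low_net.parameters():
            p.requires_grad = False
        for p in self.mid_net.parameters():
            p.requires_grad = False
        layers, d = [], n_in + 2 * n_out  # raw input plus two reference outputs
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, n_out))
        self.high_net = nn.Sequential(*layers)

    def forward(self, x_high: torch.Tensor) -> torch.Tensor:
        y_low = self.low_net(x_high)   # reference result from the low-fidelity network
        y_mid = self.mid_net(x_high)   # reference result from the medium-fidelity network
        return self.high_net(torch.cat([x_high, y_low, y_mid], dim=-1))
```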
- control parameters and current state parameters of the target nuclear reactor are obtained, and then the control parameters and current state parameters are input into a trained multi-fidelity network.
- the multi-fidelity network can output predicted state parameters, and the predicted state parameters can characterize the changes in the state parameters of the target nuclear reactor under the influence of the control parameters. Based on the predicted state parameters, the target nuclear reactor can be simulated and tested to determine the optimal control method and control parameters for the target nuclear reactor.
- a first fidelity network is obtained according to the first fidelity data of the sample nuclear reactor, and at least one second fidelity network is obtained according to the second fidelity data of the sample nuclear reactor.
- The second fidelity data is used to train at least one second fidelity network to obtain at least one trained second fidelity network; at least one trained second fidelity network is combined with the first fidelity network to obtain a multi-fidelity network, and the first fidelity data is used to train the multi-fidelity network to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to simulate and test the target nuclear reactor.
- When the target nuclear reactor is simulated through the multi-fidelity network, the second fidelity network can first obtain a reference simulation result based on the input parameters, and the input parameters and this reference result are input into the first fidelity network together.
- the first fidelity network can output the final simulation result based on the coupling between different fidelity data, thereby improving the simulation efficiency while ensuring the simulation accuracy.
- Before acquiring at least one second fidelity network based on the second fidelity data of the sample nuclear reactor, the method further includes: acquiring multiple data sets of the sample nuclear reactor; determining the accuracy of each data set, and using the data in the data set with the highest accuracy as the first fidelity data; and using the data in the multiple data sets other than the first fidelity data as the second fidelity data.
- The first fidelity data and second fidelity data of the sample nuclear reactor can be generated by a variety of software with different fidelity; this software derives and calculates the data from mathematical-physics equations.
- the measured data under some operating conditions can also be obtained manually.
- the measured data itself can exist as the highest level of fidelity or as low-fidelity data, depending on the nature of data acquisition and data acquisition conditions. Therefore, it is necessary to divide these data into fidelity levels, with high-fidelity data as first-fidelity data and the rest of the data as second-fidelity data.
- the data format needs to be unified between data of different fidelity levels.
- The state space parameters of the sample nuclear reactor are first defined, and high-fidelity software is used to generate the high-fidelity data set. High-fidelity simulation software has high calculation accuracy but low calculation efficiency; for example, the calculation of a typical reactor state point takes about several minutes or hours, so high-fidelity data is relatively scarce. Assume that the input of the high-fidelity simulation software is x_high and the output is y_high, giving the high-fidelity input-output pairs (x_high, y_high). Since the reactor process can essentially be regarded as a Markov process, its input state space and output state space can be essentially consistent. Similarly, the medium-fidelity input-output pairs (x_mid, y_mid) and the low-fidelity input-output pairs (x_low, y_low) are constructed.
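- A short sketch of assembling such input-output pairs: treating each simulated trajectory as a Markov process, the input is the state and control at the current moment and the output is the state at the next moment. The array layout is an assumption for illustration; the same routine applies to high-, medium- and low-fidelity trajectories.

```python
import numpy as np

def make_pairs(states: np.ndarray, controls: np.ndarray):
    """states: (T, n_state), controls: (T, n_ctrl) -> inputs (T-1, n_state + n_ctrl), outputs (T-1, n_state)."""
    x = np.concatenate([states[:-1], controls[:-1]], axis=1)  # state + control at moment t
    y = states[1:]                                            # state at moment t + 1
    return x, y

# Applied to trajectories generated by software of each fidelity, this yields
# (x_high, y_high), (x_mid, y_mid) and (x_low, y_low) in a common data format.
```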
- At least one second fidelity network is obtained based on second fidelity data of a sample nuclear reactor, including: classifying the second fidelity data into levels to obtain at least one fidelity level and sub-data corresponding to each fidelity level; obtaining a corresponding second fidelity network based on the sub-data corresponding to each fidelity level; the number of second fidelity networks is the same as the number of fidelity levels.
- the second fidelity data is used to train at least one second fidelity network to obtain at least one trained second fidelity network, including: using sub-data corresponding to each fidelity level to train each second fidelity network to obtain at least one trained second fidelity network.
- A standard neural network is selected to perform fitting training on the input and output, i.e. to approximate the mappings from x_low to y_low and from x_mid to y_mid. As shown in Figure 2, the number of layers (depth) m of the neural network, the number of nodes (width) n of each layer, the activation function (ReLU or LeakyReLU), the learning rate, the optimization algorithm (Adam or SGD), etc. are the hyperparameters of neural network training; they can be set according to the specific problem or determined by a hyperparameter optimization (HPO) algorithm.
- the methods of building and training neural networks can be selected according to the data type, training requirements, etc., which will not be described here.
- Some deep learning open source platforms can be used, including PaddlePaddle, Pytorch, Tensorflow, etc., which can easily implement an optimized neural network model to characterize the intrinsic relationship of low-fidelity data or medium-fidelity data.
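- The hyperparameters named above (depth m, width n, activation function, learning rate, optimizer) can, for example, be explored with a simple random search; the search space and strategy below are illustrative stand-ins for the HPO procedure mentioned, not a prescribed configuration.

```python
import random
import torch
from torch import nn

# Illustrative hyperparameter search space (values are assumptions).
SEARCH_SPACE = {
    "depth_m": [3, 4, 5, 6],
    "width_n": [64, 128, 256],
    "activation": [nn.ReLU, nn.LeakyReLU],
    "lr": [1e-2, 1e-3, 1e-4],
    "optimizer": ["adam", "sgd"],
}

def sample_config() -> dict:
    """Draw one random hyperparameter configuration."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build_net(n_in: int, n_out: int, cfg: dict) -> nn.Module:
    """Build an MLP with the sampled depth, width and activation."""
    layers, d = [], n_in
    for _ in range(cfg["depth_m"]):
        layers += [nn.Linear(d, cfg["width_n"]), cfg["activation"]()]
        d = cfg["width_n"]
    layers.append(nn.Linear(d, n_out))
    return nn.Sequential(*layers)

def make_optimizer(net: nn.Module, cfg: dict):
    """Instantiate Adam or SGD with the sampled learning rate."""
    if cfg["optimizer"] == "adam":
        return torch.optim.Adam(net.parameters(), lr=cfg["lr"])
    return torch.optim.SGD(net.parameters(), lr=cfg["lr"])
```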
- By classifying the second fidelity data, at least one fidelity level and the sub-data corresponding to each fidelity level are obtained; the corresponding second fidelity network is obtained according to the sub-data of each fidelity level, with the number of second fidelity networks equal to the number of fidelity levels; and each second fidelity network is trained with the sub-data of its own fidelity level, so that multiple trained second fidelity networks can be obtained.
- At least one trained second fidelity network is combined with a first fidelity network to obtain a multi-fidelity network, including: connecting the output of each trained second fidelity network to one of the inputs of the first fidelity network; using the input of the first fidelity network and the input of each trained second fidelity network as the input of the multi-fidelity network, and using the output of the first fidelity network as the output of the multi-fidelity network to obtain the multi-fidelity network.
- Take the case where the first fidelity network is a high-fidelity network and the second fidelity networks include a medium-fidelity network and a low-fidelity network as an example.
- Since the low-fidelity network has a large amount of training data while the amount of high-fidelity data is relatively small, directly training on the high-fidelity data alone would lead to a large overfitting error. Therefore, the high-fidelity input data x_high is input into the already trained low-fidelity network, medium-fidelity network, etc., to obtain the outputs of these non-high-fidelity levels. The difference between these outputs and the true y_high characterizes the error of the low-fidelity (or medium-fidelity) network when extended to the high-fidelity data.
- The first fidelity network is then constructed; as shown in FIG3, the output of the low-fidelity network and the output of the medium-fidelity network are mapped to inputs of the high-fidelity network, and the multi-fidelity network is obtained by combining them.
- The depth m_high and width n_high of the multi-fidelity network, as well as the activation function, training hyperparameters, etc., need to be determined according to the state parameters of the specific nuclear reactor and are not repeated here.
- each trained second fidelity network is connected to one of the input ends of the first fidelity network; the input end of the first fidelity network and the input end of each trained second fidelity network are used as the input end of the multi-fidelity network, and the output end of the first fidelity network is used as the output end of the multi-fidelity network to obtain a multi-fidelity network.
- the multi-fidelity network can combine the output results of different fidelity networks to obtain the final simulation result, and the second fidelity network part in the multi-fidelity network is obtained by training with a large amount of low-fidelity data, which can greatly improve the computational efficiency and accuracy of training to meet real-time requirements.
- a multi-fidelity network is trained using first fidelity data to obtain a trained multi-fidelity network, including: using the multi-fidelity network as a generator network, and obtaining a corresponding discriminator network based on the generator network; constructing a generative adversarial network based on the generator network and the discriminator network; training the generative adversarial network using the first fidelity data to obtain a trained generative adversarial network; and obtaining a trained generator network from the trained generative adversarial network as a trained multi-fidelity network.
- Taking the case where the first fidelity network is a high-fidelity network and the second fidelity networks include a medium-fidelity network and a low-fidelity network as an example, as shown in FIG3, the generator network outputs a predicted reactor state label ŷ_high. Traditional neural network training directly uses the deviation between ŷ_high and y_high, such as the L1 loss L = |ŷ_high - y_high|, and trains the network by minimizing this deviation directly.
- The adversarial training method is used to improve the robustness of the neural network to deceptive adversarial samples.
- the basic idea of adversarial training is to continuously generate and learn adversarial samples during network training.
- The goal is that the reactor state label generated by the generator G(X) can deceive the discriminator, that is, be consistent with the real reactor state label; the output of the discriminator D(Y) is the probability that Y is judged to be a true label.
- the discriminator's network structure, learning rate and other hyperparameters are adjusted according to the specific actual scenario.
- a multi-fidelity network is used as a generator network, and a corresponding discriminator network is obtained according to the generator network; a generative adversarial network is constructed according to the generator network and the discriminator network; the generative adversarial network is trained using the first fidelity data to obtain a trained generative adversarial network; and a trained generator network is obtained from the trained generative adversarial network as a trained multi-fidelity network.
- a trained multi-fidelity network can be obtained, and the trained multi-fidelity network can combine the output results of different fidelity networks to obtain simulation results corresponding to the input data.
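- A hedged sketch of this adversarial training loop: the multi-fidelity network acts as the generator G(X), a discriminator D(Y) scores whether a reactor state label is real, and the generator is trained with an L1 term plus an adversarial term. The discriminator architecture, the loss weight and the training schedule are assumptions; only the generator is kept afterwards as the trained multi-fidelity network.

```python
import torch
from torch import nn

def train_gan(generator: nn.Module, discriminator: nn.Module,
              x_high: torch.Tensor, y_high: torch.Tensor,
              epochs: int = 100, lr: float = 1e-4, lambda_adv: float = 0.1) -> nn.Module:
    """Adversarial training of the multi-fidelity generator on the first fidelity data (sketch)."""
    g_opt = torch.optim.Adam([p for p in generator.parameters() if p.requires_grad], lr=lr)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    real = torch.ones(len(x_high), 1)
    fake = torch.zeros(len(x_high), 1)
    for _ in range(epochs):
        # 1) Update the discriminator D(Y): separate true labels from generated ones.
        y_fake = generator(x_high).detach()
        d_loss = bce(discriminator(y_high), real) + bce(discriminator(y_fake), fake)
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        # 2) Update the generator G(X): fit y_high (L1 term) and try to deceive the discriminator.
        y_pred = generator(x_high)
        g_loss = l1(y_pred, y_high) + lambda_adv * bce(discriminator(y_pred), real)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return generator  # the trained multi-fidelity network
```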
- The trained multi-fidelity network is used to perform simulation testing on the target nuclear reactor; that is, the target nuclear reactor can be simulated and tested through the trained multi-fidelity network to simulate the state change of the target reactor.
- the step of performing simulation testing on the target nuclear reactor through the trained multi-fidelity network includes: obtaining control parameters and state parameters of the target nuclear reactor at a first moment; inputting the control parameters and the state parameters of the first moment into the trained multi-fidelity network to obtain the state parameters of the target nuclear reactor at a second moment; and obtaining the simulation test results of the target nuclear reactor based on the state parameters at the second moment.
- The multi-fidelity network obtained by combining the low-fidelity network, the medium-fidelity network and the high-fidelity network can reproduce the change of the reactor state.
- The input of the multi-fidelity network is x_high, which includes the control parameters of the target nuclear reactor and the state parameters at the first moment.
- The low-fidelity network and the medium-fidelity network first process the input data x_high and each outputs its respective reference result.
- control parameters and the state parameters of the target nuclear reactor at the first moment are obtained; the control parameters and the state parameters of the first moment are input into the trained multi-fidelity network to obtain the state parameters of the target nuclear reactor at the second moment; and the simulation test results of the target nuclear reactor are obtained based on the state parameters at the second moment.
- the final simulation results can be output according to the coupling between different fidelity data, thereby improving the simulation efficiency while ensuring the simulation accuracy.
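- A minimal inference sketch, assuming the trained multi-fidelity network maps the control parameters concatenated with the state at one moment to the state at the next moment; the tensor layout and the rollout over a control sequence are assumptions for illustration.

```python
import torch

@torch.no_grad()
def simulate(multi_fidelity_net: torch.nn.Module, state_t1: torch.Tensor,
             control_sequence: list) -> list:
    """Roll the trained multi-fidelity network forward from the first-moment state."""
    states = [state_t1]
    for controls in control_sequence:
        x_high = torch.cat([controls, states[-1]], dim=-1)  # control + state at the current moment
        states.append(multi_fidelity_net(x_high))           # predicted state at the next moment
    return states[1:]  # the predicted state parameters form the simulation test result
```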
- In one embodiment, a complete flow of the multi-fidelity network construction method for nuclear reactor simulation testing comprises:
- Acquire multiple data sets of a sample nuclear reactor; determine the accuracy of each data set, and use the data in the data set with the highest accuracy as first fidelity data; use the data in the multiple data sets other than the first fidelity data as second fidelity data.
- a first fidelity network is obtained according to the first fidelity data of the sample nuclear reactor.
- the second fidelity data is graded to obtain at least one fidelity grade and sub-data corresponding to each fidelity grade; a corresponding second fidelity network is obtained according to the sub-data corresponding to each fidelity grade; the number of the second fidelity networks is the same as the number of the fidelity grades.
- Each second fidelity network is trained using the sub-data corresponding to each fidelity level to obtain at least one trained second fidelity network.
- each trained second fidelity network is connected to one of the input ends of the first fidelity network; the input end of the first fidelity network and the input end of each trained second fidelity network are used as the input end of the multi-fidelity network, and the output end of the first fidelity network is used as the output end of the multi-fidelity network to obtain a multi-fidelity network.
- the multi-fidelity network is used as a generator network, and the corresponding discriminator network is obtained according to the generator network; a generative adversarial network is constructed according to the generator network and the discriminator network; the generative adversarial network is trained using the first fidelity data to obtain a trained generative adversarial network; the trained generator network is obtained from the trained generative adversarial network as a trained multi-fidelity network, and the trained multi-fidelity network is used to perform simulation tests on a target nuclear reactor.
- the specific steps of performing simulation test on the target nuclear reactor through the trained multi-fidelity network include: obtaining control parameters and first-time state parameters of the target nuclear reactor; inputting the control parameters and first-time state parameters into the trained multi-fidelity network to obtain second-time state parameters of the target nuclear reactor; and obtaining simulation test results of the target nuclear reactor based on the second-time state parameters.
- Steps in the flowcharts involved in the above embodiments can include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time but can be executed at different times, and their execution order is not necessarily sequential; they can be executed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
- the embodiment of the present application also provides a multi-fidelity network construction device for nuclear reactor simulation testing, which is used to implement the multi-fidelity network construction method for nuclear reactor simulation testing involved above.
- The implementation scheme for solving the problem provided by the device is similar to that recorded in the above method; therefore, for the specific limitations of one or more embodiments of the multi-fidelity network construction device for nuclear reactor simulation testing provided below, reference can be made to the above limitations of the multi-fidelity network construction method for nuclear reactor simulation testing, which will not be repeated here.
- the device 500 includes: an acquisition module 501, a training module 502 and a combination module 503, wherein:
- the acquisition module 501 is used to acquire a first fidelity network according to the first fidelity data of the sample nuclear reactor, and to acquire at least one second fidelity network according to the second fidelity data of the sample nuclear reactor.
- the training module 502 is used to train at least one second fidelity network using the second fidelity data to obtain at least one trained second fidelity network.
- the combination module 503 is used to combine at least one trained second fidelity network with the first fidelity network to obtain a multi-fidelity network, and use the first fidelity data to train the multi-fidelity network to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- the acquisition module 501 is also used to acquire multiple data sets of a sample nuclear reactor; determine the accuracy of each data set, and use the data in a data set with the highest accuracy as first fidelity data; and use the data in multiple data sets except the first fidelity data as second fidelity data.
- the acquisition module 501 is further used to grade the second fidelity data to obtain at least one fidelity grade and sub-data corresponding to each fidelity grade; obtain the corresponding second fidelity network according to the sub-data corresponding to each fidelity grade; the number of second fidelity networks is the same as the number of fidelity grades.
- the training module 502 is further configured to respectively use the sub-data corresponding to each fidelity level to train each second fidelity network to obtain at least one trained second fidelity network.
- the combination module 503 is also used to connect the output end of each trained second fidelity network to one of the input ends of the first fidelity network; the input end of the first fidelity network and the input end of each trained second fidelity network are used as the input end of the multi-fidelity network, and the output end of the first fidelity network is used as the output end of the multi-fidelity network to obtain a multi-fidelity network.
- the combination module 503 is also used to use the multi-fidelity network as a generator network, and obtain a corresponding discriminator network based on the generator network; construct a generative adversarial network based on the generator network and the discriminator network; use the first fidelity data to train the generative adversarial network to obtain a trained generative adversarial network; obtain a trained generator network from the trained generative adversarial network as a trained multi-fidelity network.
- the apparatus further comprises:
- the test module 504 is used to obtain the control parameters and the state parameters of the target nuclear reactor at the first moment; input the control parameters and the state parameters of the first moment into the trained multi-fidelity network to obtain the state parameters of the target nuclear reactor at the second moment; and obtain the simulation test results of the target nuclear reactor based on the state parameters at the second moment.
- Each module in the multi-fidelity network construction device for nuclear reactor simulation testing can be implemented in whole or in part by software, hardware, or a combination thereof.
- Each module can be embedded in or independent of a processor in a computer device in the form of hardware, or can be stored in a memory in a computer device in the form of software, so that the processor can call and execute operations corresponding to each module.
- In one embodiment, a computer device is provided, which may be a server; its internal structure diagram may be as shown in FIG6.
- the computer device includes a processor, a memory, an input/output interface (Input/Output, referred to as I/O) and a communication interface.
- the processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface.
- the processor of the computer device is used to provide computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system, a computer program and a database.
- the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium.
- the database of the computer device is used to store neural network data.
- the input/output interface of the computer device is used to exchange information between the processor and an external device.
- the communication interface of the computer device is used to communicate with an external terminal through a network connection.
- FIG. 6 is merely a block diagram of a partial structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
- the specific computer device may include more or fewer components than those shown in the figure, or combine certain components, or have a different arrangement of components.
- a computer device including a memory and a processor, wherein the memory stores a computer program
- When the processor executes the computer program, the following steps are implemented: obtaining a first fidelity network according to first fidelity data of a sample nuclear reactor, and obtaining at least one second fidelity network according to second fidelity data of the sample nuclear reactor; training at least one second fidelity network using the second fidelity data to obtain at least one trained second fidelity network; combining at least one trained second fidelity network with the first fidelity network to obtain a multi-fidelity network, and training the multi-fidelity network using the first fidelity data to obtain a trained multi-fidelity network; and using the trained multi-fidelity network to perform simulation testing on a target nuclear reactor.
- When the processor executes the computer program, the following steps are also implemented: acquiring multiple data sets of a sample nuclear reactor; determining the accuracy of each data set, and using the data in the data set with the highest accuracy as first fidelity data; and using the data in the multiple data sets other than the first fidelity data as second fidelity data.
- the second fidelity data is graded to obtain at least one fidelity level and sub-data corresponding to each fidelity level; a corresponding second fidelity network is obtained according to the sub-data corresponding to each fidelity level; the number of second fidelity networks is the same as the number of fidelity levels.
- When the processor executes the computer program, the following steps are also implemented: the output end of each trained second fidelity network is connected to one of the input ends of the first fidelity network; the input end of the first fidelity network and the input ends of each trained second fidelity network are used as the input end of the multi-fidelity network, and the output end of the first fidelity network is used as the output end of the multi-fidelity network to obtain the multi-fidelity network.
- When the processor executes the computer program, the following steps are also implemented: obtaining control parameters and first-moment state parameters of the target nuclear reactor; inputting the control parameters and first-moment state parameters into the trained multi-fidelity network to obtain second-moment state parameters of the target nuclear reactor; and obtaining simulation test results of the target nuclear reactor based on the second-moment state parameters.
- a computer-readable storage medium on which a computer program is stored.
- the computer program is executed by a processor, the following steps are implemented: a first fidelity network is obtained based on first fidelity data of a sample nuclear reactor, and at least one second fidelity network is obtained based on second fidelity data of the sample nuclear reactor; at least one second fidelity network is trained using the second fidelity data to obtain at least one trained second fidelity network; at least one trained second fidelity network is combined with the first fidelity network to obtain a multi-fidelity network, and the multi-fidelity network is trained using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- the second fidelity data is graded to obtain at least one fidelity level and sub-data corresponding to each fidelity level; a corresponding second fidelity network is obtained according to the sub-data corresponding to each fidelity level; the number of second fidelity networks is the same as the number of fidelity levels.
- the following steps are further implemented: using the sub-data corresponding to each fidelity level to train each second fidelity network to obtain at least one trained second fidelity network.
- the following steps are also implemented: obtaining control parameters and first-time state parameters of the target nuclear reactor; inputting the control parameters and first-time state parameters into a trained multi-fidelity network to obtain second-time state parameters of the target nuclear reactor; and obtaining simulation test results of the target nuclear reactor based on the second-time state parameters.
- a computer program product comprising a computer program, which, when executed by a processor, implements the following steps:
- a first fidelity network is obtained according to first fidelity data of a sample nuclear reactor, and at least one second fidelity network is obtained according to second fidelity data of the sample nuclear reactor; at least one second fidelity network is trained using the second fidelity data to obtain at least one trained second fidelity network; at least one trained second fidelity network is combined with the first fidelity network to obtain a multi-fidelity network, and the multi-fidelity network is trained using the first fidelity data to obtain a trained multi-fidelity network; the trained multi-fidelity network is used to perform simulation testing on a target nuclear reactor.
- the following steps are also implemented: acquiring multiple data sets of a sample nuclear reactor; determining the accuracy of each data set, and using the data in a data set with the highest accuracy as first fidelity data; and using the data in multiple data sets, except the first fidelity data, as second fidelity data.
- the second fidelity data is graded to obtain at least one fidelity level and sub-data corresponding to each fidelity level; a corresponding second fidelity network is obtained according to the sub-data corresponding to each fidelity level; the number of second fidelity networks is the same as the number of fidelity levels.
- the following steps are further implemented: using the sub-data corresponding to each fidelity level to train each second fidelity network to obtain at least one trained second fidelity network.
- the output end of each trained second fidelity network is connected to one of the input ends of the first fidelity network; the input end of the first fidelity network and the input end of each trained second fidelity network are used as the input end of the multi-fidelity network, and the output end of the first fidelity network is used as the output end of the multi-fidelity network to obtain the multi-fidelity network.
- the following steps are also implemented: using the multi-fidelity network as a generator network, and obtaining a corresponding discriminator network based on the generator network; constructing a generative adversarial network based on the generator network and the discriminator network; using the first fidelity data to train the generative adversarial network to obtain a trained generative adversarial network; obtaining a trained generator network from the trained generative adversarial network as a trained multi-fidelity network.
- the following steps are also implemented: obtaining control parameters and first-time state parameters of the target nuclear reactor; inputting the control parameters and first-time state parameters into a trained multi-fidelity network to obtain second-time state parameters of the target nuclear reactor; and obtaining simulation test results of the target nuclear reactor based on the second-time state parameters.
- user information including but not limited to user device information, user personal information, etc.
- data including but not limited to data used for analysis, stored data, displayed data, etc.
- Any reference to the memory, database or other media used in the embodiments provided in this application can include at least one of volatile and non-volatile memory, for example a non-volatile computer-readable storage medium.
- Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
- Volatile memory may include random access memory (RAM) or external cache memory, etc.
- RAM may be in various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
- SRAM static random access memory
- DRAM dynamic random access memory
- the database involved in each embodiment provided in this application may include at least one of a relational database and a non-relational database.
- Non-relational databases may include distributed databases based on blockchains, etc., but are not limited thereto.
- the processor involved in each embodiment provided in this application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, etc., but is not limited thereto.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Geometry (AREA)
- Medical Informatics (AREA)
- Computer Hardware Design (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Monitoring And Testing Of Nuclear Reactors (AREA)
Abstract
The present application relates to a method and apparatus for constructing a multi-fidelity network for nuclear reactor simulation testing. The method comprises: acquiring a first fidelity network according to first fidelity data of a sample nuclear reactor, and acquiring at least one second fidelity network according to second fidelity data of the sample nuclear reactor (102); training the at least one second fidelity network using the second fidelity data to obtain at least one trained second fidelity network (104); and combining the at least one trained second fidelity network with the first fidelity network to obtain a multi-fidelity network, and training the multi-fidelity network using the first fidelity data to obtain a trained multi-fidelity network, the trained multi-fidelity network being used to perform simulation testing on a target nuclear reactor (106). Using this method, a final simulation result can be output according to the coupling between different fidelity data, which improves simulation efficiency while ensuring simulation accuracy.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211447961.8A CN115859783A (zh) | 2022-11-18 | 2022-11-18 | 用于核反应堆仿真测试的多保真度网络构建方法和装置 |
CN202211447961.8 | 2022-11-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024104485A1 true WO2024104485A1 (fr) | 2024-05-23 |
Family
ID=85664126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/132550 WO2024104485A1 (fr) | 2022-11-18 | 2023-11-20 | Procédé et appareil de construction de réseau multi-fidélité pour test de simulation de réacteur nucléaire |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115859783A (fr) |
WO (1) | WO2024104485A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115859783A (zh) * | 2022-11-18 | 2023-03-28 | 中广核研究院有限公司 | 用于核反应堆仿真测试的多保真度网络构建方法和装置 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015037296A1 (fr) * | 2013-09-11 | 2015-03-19 | 株式会社日立製作所 | Dispositif d'analyse de système |
CN111178535A (zh) * | 2018-11-12 | 2020-05-19 | 第四范式(北京)技术有限公司 | 实现自动机器学习的方法和装置 |
CN112182938A (zh) * | 2020-10-13 | 2021-01-05 | 上海交通大学 | 基于迁移学习-多保真度建模的介观结构件力学性能预测方法 |
US20210391832A1 (en) * | 2020-06-12 | 2021-12-16 | Nokia Technologies Oy | Machine learning based digital pre-distortion for power amplifiers |
CN113886992A (zh) * | 2021-10-21 | 2022-01-04 | 大连理工大学 | 一种基于多保真度数据的数字孪生建模方法 |
CN114676522A (zh) * | 2022-03-28 | 2022-06-28 | 西安交通大学 | 融合gan和迁移学习的气动形状优化设计方法及系统及设备 |
US20220207218A1 (en) * | 2019-06-24 | 2022-06-30 | Nanyang Technological University | Machine learning techniques for estimating mechanical properties of materials |
CN115859783A (zh) * | 2022-11-18 | 2023-03-28 | 中广核研究院有限公司 | 用于核反应堆仿真测试的多保真度网络构建方法和装置 |
-
2022
- 2022-11-18 CN CN202211447961.8A patent/CN115859783A/zh active Pending
-
2023
- 2023-11-20 WO PCT/CN2023/132550 patent/WO2024104485A1/fr unknown
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015037296A1 (fr) * | 2013-09-11 | 2015-03-19 | 株式会社日立製作所 | Dispositif d'analyse de système |
CN111178535A (zh) * | 2018-11-12 | 2020-05-19 | 第四范式(北京)技术有限公司 | 实现自动机器学习的方法和装置 |
US20220207218A1 (en) * | 2019-06-24 | 2022-06-30 | Nanyang Technological University | Machine learning techniques for estimating mechanical properties of materials |
US20210391832A1 (en) * | 2020-06-12 | 2021-12-16 | Nokia Technologies Oy | Machine learning based digital pre-distortion for power amplifiers |
CN112182938A (zh) * | 2020-10-13 | 2021-01-05 | 上海交通大学 | 基于迁移学习-多保真度建模的介观结构件力学性能预测方法 |
CN113886992A (zh) * | 2021-10-21 | 2022-01-04 | 大连理工大学 | 一种基于多保真度数据的数字孪生建模方法 |
CN114676522A (zh) * | 2022-03-28 | 2022-06-28 | 西安交通大学 | 融合gan和迁移学习的气动形状优化设计方法及系统及设备 |
CN115859783A (zh) * | 2022-11-18 | 2023-03-28 | 中广核研究院有限公司 | 用于核反应堆仿真测试的多保真度网络构建方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
CN115859783A (zh) | 2023-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7562576B2 (ja) | 機械学習を用いた、迅速なデジタル原子炉設計 | |
KR20220124769A (ko) | 복잡한 다차원 제약 조건에 따른 고가의 비용 함수의 최적화 | |
WO2024104485A1 (fr) | Procédé et appareil de construction de réseau multi-fidélité pour test de simulation de réacteur nucléaire | |
JP2006189439A (ja) | 原子炉の炉心用未照射バンドル設計の決定方法 | |
WO2024193344A1 (fr) | Procédé et appareil de quantification de signaux d'indication d'un système d'instrumentation nucléaire, dispositif, et support de stockage | |
US20220253740A1 (en) | Systems and methods for simulating a quantum processor | |
CN114186405A (zh) | 一种核动力反应堆系统的参数不确定性分析方法及系统 | |
CN115147012B (zh) | 一种基于神经网络模型的碳排放量核算方法及装置 | |
CN112906272B (zh) | 一种反应堆稳态物理热工全耦合精细数值模拟方法及系统 | |
Liu et al. | A cumulative migration method for computing rigorous transport cross sections and diffusion coefficients for LWR lattices with Monte Carlo | |
WO2024016621A1 (fr) | Procédé et appareil de détermination d'échelle pour modèle d'essai de réacteur, et dispositif informatique | |
Xiang et al. | Controllability of weighted and directed networks with nonidentical node dynamics | |
Hoseyni et al. | A Bayesian ensemble of sensitivity measures for severe accident modeling | |
WO2024148895A1 (fr) | Procédé et appareil de prédiction d'informations de cœur de réacteur nucléaire, dispositif et support de stockage | |
Dinh et al. | Development and Application of a Data-Driven Methodology for Validation of Risk-Informed Safety Margin Characterization Models | |
El-Morshedy et al. | A New Probability Heavy‐Tail Model for Stochastic Modeling under Engineering Data | |
Bai et al. | Accelerating cluster dynamics simulation of fission gas behavior in nuclear fuel on deep computing unit–based heterogeneous architecture supercomputer | |
Prince et al. | Reduced Order Models Generation for HTGRs Pebble Shuffling Procedure Optimization Studies | |
O'Rourke | Modeling and Simulation of Stochastic Neutron and Cumulative Deposited Fission Energy Distributions | |
Athe | A framework for predictive capability maturity assessment of simulation codes | |
Spencer et al. | Use of Neutron Flux Calculated by Shift in a Grizzly Reactor Pressure Vessel Fracture Simulation | |
Ştefan et al. | Towards Testing Considerations Of Experimental Decision Support System Design | |
Dieudonne et al. | Depletion calculations based on perturbations. Application to the study of a REP-like assembly at beginning of cycle with TRIPOLI-4® | |
KR102700977B1 (ko) | 상호배타적인 분할사건을 고려한 원자력발전소의 지진사건 확률론적안전성평가에서의 노심손상빈도 산출 방법 및 장치 | |
Pounders et al. | The history-partitioning method for multigroup stochastic cross section generation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23890913 Country of ref document: EP Kind code of ref document: A1 |