CN115796269A - Atom cooling parameter online optimization method and device based on artificial neural network - Google Patents
- Publication number
- CN115796269A (application CN202310089181.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The application relates to an atom cooling parameter online optimization method and device based on an artificial neural network. The method comprises the following steps: an experiment parameter set is input into an atom cooling experiment apparatus to obtain the phase space density corresponding to each experiment parameter; the data pairs of experiment parameters and phase space densities are evolved with a differential evolution algorithm to obtain the next generation of pairs, until an evolution parameter set is output; the evolution parameter set is used as the training set and test set of the artificial neural network, and several trained artificial neural networks are obtained by training; the trained networks are optimized to obtain several optimal experiment parameters, which are then expanded and evaluated to obtain several optimal phase space densities; and the experiment parameters corresponding to the maximum of these densities are selected as this round's atom cooling parameter optimization result. The method enables online optimization of atom cooling parameters.
Description
Technical Field
The application relates to the technical field of atomic cooling, in particular to an atomic cooling parameter online optimization method and device based on an artificial neural network.
Background
Atomic cooling is an important supporting technology in fields such as quantum precision measurement, quantum information processing, and the preparation of Bose-Einstein condensates. The cooling process is influenced by many parameters. For polarization gradient cooling (PGC) of Rb atoms, for example, the experimental parameters to be optimized mainly include the gradient magnetic field, the compensation magnetic field, the rubidium source current, the cooling-light detuning, the cooling-light power, the loading time of the magneto-optical trap (MOT), the polarization gradient cooling time, the laser detuning during PGC, and the rate of change of laser power during PGC. The overall atomic cooling process is highly complex and strongly nonlinear.
At present, atomic cooling parameters are mostly optimized by manual adjustment or parameter-by-parameter scanning. Manual adjustment relies mainly on personal intuition and places high demands on the operator's experience, so the optimization is somewhat blind: an initial parameter combination is usually chosen from experience, and the optimal value is then searched parameter by parameter. Because the atomic cooling experiment is highly nonlinear, this approach also tends to become trapped in local optima. Parameter-by-parameter scanning can generally find the optimal parameters, but their accuracy depends on the scanning step of each parameter, and the amount of computation grows exponentially with the number of optimized parameters, making the workload enormous and the optimization inefficient. Some literature proposes atomic cooling parameter optimization schemes based on machine learning, but most are built on Gaussian process models and evolutionary algorithms and do not exploit the advantages of deep learning on large data sets. Other works propose deep-learning-based schemes, but most rely on historical data and run offline, so the trained models are difficult to update in real time; some of these schemes also cannot effectively avoid local optima, which limits the optimization effect.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an intelligent algorithm architecture and apparatus capable of efficiently optimizing atomic cooling parameters.
An atomic cooling parameter online optimization method based on an artificial neural network, the method comprising:
inputting a preset experiment parameter set into an atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
evolving the experiment parameter and the phase space density pair by using a differential evolution algorithm to obtain a next generation of data pair of the experiment parameter and the phase space density until an evolution parameter set consisting of a plurality of generations of the experiment parameter and the phase space density pair is output;
taking the evolution parameter set as a training set and a testing set of the artificial neural network, and training a plurality of artificial neural networks with the same structure by using the training set to obtain a plurality of trained artificial neural networks;
carrying out global optimization on the trained artificial neural network by utilizing a genetic algorithm to obtain a plurality of optimal experimental parameters;
expanding the optimal experiment parameters and inputting the expanded parameters into the atomic cooling experiment apparatus to obtain a plurality of optimal phase space densities; selecting the experiment parameters corresponding to the maximum of these densities as this round's atomic cooling parameter optimization result; and judging whether the optimization termination condition is met: if so, terminating the optimization process and taking the current atomic cooling parameters as the final optimization result; if not, supplementing this round's parameter set to the original parameter set, retraining the neural networks, and performing the next round of iteration until the termination condition is met.
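As a rough sketch of this outer loop (all function names here are illustrative placeholders supplied by the caller, not names from the patent):

```python
def optimize_cooling(run_experiment, init_params, train_networks, ga_search,
                     expand, terminated):
    """Outer loop: evolved data -> ANN ensemble -> GA search -> experiment check."""
    # seed the data set by running the initial parameter sets on the apparatus
    dataset = [(p, run_experiment(p)) for p in init_params]
    best = max(dataset, key=lambda pair: pair[1])
    while True:
        networks = train_networks(dataset)          # several same-structure ANNs
        candidates = [ga_search(net) for net in networks]
        expanded = expand(candidates)               # expand around the candidates
        results = [(p, run_experiment(p)) for p in expanded]
        round_best = max(results, key=lambda pair: pair[1])
        best = max(best, round_best, key=lambda pair: pair[1])
        if terminated(best, results):
            return best
        dataset += results                          # supplement and retrain next round
```

The loop keeps the parameter set whose measured phase space density is largest and only stops when the caller-supplied termination predicate fires.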
In one embodiment, the phase space density is

ρ = n · λ_dB³, with λ_dB = h / √(2π m k_B T),

where ρ represents the phase space density, λ_dB the thermal de Broglie wavelength, h the Planck constant, m the atomic mass, k_B the Boltzmann constant, T the atomic temperature, and n the atomic number density.
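The phase space density ρ = n·λ_dB³ can be evaluated numerically as below; the CODATA constant values and the Rb-87 example figures are illustrative assumptions, not values from the patent:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def thermal_de_broglie(mass_kg: float, temp_k: float) -> float:
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*kB*T)."""
    return H / math.sqrt(2.0 * math.pi * mass_kg * KB * temp_k)

def phase_space_density(number_density: float, mass_kg: float, temp_k: float) -> float:
    """Dimensionless phase space density rho = n * lambda^3."""
    return number_density * thermal_de_broglie(mass_kg, temp_k) ** 3

# Example: an Rb-87 cloud at 20 uK with n = 1e17 m^-3
m_rb87 = 86.909 * 1.66054e-27
rho = phase_space_density(1e17, m_rb87, 20e-6)
```

Since ρ ∝ n·T^(−3/2), halving the temperature at fixed density multiplies ρ by 2^(3/2), which matches the optimization objective of maximizing ρ.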
In one embodiment, the method further comprises the following steps: selecting several groups of parameter vectors composed of different experimental parameters and computing several groups of mutation vectors through a preset mutation formula (in a standard differential evolution scheme, v = x_a + F·(x_b − x_c), with F the mutation factor); generating a random integer R in the range 1 to V (V being the number of parameters per vector) and, for each parameter index j, a random number r in the range 0 to 1. If r ≤ CR (the preset crossover probability) or j = R, the j-th parameter of the generation-1 crossover vector u_i is set to the j-th parameter of the generation-1 mutation vector v_i; otherwise it is set to the j-th initial experimental parameter. Each crossover vector u_i is input into the experimental apparatus to obtain the optimization index ρ(u_i); ρ(u_i) is compared with ρ(x_i), and if ρ(u_i) > ρ(x_i) the crossover vector is retained, otherwise the original vector is retained. When all N individuals have been processed, the generation-1 evolution data set is generated, where x_i^(1) represents the i-th generation-1 experimental parameter vector, ρ_i^(1) the i-th generation-1 phase space density, and N the number of groups of mutation vectors.
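One generation of the differential evolution described above (mutation, crossover with one guaranteed inherited mutant component, greedy selection) might be sketched as follows; in a real run `fitness_fn` would be the measured phase space density, but here it is any callable:

```python
import random

def de_step(population, fitness_fn, f=0.5, cr=0.1):
    """One generation of differential evolution: rand/1 mutation, binomial
    crossover with one guaranteed mutant component, and greedy selection."""
    n, dim = len(population), len(population[0])
    fitness = [fitness_fn(x) for x in population]
    next_gen = []
    for i, x in enumerate(population):
        # mutation: v = x_a + F * (x_b - x_c), with a, b, c distinct and != i
        a, b, c = random.sample([j for j in range(n) if j != i], 3)
        v = [population[a][k] + f * (population[b][k] - population[c][k])
             for k in range(dim)]
        # crossover: index r_idx is always taken from the mutant vector
        r_idx = random.randrange(dim)
        u = [v[k] if (random.random() <= cr or k == r_idx) else x[k]
             for k in range(dim)]
        # greedy selection on the objective (the phase space density)
        next_gen.append(u if fitness_fn(u) > fitness[i] else x)
    return next_gen
```

The greedy selection guarantees that no individual's objective value ever decreases from one generation to the next, which is what makes every evolved data pair at least as informative as its parent.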
In one embodiment, the method further comprises the following steps: shuffling the data in the evolution parameter set and, after proportional rounding, obtaining a training set and a test set.
In one embodiment, the method further comprises the following steps: when the amount of data in the training set is below a preset value, training the plurality of artificial neural networks with the same structure by K-fold cross validation to obtain a plurality of trained artificial neural networks; together, the networks with the same structure form a stochastic artificial neural network.
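A minimal index-level sketch of the K-fold split mentioned above (the helper name is illustrative; any training framework's own K-fold utility would serve equally well):

```python
def k_fold_indices(n_samples, k):
    """Split range(n_samples) into k folds; yields (train_idx, val_idx) pairs.
    Earlier folds absorb the remainder when n_samples is not divisible by k."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size
```

Each of the k networks can then be trained on a different train/validation split, so every data pair is validated exactly once even when the data set is small.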
In one embodiment, the method further comprises the following steps: generating new optimal experimental parameters from the optimal experimental parameters through the mutation operation of the differential evolution algorithm.
In one embodiment, the amount of data in the evolution parameter set is equal to the amount of data in the experiment parameter set.
An artificial neural network-based atomic cooling parameter online optimization device, comprising:
the optimization target determining module is used for inputting a preset experiment parameter set into the atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
the sample construction module is used for evolving the data pairs of the experimental parameters and the phase space density by utilizing a differential evolution algorithm to obtain the next generation of the experimental parameters and the phase space density pairs until an evolution parameter set consisting of a plurality of generations of the experimental parameters and the phase space density pairs is output;
the network training module is used for taking the evolution parameter set as a training set and a testing set of the artificial neural network, training a plurality of artificial neural networks with the same structure by using the training set, and obtaining a plurality of trained artificial neural networks;
the parameter optimization module is used for optimizing the trained artificial neural network to obtain a plurality of optimal experimental parameters; expanding the optimal experiment parameters, inputting the expanded optimal experiment parameters into an atomic cooling experiment device to obtain a plurality of optimal phase space densities, and selecting the experiment parameters corresponding to the maximum value of the optimal phase space densities as the atomic cooling parameter optimization results in the round.
And the termination judging module is used for judging whether the optimization termination condition is met or not, if so, terminating the optimization process, taking the optimization result of the round as a final optimization result, and if not, supplementing the parameter set of the round to the original parameter set, retraining the network and starting the next round of iteration.
According to the atomic cooling parameter online optimization method and device based on the artificial neural network, the phase space density is first used to jointly describe the atomic number density and the atomic temperature. The pairs of experiment parameters and phase space density are evolved with a differential evolution algorithm to obtain the next generation of pairs, until an evolution parameter set consisting of several generations of pairs is output; this improves the effectiveness of the data. The evolution parameter set is then used as the training and test sets of the artificial neural network, which reduces the network's iterative computation, and using several artificial neural networks of identical structure reduces data disturbance and thereby improves prediction accuracy. Finally, the predicted parameters are fed back to the experimental apparatus in real time, improving the real-time performance and accuracy of the optimization process.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for online optimization of atomic cooling parameters based on an artificial neural network according to an embodiment;
FIG. 2 is a block diagram of an atomic cooling parameter online optimization device based on an artificial neural network according to an embodiment;
FIG. 3 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided an atomic cooling parameter online optimization method based on an artificial neural network, including the following steps:
and 102, inputting a preset experiment parameter set into the atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter.
The phase space density is related to the atomic number density and atomic temperature.
And 104, evolving the data pairs of the experimental parameters and the phase space density by using a differential evolution algorithm to obtain next generation of experimental parameter and phase space density pairs until an evolution parameter set consisting of a plurality of generations of experimental parameter and phase space density pairs is output.
And 106, taking the evolution parameter set as a training set and a test set of the artificial neural network, and training a plurality of artificial neural networks with the same structure by using the training set to obtain a plurality of trained artificial neural networks.
In this step, a plurality of artificial neural networks may be selected, and the specific number may be set according to actual requirements, for example, 3 artificial neural networks may be set.
And 108, performing global optimization on the trained artificial neural network by using a genetic algorithm to obtain a plurality of optimal experimental parameters.
And 110, expanding the optimal experiment parameters, inputting the expanded optimal experiment parameters into the atomic cooling experiment device to obtain a plurality of optimal phase space densities, and selecting the experiment parameters corresponding to the maximum value of the optimal phase space densities as the optimization results of the atomic cooling parameters in the round.
And 112, judging whether the optimization termination condition is met, if so, terminating the optimization process, taking the optimization result of the round as a final optimization result, and if not, supplementing the parameter set of the round to the original parameter set, retraining the network and starting the next round of iteration.
In the atomic cooling parameter online optimization method based on the artificial neural network, the phase space density is first used to jointly describe the atomic number density and the atomic temperature. The pairs of experimental parameters and phase space density are evolved with a differential evolution algorithm to obtain next-generation pairs until an evolution parameter set consisting of several generations of pairs is output, which improves the effectiveness of the data. The evolution parameter set is then used as the training and test sets of the artificial neural network, reducing the network's iterative computation; using several artificial neural networks of identical structure reduces data disturbance and improves prediction accuracy. Finally, the predicted parameters are input into the experimental apparatus, improving the real-time performance and accuracy of the optimization process.
In one embodiment, the phase space density is

ρ = n · λ_dB³, with λ_dB = h / √(2π m k_B T),

where ρ represents the phase space density, λ_dB the thermal de Broglie wavelength, h the Planck constant, m the atomic mass, k_B the Boltzmann constant, T the atomic temperature, and n the atomic number density.
Specifically, the atomic cooling experimental apparatus is a complete experimental system for atomic cooling, generally including a laser system, a vacuum system, a timing control system, a magnetic shield (or compensation magnetic field), a magneto-optical trap, and other accessory devices and elements. To evaluate the atomic number density n and the atomic temperature T together, the phase space density ρ is used as the description. The atomic temperature is generally measured by the time-of-flight (TOF) method; to speed up the measurement and reduce the computation of each optimization cycle, the size of the atomic cloud is measured at only two TOF times t1 and t2, and the cloud temperature is then computed from the ballistic-expansion formula

T = m (σ2² − σ1²) / (k_B (t2² − t1²)),

where σ1 and σ2 are the cloud sizes measured at times t1 and t2. As the above shows, the index ρ jointly reflects the atomic number density and the atomic temperature: the larger ρ is, the higher the number density and the lower the temperature. The optimization objective is therefore to find the set of experimental parameters that maximizes ρ.
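A minimal numeric sketch of the two-point TOF temperature estimate, assuming the ballistic expansion model σ(t)² = σ₀² + (k_B T/m)·t² (the function name and example figures are illustrative):

```python
KB = 1.380649e-23  # Boltzmann constant (J/K)

def tof_temperature(mass_kg, sigma1_m, t1_s, sigma2_m, t2_s):
    """Cloud temperature from two time-of-flight size measurements,
    using sigma(t)^2 = sigma0^2 + (kB*T/m)*t^2; the initial size sigma0
    cancels out of the two-point difference."""
    return mass_kg * (sigma2_m ** 2 - sigma1_m ** 2) / (KB * (t2_s ** 2 - t1_s ** 2))
```

Because σ₀ cancels, only two cloud-size images are needed per cycle, which is what keeps the per-cycle measurement cost low.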
In one embodiment, several groups of parameter vectors composed of different experimental parameters are selected, and several groups of mutation vectors are computed through a preset mutation formula. A random integer R (from 1 to V) and random numbers r (from 0 to 1) are generated, and for each parameter index j it is judged whether the preset crossover condition r ≤ CR or j = R holds. If it holds, the j-th parameter of the generation-1 crossover vector is set to the j-th parameter of the generation-1 mutation vector; otherwise it is set to the j-th initial experimental parameter. When j reaches V, the generation-1 crossover parameter vector set is generated. Each crossover vector is input into the experimental apparatus to obtain the optimization index ρ; ρ of the crossover vector is compared with ρ of the original vector, and whichever vector yields the larger value is retained. When all N individuals have been processed, the generation-1 evolution data set is generated, where x_i^(1) represents the i-th generation-1 experimental parameter, ρ_i^(1) the i-th generation-1 phase space density, and N the number of groups of mutation vectors. Repeating the above steps over multiple generations forms the evolution parameter set.
In one embodiment, the data in the evolution parameter set are shuffled and, after proportional rounding, divided into a training set and a test set.
In one embodiment, when the amount of data in the training set is below a preset value, the several artificial neural networks with the same structure are trained by K-fold cross validation to obtain several trained artificial neural networks; together, the networks with the same structure form a stochastic artificial neural network.
Specifically, a stochastic artificial neural network (SANN) is formed from several artificial neural networks (ANNs), which can eliminate the random disturbance caused by weight initialization. The network structures of the ANNs must be consistent, and to improve optimization efficiency, each ANN has at most 5 hidden layers. Each ANN initializes its weights separately, but all other learning parameters must be kept consistent. The hyper-parameters of the neural networks, such as the total number of neurons, the number of layers, the learning rate, the maximum number of training epochs, the evaluation metric, and the activation function, need to be adjusted flexibly according to actual performance requirements.
In one embodiment, the several trained ANNs are globally optimized by a genetic algorithm to obtain the corresponding optimal experimental parameters; one ANN is taken as an example. First, the variation range of each optimized parameter is determined, and each random variable is binary-coded according to its range. A population of N individuals is randomly initialized; for efficient implementation of the genetic algorithm, N must be even. The initial population is fed into the ANN, and the corresponding fitness of each individual is computed. Individuals with higher fitness are selected by roulette-wheel sampling to form a new population. With a crossover probability of 0.6 (adjustable flexibly), a crossover operation is performed on the individuals of the population to form the crossover population; with a mutation probability of 0.001 (adjustable flexibly), a mutation operation is performed on each individual to form the mutation population, which is the new generation evolved by the genetic algorithm. The individuals of the evolved population are fed in turn into the trained ANN model, their fitness is computed, and the standard deviation of the current generation's population fitness is obtained. The iteration is repeated to evolve further generations; when the fitness standard deviation over roughly the last 5 generations no longer changes significantly (e.g. by less than 0.001), the evolution terminates, and the individual with the maximum fitness among all population individuals is selected as the final optimization result.
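The genetic-algorithm search over a trained ANN can be sketched as a generic binary-coded GA; the encoding width, population size, and stopping tolerance below are illustrative defaults, and `fitness_fn` stands in for the trained network's prediction (decoded variables are normalized to [0, 1)):

```python
import random

def ga_optimize(fitness_fn, dim, pop_size=20, p_cross=0.6, p_mut=0.001,
                bits=8, max_gens=100, tol=1e-3):
    """Binary-coded genetic algorithm sketch: roulette-wheel selection,
    one-point crossover, bit-flip mutation; stops when the population fitness
    standard deviation changes by less than `tol` between generations."""
    def decode(chrom):
        # each block of `bits` bits encodes one variable in [0, 1)
        return [int(''.join(map(str, chrom[i * bits:(i + 1) * bits])), 2) / 2 ** bits
                for i in range(dim)]

    pop = [[random.randint(0, 1) for _ in range(dim * bits)]
           for _ in range(pop_size)]
    prev_std = None
    for _ in range(max_gens):
        fit = [fitness_fn(decode(c)) for c in pop]
        # roulette-wheel (fitness-proportional) selection
        pop = [random.choices(pop, weights=[f + 1e-12 for f in fit])[0][:]
               for _ in range(pop_size)]
        # one-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            if random.random() < p_cross:
                cut = random.randrange(1, dim * bits)
                pop[i][cut:], pop[i + 1][cut:] = pop[i + 1][cut:], pop[i][cut:]
        # bit-flip mutation
        for chrom in pop:
            for j in range(len(chrom)):
                if random.random() < p_mut:
                    chrom[j] ^= 1
        mean = sum(fit) / pop_size
        std = (sum((f - mean) ** 2 for f in fit) / pop_size) ** 0.5
        if prev_std is not None and abs(std - prev_std) < tol:
            break
        prev_std = std
    fit = [fitness_fn(decode(c)) for c in pop]
    return decode(pop[fit.index(max(fit))])
```

Because the fitness here is a network prediction rather than a physical experiment, each GA generation is cheap, which is what makes global search over the surrogate model practical.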
In one embodiment, according to the optimal experimental parameters, new optimal experimental parameters are generated through the mutation operation in the differential evolution algorithm.
In one embodiment, the amount of data in the evolution parameter set is equal to the amount of data in the experiment parameter set.
The technical solution of the present invention is further illustrated by a specific example.
An atom cooling parameter online optimization method based on an artificial neural network comprises the following specific steps:
step 1: the variation range of each group of parameters is determined according to prior experience when the experimental parameters are initialized randomly, the variation range of each group of parameters is uniformly sampled to obtain the initial value of the experimental parameters, in order to utilize the subsequent differential evolution algorithm,the minimum value cannot be lower than 3, and the maximum value can affect the initialization efficiency and is generally 10-50.
And 2, step: the interval of each experimental cycle is 10 ms-1 s (the specific time is determined according to the actual performance of the experimental device used), so as to eliminate the mutual influence or coupling between each group of experimental parameters.
And 3, step 3: mutation probability in differential evolution algorithmAnd cross probability0.5 and 0.1 can be selected, the method can also be flexibly adjusted according to the actual effect, and then the parameter set is subjected to differential evolution according to the following steps:
step 3.1 in parameter setRandomly selecting three different parameter vectorsWherein a, b and c are different from each other.
Step 3.4 to ensure that at least one mutated gene is inherited by the next generation, a mutation of between 1 and 1 is first generated(containing 1 and) Random number ofAnd make an order。
Step 3.5 generating a random number between 0 and 1And judging whether the conditions are met:orIf the condition is satisfied, then order(wherein,represents the 1 st generationThe first of the cross parameter vectorThe number of the parameters is one,represents the 1 st generationThe first of the variation vectorsOne parameter), otherwise, order,。
Step 3.6And repeating the step 3.5 untilThus, a 1 st generation cross-parameter vector set is generated,。
Step 3.8 treatingBringing into an experimental device to obtain an optimized indexComparison ofAndsize of (1), ifThen, thenAnd if not, the step (B),。
And 4, step 4: the termination criteria can be determined based on the total number of ANN neurons or on expected experimental criteria (atomic density and atomic temperature), for example, if the total number of neurons per ANN isThen whenThe iteration can be terminated.
And 5: data setBefore entering the ANN for training, the original sequence is disturbed, and after rounding in proportion, the sequence is divided into a training set and a verification set.
And 6: matters to be noted in ANN training:
6.1 If the data size is not large enough, a K-fold cross validation mode can be adopted by a training mechanism.
6.2 A random neural network (SANN) is formed by a plurality of ANNs, so that random disturbance caused by weight initialization can be eliminated, the network structures of all the ANNs need to be consistent, and in order to improve optimization efficiency, the hidden layer of each ANN is recommended not to exceed 5 layers.
6.3 Each ANN should initialize the weight value separately, but it should be guaranteed that other learning parameters are consistent.
6.4 Hyper-parameters of neural networks, such as: the total neuron number, the number of network layers, the learning rate, the maximum training times, the evaluation index and the activation function need to be flexibly adjusted according to the actual performance requirement.
And 7: global optimization is carried out on a plurality of trained ANNs through a genetic algorithm, a plurality of corresponding optimal experimental parameters are obtained respectively, and one of the ANNs is taken as an example for explanation:
7.2 according to the variation range, carrying out binary coding on each random variable;
7.3 random initialization populationWhere N is the population number, for efficient implementation of genetic algorithms,an even number is required;
7.6 setting the cross probability 0.6 (flexible adjustment), for the populationPerforming cross operation on each individual to form a cross population:;
7.7 set mutation probability 0.001 (flexible adjustment), for populationPerforming mutation operation on each individual to form a mutation population:the variant population is a new generation population evolved by genetic algorithm, and more generally, can be expressed asOr;
7.8 evolving populationsThe individuals are sequentially brought into the trained ANN model, and the corresponding fitness is solved:and then solving the standard deviation of the population fitness of the current generation;
7.9 Repeat 7.5-7.8 to evolve more generations of populationWhereinIs the nth generation of genetic algorithm;
7.10 Standard deviation of fitness of approximately 5 generationsNo longer changing significantly (e.g., less than 0.001), the evolution is terminated;
7.11 selecting the individual with the maximum fitness from all population individuals as the final optimization result of the ANN.
And 8: the generation of the evolution parameter set comprises the following specific steps:
Step 9: Termination judgment proceeds as follows:
9.1 Input the evolved parameter set into the atomic cooling experimental apparatus to obtain several optimal phase space densities;
9.2 Select the experimental parameters corresponding to the largest of these values as the optimization result of the current round of atomic cooling parameters;
9.3 Judge whether the optimization termination condition is met; if so, terminate the optimization process and take the result of the current round as the final result; if not, add the parameter set of the current round to the original parameter set, retrain the network, and start the next round of iteration.
The termination condition generally consists of the following three items; if any one of them is satisfied, the iteration terminates:
(1) The standard deviation of the optimization results of the last 5 rounds falls below the expected minimum standard deviation;
(2) The maximum number of rounds is reached;
(3) The optimization results of 10 consecutive rounds fail to improve on the previous best value.
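The three termination items above can be checked with a small helper. This is an illustrative sketch; the specific thresholds (target standard deviation, maximum rounds, stall window) are assumptions, not values fixed by the patent.

```python
import statistics

def should_terminate(round_results, round_idx, max_rounds=50,
                     target_std=1e-3, stall_rounds=10, window=5):
    """round_results: best phase space density of each completed round.
    Returns True if any of the three stopping rules fires."""
    # (1) the last `window` round results agree to within target_std
    if len(round_results) >= window and \
       statistics.pstdev(round_results[-window:]) < target_std:
        return True
    # (2) the maximum number of rounds is reached
    if round_idx >= max_rounds:
        return True
    # (3) `stall_rounds` consecutive rounds without improving the earlier best
    if len(round_results) > stall_rounds:
        best_before = max(round_results[:-stall_rounds])
        if max(round_results[-stall_rounds:]) <= best_before:
            return True
    return False
```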
It should be understood that, although the steps in the flowchart of fig. 1 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not bound to a strict ordering and may be performed in other orders. Moreover, at least some of the steps in fig. 1 may comprise multiple sub-steps or stages that are not necessarily performed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
In one embodiment, as shown in fig. 2, there is provided an atomic cooling parameter online optimization device based on an artificial neural network, including: an optimization goal determination module 202, a sample construction module 204, a network training module 206, a parameter optimization module 208, and a termination discrimination module 210, wherein:
an optimization target determining module 202, configured to input a preset experiment parameter set into an atomic cooling experiment apparatus, so as to obtain a phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
a sample construction module 204, configured to evolve the data pairs of experimental parameters and phase space density using a differential evolution algorithm, obtaining the next generation of experimental parameter and phase space density pairs, until an evolution parameter set composed of multiple generations of such pairs is output;
a network training module 206, configured to use the evolution parameter set as a training set and a test set of an artificial neural network, and train a plurality of artificial neural networks with the same structure using the training set to obtain a plurality of trained artificial neural networks;
the parameter optimization module 208, configured to obtain a plurality of optimal experimental parameters by optimizing the trained artificial neural networks; to expand the optimal experimental parameters and input the expanded parameters into the atomic cooling experimental apparatus to obtain a plurality of optimal phase space densities; and to select the experimental parameters corresponding to the maximum of these densities as the optimization result of the current round of atomic cooling parameters.
And a termination judging module 210, configured to judge whether an optimization termination condition is met, if yes, terminate the optimization process, and use the optimization result of the current round as a final optimization result, and if not, supplement the parameter set of the current round to the original parameter set, retrain the network, and start a next round of iteration.
In one embodiment, the phase space density is:

ρ = n · λ_dB³, with λ_dB = h / √(2π · m · k_B · T)

where ρ denotes the phase space density, λ_dB the thermal de Broglie wavelength, h Planck's constant, m the atomic mass, k_B Boltzmann's constant, T the atomic temperature, and n the atomic number density.
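The definition above can be evaluated numerically as follows. This is an illustrative sketch: the rubidium-87 mass and the example density and temperature are assumptions for demonstration, not values from the patent.

```python
import math

H = 6.62607015e-34       # Planck constant, J*s (exact SI value)
K_B = 1.380649e-23       # Boltzmann constant, J/K (exact SI value)

def phase_space_density(n, T, m):
    """rho = n * lambda_dB**3, with the thermal de Broglie wavelength
    lambda_dB = h / sqrt(2*pi*m*k_B*T)."""
    lambda_db = H / math.sqrt(2.0 * math.pi * m * K_B * T)
    return n * lambda_db ** 3

# Illustrative numbers for a laser-cooled Rb-87 cloud (mass is an assumption)
M_RB87 = 1.443e-25                                     # kg
rho = phase_space_density(n=1e16, T=100e-6, m=M_RB87)  # n in m^-3, T in K
```

Lowering the temperature or raising the density increases ρ; quantum degeneracy is approached as ρ nears order unity.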
In one embodiment, the sample construction module 204 is further configured to select several parameter groups composed of different experimental parameters and to compute several groups of mutation parameters through a preset mutation formula;
to generate a random integer R and a random number r, where R ranges over 1 to V (V being the number of parameters) and r over 0 to 1;
if r ≤ CR or j = R (CR being the crossover probability), to set u(1,i,j) = v(1,i,j), where u(1,i,j) denotes the jth parameter of the ith crossover parameter vector of the 1st generation and v(1,i,j) the jth parameter of the ith mutation parameter of the 1st generation; otherwise, to set u(1,i,j) = x(0,i,j), where x(0,i,j) denotes the jth parameter of the ith initial experimental parameter;
for i = 1, …, N, to input the crossover parameter vector u(1,i) into the experimental apparatus to obtain the optimization index ρ(u(1,i)), and to compare ρ(u(1,i)) with ρ(x(0,i)); if ρ(u(1,i)) > ρ(x(0,i)), to set x(1,i) = u(1,i), otherwise x(1,i) = x(0,i);
when all N comparisons are complete, the 1st generation evolution data set {(x(1,i), ρ(1,i)) | i = 1, …, N} is generated, where x(1,i) denotes the ith experimental parameter of the 1st generation, ρ(1,i) the ith phase space density of the 1st generation, and N the number of groups of mutation parameters.
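The mutation, crossover, and greedy selection just described correspond to one generation of classical differential evolution. The sketch below is illustrative: `measure` stands in for running the cooling experiment and returning the phase space density, and the scale factor F and crossover rate CR are assumed values (bounds clipping is omitted).

```python
import numpy as np

def de_step(pop, psd, measure, F=0.5, CR=0.6, rng=None):
    """One DE generation over pop (N x V parameter array) with fitness psd."""
    rng = rng or np.random.default_rng(0)
    N, V = pop.shape
    new_pop, new_psd = pop.copy(), psd.copy()
    for i in range(N):
        # mutation: v = x_a + F * (x_b - x_c), with a, b, c distinct from i
        a, b, c = rng.choice([k for k in range(N) if k != i], 3, replace=False)
        v = pop[a] + F * (pop[b] - pop[c])
        # binomial crossover with one forced index R
        R = rng.integers(V)
        mask = rng.random(V) <= CR
        mask[R] = True
        u = np.where(mask, v, pop[i])
        # greedy selection against the current individual
        rho_u = measure(u)
        if rho_u > psd[i]:
            new_pop[i], new_psd[i] = u, rho_u
    return new_pop, new_psd
```

Because selection is greedy, the recorded phase space density of each individual never decreases between generations.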
In one embodiment, the network training module 206 is further configured to shuffle the data in the evolution parameter set and split it by a preset ratio (with rounding) into a training set and a test set.
In one embodiment, the network training module 206 is further configured to train the multiple artificial neural networks with the same structure in a K-fold cross validation manner to obtain multiple trained artificial neural networks when the data in the training set is smaller than a preset value; wherein the plurality of artificial neural networks with the same structure form a random neural network.
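K-fold cross validation on a small evolved data set can be realized with a simple index generator. This sketch assumes shuffled, near-equal folds; the fold count and seed are illustrative.

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for K-fold cross validation,
    useful when the evolved data set is too small for a fixed split."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]       # k near-equal folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val
```

Each of the k identically structured ANNs can then be trained on a different train/validation split.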
In one embodiment, the parameter optimization module 208 is further configured to generate new optimal experimental parameters from the existing optimal experimental parameters through the mutation operation of the differential evolution algorithm.
In one embodiment, the termination judging module 210 is further configured to judge whether an optimization termination condition is satisfied, and obtain a final optimization result.
In one embodiment, the amount of data in the evolution parameter set equals the amount of data in the experimental parameter set.
For specific limitations of the artificial neural network-based atomic cooling parameter online optimization device, reference may be made to the limitations of the artificial neural network-based atomic cooling parameter online optimization method above, which are not repeated here. The modules in the device can be realized wholly or partially in software, hardware, or a combination of the two. The modules can be embedded in hardware form in, or be independent of, a processor of the computer device, or be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an artificial neural network-based atomic cooling parameter online optimization method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 3 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method in the above embodiments when the processor executes the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method in the above-mentioned embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (9)
1. An atom cooling parameter online optimization method based on an artificial neural network is characterized by comprising the following steps:
inputting a preset experiment parameter set into an atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
evolving the data pairs of experimental parameters and phase space density by a differential evolution algorithm to obtain the next generation of experimental parameter and phase space density pairs, until an evolution parameter set composed of multiple generations of such pairs is output;
taking the evolution parameter set as a training set and a testing set of the artificial neural network, and training a plurality of artificial neural networks with the same structure by using the training set to obtain a plurality of trained artificial neural networks;
global optimization is carried out on the trained artificial neural network through a genetic algorithm to obtain a plurality of optimal experimental parameters;
expanding the optimal experiment parameters, inputting the expanded optimal experiment parameters into an atomic cooling experiment device to obtain a plurality of optimal phase space densities, and selecting the experiment parameters corresponding to the maximum value of the optimal phase space densities as the optimization results of the atomic cooling parameters in the round;
and judging whether the optimization termination condition is met, if so, terminating the optimization process, taking the optimization result of the round as a final optimization result, if not, supplementing the parameter set of the round to the original parameter set, retraining the network, and starting the next round of iteration.
2. The method of claim 1, wherein the phase space density is:

ρ = n · λ_dB³, with λ_dB = h / √(2π · m · k_B · T)

where ρ denotes the phase space density, λ_dB the de Broglie wavelength, h Planck's constant, m the atomic mass, k_B Boltzmann's constant, T the atomic temperature, and n the atomic number density.
3. The method of claim 1, wherein evolving the experimental parameter and the phase space density pair using a differential evolution algorithm to obtain a next generation of the experimental parameter and the phase space density pair comprises:
selecting several parameter groups composed of different experimental parameters, and computing several groups of mutation parameters through a preset mutation formula;
generating a random integer R and a random number r, where R ranges over 1 to V (V being the number of parameters) and r over 0 to 1;
if r ≤ CR or j = R (CR being the crossover probability), setting u(1,i,j) = v(1,i,j), where u(1,i,j) denotes the jth parameter of the ith crossover parameter vector of the 1st generation and v(1,i,j) the jth parameter of the ith mutation parameter of the 1st generation; otherwise, setting u(1,i,j) = x(0,i,j), where x(0,i,j) denotes the jth parameter of the ith initial experimental parameter;
when all N crossover parameter vectors have been formed, the 1st generation crossover parameter vector set {u(1,1), …, u(1,N)} is generated;
for i = 1, …, N, inputting the crossover parameter vector u(1,i) into the experimental apparatus to obtain the optimization index ρ(u(1,i)), and comparing ρ(u(1,i)) with ρ(x(0,i)); if ρ(u(1,i)) > ρ(x(0,i)), setting x(1,i) = u(1,i), otherwise x(1,i) = x(0,i);
when all N comparisons are complete, the 1st generation evolution data set {(x(1,i), ρ(1,i)) | i = 1, …, N} is generated, where x(1,i) denotes the ith experimental parameter of the 1st generation, ρ(1,i) the ith phase space density of the 1st generation, and N the number of groups of mutation parameters;
repeating the above operations and continuing the evolution until the number of generations meets the termination condition, to obtain the evolution parameter set.
4. The method of claim 1, wherein using the set of evolutionary parameters as a training set and a testing set of artificial neural networks comprises:
shuffling the data in the evolution parameter set and splitting it by a preset ratio (with rounding) into a training set and a test set.
5. The method of claim 1, wherein training a plurality of structurally identical artificial neural networks using the training set to obtain a plurality of trained artificial neural networks comprises:
when the data in the training set is smaller than a preset value, training the plurality of artificial neural networks with the same structure by adopting a K-fold cross validation mode to obtain a plurality of trained artificial neural networks; wherein the plurality of artificial neural networks with the same structure form a random neural network.
6. The method of claim 1, wherein the trained artificial neural network is globally optimized by a genetic algorithm to obtain a plurality of optimal experimental parameters, comprising:
carrying out binary coding on each random variable according to its variation range;
randomly initializing the population P0 = {p1, …, pN}, where N is the population size, which must be even for the genetic algorithm to run efficiently;
inputting the initial population into the artificial neural network and solving the corresponding fitness values;
setting a crossover probability and performing the crossover operation on the individuals of the population to form a crossover population;
setting a mutation probability and performing the mutation operation on the individuals of the population to form a mutation population; this mutation population is the new generation evolved by the genetic algorithm and may more generally be denoted Pn;
bringing the individuals of the evolved population into the trained artificial neural network in turn, solving the corresponding fitness of each individual, and then solving the standard deviation of the population fitness of the current generation;
iterating repeatedly to evolve further generations Pn, where n is the generation index of the genetic algorithm;
stopping the evolution when the standard deviation of the fitness no longer changes significantly over multiple consecutive generations;
and selecting the individual with the maximum fitness from all population individuals as a final optimization result of the artificial neural network.
7. The method according to any one of claims 1 to 5, wherein the expanding of the optimal experimental parameters comprises:
generating new optimal experimental parameters from the existing optimal experimental parameters through the mutation operation of the differential evolution algorithm.
8. The method of any one of claims 1 to 5, wherein the amount of data in the evolution parameter set equals the amount of data in the experimental parameter set.
9. An atomic cooling parameter online optimization device based on an artificial neural network is characterized by comprising:
the optimization target determining module is used for inputting a preset experiment parameter set into the atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
the sample construction module is used for evolving the experimental parameter and phase space density data pairs by a differential evolution algorithm to obtain the next generation of experimental parameter and phase space density pairs, until an evolution parameter set composed of multiple generations of such pairs is output;
the network training module is used for taking the evolution parameter set as a training set and a testing set of the artificial neural network, training a plurality of artificial neural networks with the same structure by using the training set, and obtaining a plurality of trained artificial neural networks;
the parameter optimization module is used for optimizing the trained artificial neural networks to obtain a plurality of optimal experimental parameters; expanding the optimal experimental parameters and inputting the expanded parameters into the atomic cooling experimental apparatus to obtain a plurality of optimal phase space densities; and selecting the experimental parameters corresponding to the maximum of these densities as the optimization result of the current round of atomic cooling parameters;
and the termination judging module is used for judging whether the optimization termination condition is met or not, if so, terminating the optimization process, taking the optimization result of the round as a final optimization result, and if not, supplementing the parameter set of the round to the original parameter set, retraining the network and starting the next round of iteration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310089181.9A CN115796269A (en) | 2023-02-09 | 2023-02-09 | Atom cooling parameter online optimization method and device based on artificial neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310089181.9A CN115796269A (en) | 2023-02-09 | 2023-02-09 | Atom cooling parameter online optimization method and device based on artificial neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115796269A true CN115796269A (en) | 2023-03-14 |
Family
ID=85430686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310089181.9A Pending CN115796269A (en) | 2023-02-09 | 2023-02-09 | Atom cooling parameter online optimization method and device based on artificial neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115796269A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200161446A1 (en) * | 2018-11-20 | 2020-05-21 | ColdQuanta, Inc. | Quantum tunneling matter-wave transistor system |
CN113268925A (en) * | 2021-05-18 | 2021-08-17 | 南京邮电大学 | Dynamic soft measurement method based on differential evolution algorithm time delay estimation |
CN113449930A (en) * | 2021-07-27 | 2021-09-28 | 威海长和光导科技有限公司 | Optical fiber preform preparation quality prediction method based on BP neural network |
CN114861881A (en) * | 2022-05-06 | 2022-08-05 | 那一麟 | Method for optimizing super-cold atom evaporative cooling parameters by applying machine learning |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200161446A1 (en) * | 2018-11-20 | 2020-05-21 | ColdQuanta, Inc. | Quantum tunneling matter-wave transistor system |
CN113268925A (en) * | 2021-05-18 | 2021-08-17 | 南京邮电大学 | Dynamic soft measurement method based on differential evolution algorithm time delay estimation |
CN113449930A (en) * | 2021-07-27 | 2021-09-28 | 威海长和光导科技有限公司 | Optical fiber preform preparation quality prediction method based on BP neural network |
CN114861881A (en) * | 2022-05-06 | 2022-08-05 | 那一麟 | Method for optimizing super-cold atom evaporative cooling parameters by applying machine learning |
Non-Patent Citations (4)
Title |
---|
A.D. TRANTER ET AL.: "Multiparameter optimisation of a magneto-optical trap using deep learning" * |
AJ BARKER ET AL.: "Applying machine learning optimization methods to the production of a quantum gas" * |
Zhu Ruogu et al.: "Laser Application Technology" (《激光应用技术》), National Defense Industry Press *
Pan Jiansong: "Quantum simulation research based on ultracold atoms" (基于超冷原子的量子模拟研究) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10885435B2 (en) | System and method for training neural networks | |
Cremer et al. | From optimization-based machine learning to interpretable security rules for operation | |
Wang et al. | A grey prediction-based evolutionary algorithm for dynamic multiobjective optimization | |
WO2020028036A1 (en) | Robust von neumann ensembles for deep learning | |
Kang et al. | Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network | |
CN108876038B (en) | Big data, artificial intelligence and super calculation synergetic material performance prediction method | |
CN109767034B (en) | Relay protection constant value optimization method and device, computer equipment and storage medium | |
Rojo | Machine Learning tools for global PDF fits | |
Shin et al. | Physics-informed variational inference for uncertainty quantification of stochastic differential equations | |
CN112862004B (en) | Power grid engineering cost control index prediction method based on variational Bayesian deep learning | |
US20230059708A1 (en) | Generation of Optimized Hyperparameter Values for Application to Machine Learning Tasks | |
Shemyakin et al. | Online identification of large-scale chaotic system | |
Li et al. | Learning slow and fast system dynamics via automatic separation of time scales | |
CN115796269A (en) | Atom cooling parameter online optimization method and device based on artificial neural network | |
Job et al. | Systematic comparison of deep belief network training using quantum annealing vs. classical techniques | |
Yassin et al. | Comparison between NARX parameter estimation methods with Binary Particle Swarm Optimization-based structure selection method | |
Wu et al. | Improved saddle point prediction in stochastic two-player zero-sum games with a deep learning approach | |
CN113641907B (en) | Super-parameter self-adaptive depth recommendation method and device based on evolutionary algorithm | |
Peck et al. | Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring | |
Baldi et al. | The ebb and flow of deep learning: a theory of local learning | |
Tyas et al. | Implementation of Particle Swarm Optimization (PSO) to improve neural network performance in univariate time series prediction | |
Hauser et al. | Probabilistic forecasting of symbol sequences with deep neural networks | |
Cai et al. | Surrogate-assisted operator-repeated evolutionary algorithm for computationally expensive multi-objective problems | |
Zhang et al. | resnetCox: a residual neural network method for high-throughput survival Analysis | |
Teng et al. | A Simulated Annealing BP Algorithm for Adaptive Temperature Setting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20230314 |