CN115796269A - Atom cooling parameter online optimization method and device based on artificial neural network

Publication number: CN115796269A
Application number: CN202310089181.9A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventor
朱凌晓
梁昌文
颜树华
杨俊�
李期学
刘纪勋
王国超
Current Assignee: National University of Defense Technology
Original Assignee: National University of Defense Technology
Application filed by National University of Defense Technology

Abstract

The application relates to a method and device for online optimization of atom cooling parameters based on an artificial neural network. The method comprises the following steps: an experimental parameter set is input into an atomic cooling experimental apparatus to obtain the phase space density corresponding to each set of experimental parameters; the pairs of experimental parameters and phase space densities are evolved with a differential evolution algorithm to obtain the next generation of (parameter, phase space density) pairs, until an evolution parameter set is output; the evolution parameter set is used as the training set and test set for several artificial neural networks of identical structure, which are trained and then optimized to yield several optimal experimental parameter vectors; these optima are expanded and input into the apparatus to obtain several optimal phase space densities, and the experimental parameters corresponding to the maximum of these densities are selected as the optimization result of the current round. The method enables online optimization of atom cooling parameters.

Description

Atomic cooling parameter online optimization method and device based on artificial neural network
Technical Field
The application relates to the technical field of atomic cooling, in particular to an atomic cooling parameter online optimization method and device based on an artificial neural network.
Background
Atomic cooling is an important supporting technology in fields such as quantum precision measurement, quantum information processing, and Bose-Einstein condensate preparation. The cooling process is influenced by many parameters. For example, in polarization gradient cooling (PGC) of Rb atoms, the experimental parameters to be optimized mainly include the gradient magnetic field, the compensation magnetic field, the rubidium source current, the cooling light detuning, the cooling light power, the loading time of the magneto-optical trap (MOT), the polarization gradient cooling time, the laser detuning during polarization gradient cooling, and the rate of change of the laser power during polarization gradient cooling. The whole atomic cooling process is a very complex and highly nonlinear process.
At present, atomic cooling parameters are mostly optimized by manual adjustment or by parameter-by-parameter scanning. Manual adjustment relies mainly on personal intuition and places high demands on the operator's experience, so the optimization is somewhat blind: an initial parameter combination is usually given from experience, and the optimal value is then searched parameter by parameter. Because the atomic cooling experiment is a highly nonlinear process, this optimization mode also tends to fall into local optima. Parameter-by-parameter scanning can generally find the optimal parameters, but their accuracy depends on the scanning step of each parameter, and the amount of computation grows exponentially with the number of optimized parameters, so the computation is huge and the optimization efficiency is low. Some publications propose machine-learning-based atomic cooling parameter optimization schemes, but most are based on Gaussian process models and evolutionary algorithms and do not exploit the advantages of deep learning on large data sets. Others propose deep-learning-based schemes, but most rely on historical data and are executed offline, so the trained models are difficult to update in real time; some deep learning schemes also cannot effectively avoid local optima, so the optimization effect is limited.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an intelligent algorithm architecture and apparatus capable of efficiently optimizing atomic cooling parameters.
An atomic cooling parameter online optimization method based on an artificial neural network, the method comprising:
inputting a preset experiment parameter set into an atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
evolving the pairs of experimental parameters and phase space densities by using a differential evolution algorithm to obtain the next generation of (experimental parameter, phase space density) data pairs, until an evolution parameter set consisting of several generations of such pairs is output;
taking the evolution parameter set as a training set and a testing set of the artificial neural network, and training a plurality of artificial neural networks with the same structure by using the training set to obtain a plurality of trained artificial neural networks;
carrying out global optimization on the trained artificial neural network by utilizing a genetic algorithm to obtain a plurality of optimal experimental parameters;
expanding the optimal experimental parameters and inputting the expanded parameters into the atomic cooling experimental apparatus to obtain several optimal phase space densities; selecting the experimental parameters corresponding to the maximum of these densities as the optimization result of the atomic cooling parameters of the current round; judging whether the optimization termination condition is met: if so, terminating the optimization process and taking the current atomic cooling parameters as the final optimization result; if not, supplementing all parameter sets of this round to the original parameter set, retraining the neural networks, and performing the next round of iteration until the termination condition is met.
In one embodiment, the phase space density is

$\rho = n \lambda_{dB}^3$, with $\lambda_{dB} = h / \sqrt{2\pi m k_B T}$

where $\rho$ represents the phase space density, $\lambda_{dB}$ represents the thermal de Broglie wavelength, $h$ is the Planck constant, $m$ is the atomic mass, $k_B$ is the Boltzmann constant, $T$ is the atomic temperature, and $n$ represents the atomic number density.
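As an illustrative aid that is not part of the patent text, the phase space density defined above can be computed as follows; the constants are standard CODATA values, and the Rb-87 mass and cloud values in the usage note are assumptions:

```python
import math

# Physical constants in SI units (CODATA values).
H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K

def de_broglie_wavelength(mass_kg: float, temperature_k: float) -> float:
    """Thermal de Broglie wavelength: lambda_dB = h / sqrt(2*pi*m*kB*T)."""
    return H / math.sqrt(2.0 * math.pi * mass_kg * KB * temperature_k)

def phase_space_density(n_per_m3: float, mass_kg: float, temperature_k: float) -> float:
    """Phase space density: rho = n * lambda_dB**3."""
    return n_per_m3 * de_broglie_wavelength(mass_kg, temperature_k) ** 3
```

For an assumed Rb-87 cloud (mass about 1.443e-25 kg) with $n \approx 10^{16}\,\mathrm{m^{-3}}$ at $10\,\mu\mathrm{K}$, this gives a phase space density on the order of $10^{-6}$, a plausible value for a laser-cooled sample.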
In one embodiment, the method further comprises the following steps: selecting a plurality of parameter groups composed of different experimental parameters, and calculating a plurality of mutation vectors through a preset mutation formula; generating a random integer $R$ (ranging from 1 to $V$, the number of parameters per vector) and a random number $r$ (ranging from 0 to 1), and setting $u^1_{i,R} = v^1_{i,R}$; judging whether $r \le CR$ (the preset crossover probability) or $j = R$ holds: if so, setting $u^1_{i,j} = v^1_{i,j}$, where $u^1_{i,j}$ denotes the $j$-th parameter of the $i$-th crossover parameter vector of generation 1 and $v^1_{i,j}$ denotes the $j$-th parameter of the $i$-th mutation vector of generation 1; if not, setting $u^1_{i,j} = x^0_{i,j}$, where $x^0_{i,j}$ denotes the $j$-th parameter of the $i$-th initial experimental parameter vector. When $j = V$, the generation-1 crossover parameter vector set $U^1 = \{u^1_1, \dots, u^1_N\}$ is generated. Each crossover parameter vector $u^1_i$ is input into the experimental apparatus to obtain the optimization index $\rho(u^1_i)$, and $\rho(u^1_i)$ is compared with $\rho(x^0_i)$: if $\rho(u^1_i) > \rho(x^0_i)$, then $x^1_i = u^1_i$; otherwise $x^1_i = x^0_i$. When $i = N$, the generation-1 evolution data set $\{(x^1_i, \rho^1_i)\}$, $i = 1, \dots, N$, is generated, where $x^1_i$ denotes the $i$-th experimental parameter vector of generation 1, $\rho^1_i$ denotes the $i$-th phase space density of generation 1, and $N$ denotes the number of mutation vectors.
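The mutation, crossover, and greedy selection described in this embodiment follow the standard DE/rand/1/bin scheme. A minimal sketch, with a stand-in fitness function in place of the measured phase space density and 0-based component indices, could look like:

```python
import random

def de_step(population, fitness_fn, F=0.5, CR=0.1, rng=random):
    """One DE/rand/1/bin generation: mutation v = x_a + F*(x_b - x_c),
    binomial crossover that keeps at least one mutated component (index R),
    then greedy selection. Needs at least 4 individuals."""
    N, V = len(population), len(population[0])
    next_gen = []
    for i in range(N):
        # mutation: pick three distinct vectors, all different from x_i
        a, b, c = rng.sample([k for k in range(N) if k != i], 3)
        v = [population[a][j] + F * (population[b][j] - population[c][j])
             for j in range(V)]
        # binomial crossover; R guarantees one mutated gene is inherited
        R = rng.randrange(V)
        u = [v[j] if (rng.random() <= CR or j == R) else population[i][j]
             for j in range(V)]
        # greedy selection on the optimization index
        # (fitness_fn stands in for the measured phase space density)
        next_gen.append(u if fitness_fn(u) > fitness_fn(population[i])
                        else population[i])
    return next_gen
```

Because selection is greedy, the best index in the population never decreases from one generation to the next.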
In one embodiment, the method further comprises the following steps: shuffling the data in the evolution parameter set and, after proportional rounding, dividing it into a training set and a test set.
In one embodiment, the method further comprises the following steps: when the data in the training set is smaller than a preset value, training the plurality of artificial neural networks with the same structure by adopting a K-fold cross validation mode to obtain a plurality of trained artificial neural networks; wherein the plurality of artificial neural networks with the same structure form a random neural network.
In one embodiment, the method further comprises the following steps: generating new optimal experimental parameters from the optimal experimental parameters through the mutation operation of the differential evolution algorithm.
In one embodiment, the amount of data in the evolution parameter set is equal to the amount of data in the experimental parameter set.
An artificial neural network-based atomic cooling parameter online optimization device, comprising:
the optimization target determining module is used for inputting a preset experiment parameter set into the atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
the sample construction module is used for evolving the data pairs of the experimental parameters and the phase space density by utilizing a differential evolution algorithm to obtain the next generation of the experimental parameters and the phase space density pairs until an evolution parameter set consisting of a plurality of generations of the experimental parameters and the phase space density pairs is output;
the network training module is used for taking the evolution parameter set as a training set and a testing set of the artificial neural network, training a plurality of artificial neural networks with the same structure by using the training set, and obtaining a plurality of trained artificial neural networks;
the parameter optimization module is used for optimizing the trained artificial neural networks to obtain several optimal experimental parameters, expanding the optimal experimental parameters, inputting the expanded parameters into the atomic cooling experimental apparatus to obtain several optimal phase space densities, and selecting the experimental parameters corresponding to the maximum of these densities as the optimization result of the atomic cooling parameters of the current round.
And the termination judging module is used for judging whether the optimization termination condition is met or not, if so, terminating the optimization process, taking the optimization result of the round as a final optimization result, and if not, supplementing the parameter set of the round to the original parameter set, retraining the network and starting the next round of iteration.
According to the atomic cooling parameter online optimization method and device based on the artificial neural network, the phase space density is first used to jointly describe the atomic number density and the atomic temperature. The pairs of experimental parameters and phase space densities are evolved with a differential evolution algorithm to obtain the next generation of pairs, until an evolution parameter set consisting of several generations of pairs is output, which improves the effectiveness of the data. The evolution parameter set is then used as the training set and test set of the artificial neural networks, which reduces the amount of iterative computation, and using several artificial neural networks with the same structure reduces data disturbance and thus improves prediction accuracy. Finally, the predicted parameters are fed back to the experimental apparatus in real time, which improves the real-time performance and accuracy of the optimization process.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for online optimization of atomic cooling parameters based on an artificial neural network according to an embodiment;
FIG. 2 is a block diagram of an atomic cooling parameter online optimization device based on an artificial neural network according to an embodiment;
FIG. 3 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided an atomic cooling parameter online optimization method based on an artificial neural network, including the following steps:

Step 102: input a preset experimental parameter set into the atomic cooling experimental apparatus to obtain the phase space density corresponding to each set of experimental parameters.

The phase space density is related to the atomic number density and the atomic temperature.

Step 104: evolve the pairs of experimental parameters and phase space densities using a differential evolution algorithm to obtain the next generation of pairs, until an evolution parameter set consisting of several generations of such pairs is output.

Step 106: use the evolution parameter set as the training set and test set of the artificial neural network, and train several artificial neural networks with the same structure on the training set to obtain several trained artificial neural networks.

In this step, the number of artificial neural networks can be set according to actual requirements; for example, 3 artificial neural networks may be used.

Step 108: perform global optimization on the trained artificial neural networks using a genetic algorithm to obtain several optimal experimental parameters.

Step 110: expand the optimal experimental parameters and input the expanded parameters into the atomic cooling experimental apparatus to obtain several optimal phase space densities; select the experimental parameters corresponding to the maximum of these densities as the optimization result of the atomic cooling parameters of the current round.

Step 112: judge whether the optimization termination condition is met; if so, terminate the optimization process and take the optimization result of this round as the final result; if not, supplement the parameter set of this round to the original parameter set, retrain the network, and start the next round of iteration.
In the atomic cooling parameter online optimization method based on the artificial neural network, the phase space density is first used to jointly describe the atomic number density and the atomic temperature. The pairs of experimental parameters and phase space densities are evolved with a differential evolution algorithm to obtain the next generation of pairs, until an evolution parameter set consisting of several generations of pairs is output, which improves the effectiveness of the data. The evolution parameter set is then used as the training set and test set of the artificial neural networks, which reduces the amount of iterative computation, and using several artificial neural networks with the same structure reduces data disturbance and thus improves prediction accuracy. Finally, the predicted parameters are input into the experimental apparatus, which improves the real-time performance and accuracy of the optimization process.
In one embodiment, the phase space density is

$\rho = n \lambda_{dB}^3$, with $\lambda_{dB} = h / \sqrt{2\pi m k_B T}$

where $\rho$ represents the phase space density, $\lambda_{dB}$ represents the thermal de Broglie wavelength, $h$ is the Planck constant, $m$ is the atomic mass, $k_B$ is the Boltzmann constant, $T$ is the atomic temperature, and $n$ represents the atomic number density.
Specifically, the atomic cooling experimental apparatus is a complete experimental system for atom cooling, generally including a laser system, a vacuum system, a timing control system, a magnetic shield (or a compensation magnetic field), a magneto-optical trap, and other accessory devices and elements. To evaluate the atomic number density and the atomic temperature together, the phase space density $\rho$ is used, which combines the atomic number density $n$ with the atomic temperature $T$. The atomic temperature is generally measured by the Time-Of-Flight (TOF) method; to speed up the measurement and reduce the computation of each optimization cycle, the size of the atomic cloud is measured at only two TOF times $t_1$ and $t_2$, and the temperature is then calculated from $T = m(\sigma_2^2 - \sigma_1^2)/(k_B(t_2^2 - t_1^2))$, where $\sigma_i$ is the cloud size measured at time $t_i$. As can be seen from the above, the index $\rho$ comprehensively reflects the atomic number density and the atomic temperature: the larger $\rho$, the higher the atomic number density and the lower the atomic temperature. The optimization objective is therefore to find a group of experimental parameters such that $\rho$ takes its maximum value.
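The two-point time-of-flight temperature estimate described above can be sketched as follows, assuming the standard ballistic-expansion relation $\sigma(t)^2 = \sigma_0^2 + (k_B T/m)\,t^2$ (the exact formula image in the source did not survive extraction; this is the conventional form):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def tof_temperature(mass_kg, t1, sigma1, t2, sigma2):
    """Atomic temperature from two time-of-flight cloud sizes.

    From sigma(t)^2 = sigma0^2 + (kB*T/m)*t^2 it follows that
    T = m*(sigma2^2 - sigma1^2) / (kB*(t2^2 - t1^2)),
    where sigma_i is the cloud size (e.g. Gaussian radius) at time t_i.
    """
    return mass_kg * (sigma2 ** 2 - sigma1 ** 2) / (KB * (t2 ** 2 - t1 ** 2))
```

Only two images are needed per optimization cycle, which is why the method keeps the per-cycle measurement cost low.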
In one embodiment, a plurality of parameter groups composed of different experimental parameters are selected, and a plurality of mutation vectors are calculated through a preset mutation formula. A random integer $R$ (ranging from 1 to $V$, the number of parameters per vector) and a random number $r$ (ranging from 0 to 1) are generated, and $u^1_{i,R} = v^1_{i,R}$ is set. It is judged whether $r \le CR$ (the preset crossover probability) or $j = R$ holds: if so, $u^1_{i,j} = v^1_{i,j}$ is set, where $u^1_{i,j}$ denotes the $j$-th parameter of the $i$-th crossover parameter vector of generation 1 and $v^1_{i,j}$ denotes the $j$-th parameter of the $i$-th mutation vector of generation 1; if not, $u^1_{i,j} = x^0_{i,j}$ is set, where $x^0_{i,j}$ denotes the $j$-th parameter of the $i$-th initial experimental parameter vector. When $j = V$, the generation-1 crossover parameter vector set $U^1 = \{u^1_1, \dots, u^1_N\}$ is generated. Each crossover parameter vector $u^1_i$ is input into the experimental apparatus to obtain the optimization index $\rho(u^1_i)$, and $\rho(u^1_i)$ is compared with $\rho(x^0_i)$: if $\rho(u^1_i) > \rho(x^0_i)$, then $x^1_i = u^1_i$; otherwise $x^1_i = x^0_i$. When $i = N$, the generation-1 evolution data set $\{(x^1_i, \rho^1_i)\}$, $i = 1, \dots, N$, is generated, where $x^1_i$ denotes the $i$-th experimental parameter vector of generation 1, $\rho^1_i$ denotes the $i$-th phase space density of generation 1, and $N$ denotes the number of mutation vectors. By repeating the above steps, the evolution parameter set is formed from multiple generations of evolution parameters.
In one embodiment, the data in the evolution parameter set are shuffled and, after proportional rounding, divided into a training set and a test set.
In one embodiment, when the amount of data in the training set is smaller than a preset value, the several artificial neural networks with the same structure are trained with K-fold cross validation to obtain the trained networks; together, the identically structured networks form a random neural network.

Specifically, a random neural network (SANN) formed from several artificial neural networks (ANN) can eliminate the random disturbance caused by weight initialization. The network structure of each ANN must be consistent, and to improve optimization efficiency the hidden layers of each ANN should not exceed 5. Each ANN initializes its weights independently, but the other learning parameters must be kept consistent. The hyper-parameters of the neural network, such as the total number of neurons, the number of network layers, the learning rate, the maximum number of training epochs, the evaluation index, and the activation function, need to be adjusted flexibly according to the actual performance requirements.
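A toy illustration of the SANN idea follows: several identically structured networks with independent weight initializations whose predictions are averaged. The network size, learning rate, and the linear toy target are illustrative assumptions, not the configuration used in the source:

```python
import numpy as np

class TinyANN:
    """One-hidden-layer regressor; all ensemble members share this structure
    and learning parameters but draw independent initial weights."""
    def __init__(self, n_in, n_hidden, seed, lr=0.05):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.w1 + self.b1)
        return self.h @ self.w2 + self.b2

    def train_step(self, x, y):
        """One full-batch gradient-descent step on the squared error."""
        err = self.forward(x) - y
        gw2 = self.h.T @ err / len(x)
        gb2 = err.mean(0)
        dh = (err @ self.w2.T) * (1.0 - self.h ** 2)   # tanh' = 1 - h^2
        gw1 = x.T @ dh / len(x)
        gb1 = dh.mean(0)
        self.w2 -= self.lr * gw2; self.b2 -= self.lr * gb2
        self.w1 -= self.lr * gw1; self.b1 -= self.lr * gb1

def sann_predict(nets, x):
    """Average the member predictions to suppress initialization noise."""
    return np.mean([net.forward(x) for net in nets], axis=0)
```

By convexity of the squared error, the ensemble's error never exceeds the mean of the members' errors, which is the sense in which averaging suppresses initialization disturbance.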
In one embodiment, the trained ANNs are globally optimized by a genetic algorithm to obtain the corresponding optimal experimental parameters; one of the ANNs is taken as an example. The variation range $[x_{min}, x_{max}]$ of each optimized parameter is determined, and each random variable is binary-coded according to its range. A population $P^0 = \{p^0_1, \dots, p^0_N\}$ is randomly initialized, where $N$ is the population size; for efficient implementation of the genetic algorithm, $N$ should be even. The initial population is brought into the ANN to solve the corresponding fitness values $f(p^0_i)$. Individuals with higher fitness are screened out by roulette wheel to form a new population $S^0$. With a crossover probability of 0.6 (adjustable flexibly), a crossover operation is performed on each individual of $S^0$ to form the crossover population $C^0$. With a mutation probability of 0.001 (adjustable flexibly), a mutation operation is performed on each individual of $C^0$ to form the mutation population $M^0$. The mutation population is the new generation evolved by the genetic algorithm and can more generally be written as $P^1$, or $P^n$ for generation $n$. The individuals of the evolved population $P^n$ are brought into the trained ANN model in turn, the corresponding fitness values are solved, and the standard deviation $\sigma_f$ of the population fitness of the current generation is computed. The iteration is repeated to evolve further generations $P^n$, where $n$ is the generation index of the genetic algorithm. When the fitness standard deviation of roughly the last 5 generations no longer changes significantly (e.g., by less than 0.001), the evolution is terminated, and the individual with the maximum fitness among all population individuals is selected as the final optimization result.
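A compact sketch of the genetic-algorithm loop described above (binary coding, roulette selection, crossover probability 0.6, mutation probability 0.001) for a single scalar parameter; the ANN fitness model is replaced by an arbitrary callable, and the fixed generation budget is a simplification of the standard-deviation stop rule:

```python
import random

def ga_maximize(fitness, lo, hi, bits=16, pop_size=20, pc=0.6, pm=0.001,
                generations=60, rng=random):
    """Maximize fitness(x) for x in [lo, hi] with a binary-coded GA:
    roulette-wheel selection, one-point crossover (prob. pc), and
    bit-flip mutation (prob. pm per bit). pop_size should be even."""
    def decode(g):
        return lo + (hi - lo) * int("".join(map(str, g)), 2) / (2 ** bits - 1)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    best = max(pop, key=lambda g: fitness(decode(g)))[:]
    for _ in range(generations):
        fits = [fitness(decode(g)) for g in pop]
        fmin = min(fits)
        weights = [f - fmin + 1e-12 for f in fits]  # shift so weights > 0
        # roulette-wheel selection
        pop = [rng.choices(pop, weights=weights)[0][:] for _ in range(pop_size)]
        # one-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                cut = rng.randrange(1, bits)
                pop[i][cut:], pop[i + 1][cut:] = pop[i + 1][cut:], pop[i][cut:]
        # bit-flip mutation
        for g in pop:
            for j in range(bits):
                if rng.random() < pm:
                    g[j] ^= 1
        cand = max(pop, key=lambda g: fitness(decode(g)))
        if fitness(decode(cand)) > fitness(decode(best)):
            best = cand[:]
    return decode(best)
```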
In one embodiment, new optimal experimental parameters are generated from the optimal experimental parameters through the mutation operation of the differential evolution algorithm.
In one embodiment, the amount of data in the evolution parameter set is equal to the amount of data in the experimental parameter set.
The technical solution of the present invention is further illustrated by a specific example.

An atom cooling parameter online optimization method based on an artificial neural network comprises the following specific steps:

Step 1: When the experimental parameters are randomly initialized, the variation range of each group of parameters is determined from prior experience, and each range is uniformly sampled to obtain the initial parameter values. To use the subsequent differential evolution algorithm, the number of initial parameter sets $N$ cannot be lower than 3; its maximum value affects the initialization efficiency and is generally 10 to 50.
Step 2: The interval between successive experimental cycles is 10 ms to 1 s (the specific time is determined by the actual performance of the experimental apparatus used), so as to eliminate mutual influence or coupling between the groups of experimental parameters.
Step 3: The mutation (scale) factor $F$ and the crossover probability $CR$ of the differential evolution algorithm can be set to 0.5 and 0.1 respectively, and can also be adjusted flexibly according to the actual effect; the parameter set then undergoes differential evolution according to the following steps:
step 3.1 in parameter set
Figure SMS_107
Randomly selecting three different parameter vectors
Figure SMS_108
Wherein a, b and c are different from each other.
Step 3.2 according to the variation formula
Figure SMS_109
Generating new mutated parameter vectors
Figure SMS_110
Step 3.3 repeat steps 3.1, 3.2
Figure SMS_111
Next, generating other variation vectors
Figure SMS_112
Figure SMS_113
Step 3.4 to ensure that at least one mutated gene is inherited by the next generation, a mutation of between 1 and 1 is first generated
Figure SMS_114
(containing 1 and
Figure SMS_115
) Random number of
Figure SMS_116
And make an order
Figure SMS_117
Step 3.5: Generate a random number $r$ between 0 and 1, and judge whether the condition $r \le CR$ or $j = R$ is satisfied. If it is, let $u^1_{i,j} = v^1_{i,j}$, where $u^1_{i,j}$ denotes the $j$-th parameter of the $i$-th crossover parameter vector of generation 1 and $v^1_{i,j}$ denotes the $j$-th parameter of the $i$-th mutation vector of generation 1; otherwise, let $u^1_{i,j} = x^0_{i,j}$.
Step 3.6: Let $j = j + 1$ and repeat step 3.5 until $j = V$, thereby generating the generation-1 crossover parameter vector set $U^1 = \{u^1_1, \dots, u^1_N\}$.
Step 3.7: Let $i = 1$.
Step 3.8: Bring $u^1_i$ into the experimental apparatus to obtain the optimization index $\rho(u^1_i)$; compare $\rho(u^1_i)$ with $\rho(x^0_i)$: if $\rho(u^1_i) > \rho(x^0_i)$, then $x^1_i = u^1_i$; otherwise $x^1_i = x^0_i$.
Step 3.9: Let $i = i + 1$ and repeat step 3.8 until $i = N$, thereby generating the generation-1 evolution data set $\{(x^1_i, \rho^1_i)\}$, $i = 1, \dots, N$.
Step 4: The termination criterion for data collection can be determined according to the total number of ANN neurons or according to the expected experimental indices (atomic number density and atomic temperature); for example, if the total number of neurons of each ANN is $W$, the iteration can be terminated once the size of the accumulated data set is sufficiently large relative to $W$ (the exact threshold is preset).
Step 5: Before the data set $D$ is fed into the ANNs for training, its original order is shuffled; after proportional rounding, it is divided into a training set and a validation set.
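Step 5 (shuffle, then split by proportion with rounding) can be sketched as follows; the 80/20 split in the test is an illustrative assumption:

```python
import random

def split_dataset(pairs, train_frac=0.8, rng=random):
    """Shuffle the (parameters, phase-space-density) pairs, then split them
    by proportion (with rounding) into a training set and a validation set."""
    data = list(pairs)
    rng.shuffle(data)
    k = round(train_frac * len(data))
    return data[:k], data[k:]
```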
Step 6: Points to note in ANN training:
6.1 If the data set is not large enough, the training mechanism can adopt K-fold cross validation.
6.2 A random neural network (SANN) formed from several ANNs can eliminate the random disturbance caused by weight initialization; the network structures of all ANNs must be consistent, and to improve optimization efficiency it is recommended that the hidden layers of each ANN not exceed 5.
6.3 Each ANN should initialize its weights independently, but the other learning parameters must be kept consistent.
6.4 The hyper-parameters of the neural networks, such as the total number of neurons, the number of network layers, the learning rate, the maximum number of training epochs, the evaluation index, and the activation function, need to be adjusted flexibly according to the actual performance requirements.
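The K-fold cross validation mentioned in 6.1 can be sketched as an index partitioner; fold sizes are near-equal and every sample serves as validation exactly once:

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices into k near-equal folds; fold i serves as
    the validation set while the remaining k-1 folds train the network."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val
```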
And 7: global optimization is carried out on a plurality of trained ANNs through a genetic algorithm, a plurality of corresponding optimal experimental parameters are obtained respectively, and one of the ANNs is taken as an example for explanation:
7.1 determining the variation range of each optimized parameter:
Figure SMS_149
7.2 according to the variation range, carrying out binary coding on each random variable;
7.3 random initialization population
Figure SMS_150
Where N is the population number, for efficient implementation of genetic algorithms,
Figure SMS_151
an even number is required;
7.4 bringing the initial population into ANN to solve the corresponding fitness
Figure SMS_152
7.5 screening out individuals with higher fitness in a roulette mode to form a new population:
Figure SMS_153
7.6 setting the cross probability 0.6 (flexible adjustment), for the population
Figure SMS_154
Performing cross operation on each individual to form a cross population:
Figure SMS_155
7.7 set mutation probability 0.001 (flexible adjustment), for population
Figure SMS_156
Performing mutation operation on each individual to form a mutation population:
Figure SMS_157
the variant population is a new generation population evolved by genetic algorithm, and more generally, can be expressed as
Figure SMS_158
Or
Figure SMS_159
7.8 evolving populations
Figure SMS_160
The individuals are sequentially brought into the trained ANN model, and the corresponding fitness is solved:
Figure SMS_161
and then solving the standard deviation of the population fitness of the current generation
Figure SMS_162
7.9 Repeat 7.5-7.8 to evolve more generations of population
Figure SMS_163
Wherein
Figure SMS_164
Is the nth generation of genetic algorithm;
7.10 Standard deviation of fitness of approximately 5 generations
Figure SMS_165
No longer changing significantly (e.g., less than 0.001), the evolution is terminated;
7.11 selecting the individual with the maximum fitness from all population individuals as the final optimization result of the ANN.
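The loop of steps 7.1-7.11 can be sketched as follows. The fitness function here is a toy surrogate standing in for the trained ANN's predicted phase space density, and the 16-bit encoding, 0..1 parameter range, and generation cap are illustrative assumptions, not values fixed by the patent:

```python
import random
import statistics

random.seed(0)
N_BITS, POP, P_CROSS, P_MUT = 16, 20, 0.6, 0.001   # POP must be even (step 7.3)
LO, HI = 0.0, 1.0                                   # hypothetical parameter range (7.1)

def decode(bits):
    """Binary code -> real parameter value (step 7.2)."""
    return LO + (HI - LO) * int(bits, 2) / (2 ** N_BITS - 1)

def fitness(bits):
    """Stand-in for the trained ANN's predicted phase space density."""
    x = decode(bits)
    return 1.0 - (x - 0.7) ** 2                     # toy surrogate with a peak at x = 0.7

def roulette(pop, fits):
    """Fitness-proportional (roulette-wheel) selection, step 7.5."""
    return random.choices(pop, weights=fits, k=len(pop))

def crossover(pop):
    """Single-point crossover on consecutive pairs, step 7.6."""
    out = []
    for a, b in zip(pop[::2], pop[1::2]):
        if random.random() < P_CROSS:
            cut = random.randrange(1, N_BITS)
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        out += [a, b]
    return out

def mutate(pop):
    """Independent bit-flip mutation, step 7.7."""
    return ["".join("10"[int(c)] if random.random() < P_MUT else c for c in ind)
            for ind in pop]

pop = ["".join(random.choice("01") for _ in range(N_BITS)) for _ in range(POP)]
prev_sigma, best = None, max(pop, key=fitness)
for gen in range(200):
    fits = [fitness(ind) for ind in pop]
    best = max(pop, key=fitness)
    sigma = statistics.pstdev(fits)                 # fitness spread, step 7.8
    if prev_sigma is not None and abs(sigma - prev_sigma) < 1e-3:
        break                                       # convergence test of step 7.10
    prev_sigma = sigma
    pop = mutate(crossover(roulette(pop, fits)))    # one generation, steps 7.5-7.7
```

The even population size matters because crossover consumes individuals in pairs; an odd population would silently drop one individual per generation.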
And 8: the generation of the evolution parameter set comprises the following specific steps:
8.1 Assuming a total of 3 ANNs, each ANN finds an optimal parameter
Figure SMS_166
Figure SMS_167
Figure SMS_168
8.2 Generating new parameters through mutation operations in the differential evolution algorithm:
Figure SMS_169
8.3 Parameter(s)
Figure SMS_170
Figure SMS_171
Figure SMS_172
Figure SMS_173
Co-forming evolution parameter sets
Figure SMS_174
Step 9: The termination judgment proceeds as follows:
9.1 Input the evolution parameter set X* into the atomic cooling experimental apparatus to obtain a plurality of optimal phase space densities ρ_1*, ρ_2*, ...;
9.2 Select the experimental parameter corresponding to the maximum of these values as this round's atomic-cooling-parameter optimization result;
9.3 Judge whether the optimization termination condition is met. If so, terminate the optimization process and take this round's result as the final optimization result; if not, supplement this round's parameter set to the original parameter set, retrain the network, and start the next round of iteration.
The termination condition generally consists of the following three items; if any one of them is satisfied, the iteration is terminated:
(1) the standard deviation of the optimization results of the last 5 rounds is better (smaller) than the expected minimum standard deviation;
(2) the maximum number of rounds is reached;
(3) the optimization results of 10 consecutive rounds are no better than the best value obtained previously.
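The three termination items can be sketched as one predicate over the per-round optimization history; the round limit, spread threshold, and stall length are placeholder values, since the patent leaves them to be chosen:

```python
import statistics

MAX_ROUNDS, SIGMA_MIN, STALL_ROUNDS = 100, 1e-3, 10   # assumed limits

def should_stop(history, round_idx):
    """history: best phase space density of each completed round, oldest first."""
    # (1) the last 5 rounds vary less than the expected minimum standard deviation
    if len(history) >= 5 and statistics.pstdev(history[-5:]) < SIGMA_MIN:
        return True
    # (2) the maximum number of rounds has been reached
    if round_idx >= MAX_ROUNDS:
        return True
    # (3) 10 consecutive rounds failed to beat the best value found before them
    if len(history) > STALL_ROUNDS and \
            max(history[-STALL_ROUNDS:]) <= max(history[:-STALL_ROUNDS]):
        return True
    return False
```

Checking the stall condition against `max(history[:-STALL_ROUNDS])` rather than the global maximum matters: the global maximum may lie inside the last 10 rounds, which would never count as a stall.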
It should be understood that although the steps in the flowchart of fig. 1 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 2, there is provided an atomic cooling parameter online optimization device based on an artificial neural network, including: an optimization goal determination module 202, a sample construction module 204, a network training module 206, a parameter optimization module 208, and a termination discrimination module 210, wherein:
an optimization target determining module 202, configured to input a preset experiment parameter set into an atomic cooling experiment apparatus, so as to obtain a phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
a sample construction module 204, configured to utilize a differential evolution algorithm to evolve the data pairs of the experimental parameters and the phase space density to obtain next generation of the experimental parameter and the phase space density pairs until an evolution parameter set composed of a plurality of generations of the experimental parameters and the phase space density pairs is output;
a network training module 206, configured to use the evolution parameter set as a training set and a test set of an artificial neural network, and train a plurality of artificial neural networks with the same structure using the training set to obtain a plurality of trained artificial neural networks;
the parameter optimization module 208 is configured to obtain a plurality of optimal experimental parameters by optimizing the trained artificial neural network; expanding the optimal experiment parameters, inputting the expanded optimal experiment parameters into an atomic cooling experiment device to obtain a plurality of optimal phase space densities, and selecting the experiment parameters corresponding to the maximum value of the optimal phase space densities as the optimization results of the atomic cooling parameters in the round.
and a termination judging module 210, configured to judge whether the optimization termination condition is met; if so, terminate the optimization process and take this round's result as the final optimization result; if not, supplement this round's parameter set to the original parameter set, retrain the network, and start the next round of iteration.
In one embodiment, the phase space density is:

ρ = n·λ³, with λ = h/√(2π·m·k_B·T)

wherein ρ represents the phase space density, λ represents the de Broglie wavelength, h is the Planck constant, m is the atomic mass, k_B is the Boltzmann constant, T is the atomic temperature, and n represents the atomic number density.
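The optimization target above is direct to compute. The sketch below evaluates it for numbers of a plausible order for a laser-cooled rubidium cloud; the species, density, and temperature are illustrative assumptions, not values from the patent:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
KB = 1.380649e-23     # Boltzmann constant, J/K

def phase_space_density(n, temperature, mass):
    """rho = n * lambda**3 with lambda = h / sqrt(2*pi*m*k_B*T)."""
    lam = H / math.sqrt(2.0 * math.pi * mass * KB * temperature)
    return n * lam ** 3

m_rb = 87 * 1.66053906660e-27   # hypothetical species: 87Rb atomic mass, kg
psd = phase_space_density(n=1e17, temperature=10e-6, mass=m_rb)   # n in m^-3, T in K
```

Since λ ∝ T^(-1/2), the phase space density scales as n·T^(-3/2): both a denser and a colder cloud raise the optimization index, which is why it combines atomic number density and temperature into a single figure of merit.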
In one embodiment, the sample construction module 204 is further configured to select a plurality of parameter groups composed of different experimental parameters and to calculate a plurality of groups of mutation parameters through a preset mutation formula;
generate random numbers R and r and set u_iR = v_iR, wherein R ranges from 1 to V (the number of parameters per vector) and r ranges from 0 to 1;
judge whether r ≤ CR (the preset crossover probability) or j = R; if so, set u_ij = v_ij, where u_ij denotes the j-th parameter of the i-th crossover parameter vector of generation 1 and v_ij denotes the j-th parameter of the i-th mutation parameter of generation 1; if not, set u_ij = x_ij, where x_ij denotes the j-th parameter of the i-th initial experimental parameter;
when j = V, the generation-1 crossover parameter vector set U = {u_1, u_2, ..., u_N} is generated;
input each crossover parameter vector u_i into the experimental apparatus to obtain the optimization index ρ(u_i), and compare ρ(u_i) with ρ(x_i): if ρ(u_i) > ρ(x_i), then set x_i' = u_i; otherwise x_i' = x_i;
when i = N, the generation-1 evolution data set {(x_i', ρ_i')}, i = 1, ..., N, is generated, where x_i' denotes the i-th experimental parameter of generation 1, ρ_i' denotes the i-th phase space density of generation 1, and N denotes the number of groups of mutation parameters.
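The crossover-then-greedy-selection rule described above can be sketched as follows. The surrogate `index` function stands in for the experimental phase-space-density measurement, and CR = 0.8 is an assumed value for the preset crossover probability:

```python
import random

random.seed(1)
CR = 0.8    # assumed crossover probability; the patent leaves it a preset value

def de_crossover(x, v, cr=CR):
    """Binomial crossover: u_j = v_j if r <= cr or j == R, else x_j."""
    dim = len(x)
    forced = random.randrange(dim)    # index R, guarantees u differs from x somewhere
    return [v[j] if (random.random() <= cr or j == forced) else x[j]
            for j in range(dim)]

def de_select(x, u, index):
    """Greedy selection: keep the trial vector only if it improves the index."""
    return u if index(u) > index(x) else x

# Stand-in for the experiment's phase-space-density measurement (hypothetical surrogate).
index = lambda p: -sum((pi - 0.5) ** 2 for pi in p)

x = [0.1, 0.9, 0.3]                   # current experimental parameter vector
u = de_crossover(x, [0.5, 0.5, 0.5])  # trial vector built from a mutant [0.5, 0.5, 0.5]
x_next = de_select(x, u, index)
```

The forced index R is what makes the crossover useful even at low CR: without it, a trial vector could equal the parent exactly and the experiment would be wasted on a duplicate measurement.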
In one embodiment, the network training module 206 is further configured to shuffle the data in the evolution parameter set and split it into a training set and a test set according to a preset ratio, rounding to whole samples.
In one embodiment, the network training module 206 is further configured to, when the amount of data in the training set is smaller than a preset value, train the plurality of structurally identical artificial neural networks by K-fold cross validation to obtain a plurality of trained artificial neural networks; the plurality of structurally identical artificial neural networks form a random neural network.
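The K-fold idea behind this embodiment can be sketched in a few lines; the 20-sample set and 5 networks are hypothetical sizes chosen to make the splits visible:

```python
def k_fold_splits(n_samples, k):
    """Partition sample indices into k folds; each fold is the validation set once."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for held_out in range(k):
        train = [idx for f in range(k) if f != held_out for idx in folds[f]]
        yield train, folds[held_out]

# Hypothetical: a 20-sample evolution parameter set and 5 structurally identical ANNs,
# each trained on a different train/validation split of the same small data set.
splits = list(k_fold_splits(n_samples=20, k=5))
```

Each network then sees a different 16/4 partition of the same scarce experimental data, which is how the ensemble extracts validation signal without sacrificing a fixed hold-out set.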
In one embodiment, the parameter optimization module 208 is further configured to generate a new optimal experimental parameter through a variation operation in a differential evolution algorithm according to the optimal experimental parameter.
In one embodiment, the termination judging module 210 is further configured to judge whether an optimization termination condition is satisfied, and obtain a final optimization result.
In one embodiment, the amount of data in the evolution parameter set is equal to the amount of data in the experimental parameter set.
For specific limitations of the online atomic cooling parameter optimization device based on the artificial neural network, reference may be made to the above limitations of the online atomic cooling parameter optimization method based on the artificial neural network, and details are not repeated here. The modules in the artificial neural network-based atomic cooling parameter online optimization device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an artificial neural network-based atomic cooling parameter online optimization method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 3 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method in the above embodiments when the processor executes the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method in the above-mentioned embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (9)

1. An atom cooling parameter online optimization method based on an artificial neural network is characterized by comprising the following steps:
inputting a preset experiment parameter set into an atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
evolving the data pairs of the experimental parameters and the phase space density by using a differential evolution algorithm to obtain next generation of pairs of the experimental parameters and the phase space density until an evolution parameter set consisting of a plurality of generations of pairs of the experimental parameters and the phase space density is output;
taking the evolution parameter set as a training set and a testing set of the artificial neural network, and training a plurality of artificial neural networks with the same structure by using the training set to obtain a plurality of trained artificial neural networks;
global optimization is carried out on the trained artificial neural network through a genetic algorithm to obtain a plurality of optimal experimental parameters;
expanding the optimal experiment parameters, inputting the expanded optimal experiment parameters into an atomic cooling experiment device to obtain a plurality of optimal phase space densities, and selecting the experiment parameters corresponding to the maximum value of the optimal phase space densities as the optimization results of the atomic cooling parameters in the round;
and judging whether the optimization termination condition is met, if so, terminating the optimization process, taking the optimization result of the round as a final optimization result, if not, supplementing the parameter set of the round to the original parameter set, retraining the network, and starting the next round of iteration.
2. The method of claim 1, wherein the phase space density is:
ρ = n·λ³, with λ = h/√(2π·m·k_B·T)

wherein ρ represents the phase space density, λ represents the de Broglie wavelength, h is the Planck constant, m is the atomic mass, k_B is the Boltzmann constant, T is the atomic temperature, and n represents the atomic number density.
3. The method of claim 1, wherein evolving the experimental parameter and the phase space density pair using a differential evolution algorithm to obtain a next generation of the experimental parameter and the phase space density pair comprises:
selecting a plurality of parameter groups composed of different experimental parameters, and calculating a plurality of groups of mutation parameters through a preset mutation formula;
generating random numbers R and r and setting u_iR = v_iR, wherein R ranges from 1 to V (the number of parameters per vector) and r ranges from 0 to 1;
judging whether r ≤ CR (the preset crossover probability) or j = R; if so, setting u_ij = v_ij, where u_ij denotes the j-th parameter of the i-th crossover parameter vector of generation 1 and v_ij denotes the j-th parameter of the i-th mutation parameter of generation 1; if not, setting u_ij = x_ij, where x_ij denotes the j-th parameter of the i-th initial experimental parameter;
when j = V, generating the generation-1 crossover parameter vector set U = {u_1, u_2, ..., u_N};
inputting each crossover parameter vector u_i into the experimental apparatus to obtain the optimization index ρ(u_i), and comparing ρ(u_i) with ρ(x_i): if ρ(u_i) > ρ(x_i), then setting x_i' = u_i; otherwise x_i' = x_i;
when i = N, generating the generation-1 evolution data set {(x_i', ρ_i')}, i = 1, ..., N, where x_i' denotes the i-th experimental parameter of generation 1, ρ_i' denotes the i-th phase space density of generation 1, and N denotes the number of groups of mutation parameters;
and repeating the above operations and continuing the evolution until the number of evolution generations meets the termination condition, obtaining the evolution parameter set.
4. The method of claim 1, wherein using the set of evolutionary parameters as a training set and a testing set of artificial neural networks comprises:
and disturbing the data in the evolution parameter set, and rounding the data according to the proportion to obtain a training set and a test set.
5. The method of claim 1, wherein training a plurality of structurally identical artificial neural networks using the training set to obtain a plurality of trained artificial neural networks comprises:
when the data in the training set is smaller than a preset value, training the plurality of artificial neural networks with the same structure by adopting a K-fold cross validation mode to obtain a plurality of trained artificial neural networks; wherein the plurality of artificial neural networks with the same structure form a random neural network.
6. The method of claim 1, wherein the trained artificial neural network is globally optimized by a genetic algorithm to obtain a plurality of optimal experimental parameters, comprising:
determining the variation range [x_min, x_max] of each optimization parameter;
binary-encoding each random variable according to its variation range;
randomly initializing the population P0 = {x_1, x_2, ..., x_N}, where N is the population size and, for an efficient genetic-algorithm implementation, must be even;
inputting the initial population into the artificial neural network and solving the corresponding fitness values f(x_1), ..., f(x_N);
screening out the individuals with high fitness by roulette-wheel selection to form a new population P_s;
setting the crossover probability and performing the crossover operation on each individual of P_s to form the crossover population P_c;
setting the mutation probability and performing the mutation operation on each individual of P_c to form the mutation population P_m;
the mutation population is the new generation evolved by the genetic algorithm and, more generally, can be written as P_1 or, for later generations, P_n;
bringing the individuals of the evolved population P_n into the trained artificial neural network in turn, solving the corresponding fitness values, and then solving the standard deviation σ_n of the current generation's population fitness;
repeating the iteration to evolve multi-generation populations P_1, P_2, ..., P_n, where n is the generation index of the genetic algorithm;
stopping the evolution when the fitness standard deviation σ_n of multiple successive generations no longer changes significantly;
and selecting the individual with the maximum fitness from all population individuals as the final optimization result of the artificial neural network.
7. The method according to any one of claims 1 to 5, wherein the expanding of the optimal experimental parameters comprises:
and generating new optimal experimental parameters through variation operation in a differential evolution algorithm according to the optimal experimental parameters.
8. The method of any one of claims 1 to 5, wherein the amount of data in the evolution parameter set is equal to the amount of data in the experimental parameter set.
9. An atomic cooling parameter online optimization device based on an artificial neural network is characterized by comprising:
the optimization target determining module is used for inputting a preset experiment parameter set into the atomic cooling experiment device to obtain the phase space density corresponding to each experiment parameter; the phase space density is related to atomic number density and atomic temperature;
the sample construction module is used for carrying out evolution on the experiment parameters and the phase space density pairs by utilizing a differential evolution algorithm to obtain next generation of data pairs of the experiment parameters and the phase space density until an evolution parameter set consisting of a plurality of generations of the experiment parameters and the phase space density pairs is output;
the network training module is used for taking the evolution parameter set as a training set and a testing set of the artificial neural network, training a plurality of artificial neural networks with the same structure by using the training set, and obtaining a plurality of trained artificial neural networks;
the parameter optimization module is used for optimizing the trained artificial neural network to obtain a plurality of optimal experimental parameters; expanding the optimal experiment parameters, inputting the expanded optimal experiment parameters into an atomic cooling experiment device to obtain a plurality of optimal phase space densities, and selecting the experiment parameters corresponding to the maximum value of the optimal phase space densities as the optimization results of the atomic cooling parameters in the round;
and the termination judging module is used for judging whether the optimization termination condition is met or not, if so, terminating the optimization process, taking the optimization result of the round as a final optimization result, and if not, supplementing the parameter set of the round to the original parameter set, retraining the network and starting the next round of iteration.
CN202310089181.9A 2023-02-09 2023-02-09 Atom cooling parameter online optimization method and device based on artificial neural network Pending CN115796269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310089181.9A CN115796269A (en) 2023-02-09 2023-02-09 Atom cooling parameter online optimization method and device based on artificial neural network

Publications (1)

Publication Number Publication Date
CN115796269A true CN115796269A (en) 2023-03-14

Family

ID=85430686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310089181.9A Pending CN115796269A (en) 2023-02-09 2023-02-09 Atom cooling parameter online optimization method and device based on artificial neural network

Country Status (1)

Country Link
CN (1) CN115796269A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200161446A1 (en) * 2018-11-20 2020-05-21 ColdQuanta, Inc. Quantum tunneling matter-wave transistor system
CN113268925A (en) * 2021-05-18 2021-08-17 南京邮电大学 Dynamic soft measurement method based on differential evolution algorithm time delay estimation
CN113449930A (en) * 2021-07-27 2021-09-28 威海长和光导科技有限公司 Optical fiber preform preparation quality prediction method based on BP neural network
CN114861881A (en) * 2022-05-06 2022-08-05 那一麟 Method for optimizing super-cold atom evaporative cooling parameters by applying machine learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A.D. TRANTER ET AL.: "Multiparameter optimisation of a magneto-optical trap using deep learning" *
AJ BARKER ET AL.: "Applying machine learning optimization methods to the production of a quantum gas" *
Zhu Ruogu et al.: "Laser Application Technology", National Defense Industry Press *
Pan Jiansong: "Research on quantum simulation based on ultracold atoms" *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20230314)