WO2024066143A1 - Method, apparatus, device and storage medium for predicting molecular collision cross sections - Google Patents

Method, apparatus, device and storage medium for predicting molecular collision cross sections

Info

Publication number
WO2024066143A1
WO2024066143A1 (PCT/CN2023/072743)
Authority
WO
WIPO (PCT)
Prior art keywords
collision cross section
preset
neural network
Prior art date
Application number
PCT/CN2023/072743
Other languages
English (en)
French (fr)
Inventor
孙东伟
李兴文
周永言
张博雅
唐念
郝迈
Original Assignee
广东电网有限责任公司
广东电网有限责任公司电力科学研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东电网有限责任公司 and 广东电网有限责任公司电力科学研究院
Publication of WO2024066143A1 publication Critical patent/WO2024066143A1/zh


Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16CCOMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C20/00Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
    • G16C20/20Identification of molecular entities, parts thereof or of chemical compositions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16CCOMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C10/00Computational theoretical chemistry, i.e. ICT specially adapted for theoretical aspects of quantum chemistry, molecular mechanics, molecular dynamics or the like
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16CCOMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C20/00Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
    • G16C20/70Machine learning, data mining or chemometrics

Definitions

  • the present application relates to the field of molecular collision technology, and in particular to a method, device, equipment and storage medium for predicting a molecular collision cross section.
  • the collision between an electron and a neutral molecule can cause the electron to be adsorbed onto the neutral molecule; the cross section for this process is called the adsorption cross section.
  • the adsorption cross section is one kind of collision cross section; the collision cross sections mainly include the ionization, adsorption, excitation, elastic and momentum transfer cross sections. All of these cross sections can be obtained through electron beam experiments or quantum chemical calculations. However, because both experiments and calculations carry errors, the electron group parameters computed by combining these collision cross sections differ considerably from measured values.
  • at present, for a given molecule, the related technology solves the Boltzmann equation with a compiled cross-section set to calculate the molecule's electron group parameters and compares them with experimental measurements; the cross-section set is then repeatedly corrected by hand to steadily improve the agreement between the calculated parameters and the experimental data, finally yielding a complete and self-consistent collision cross-section set.
  • this iterative correction process is very cumbersome and inefficient, and is heavily dependent on expert experience.
  • the present application provides a method, device, equipment and storage medium for predicting a molecular collision cross section, so as to solve the technical problems of low efficiency and reliance on expert experience in the correction process of a collision cross section set.
  • the present application provides a method for predicting a molecular collision cross section, comprising: generating multiple collision cross-section sets based on the collision cross-section data of multiple gases already in a preset database; calculating the electron group parameters of each collision cross-section set using a preset electron group parameter calculation tool; training a preset neural network with the electron group parameters until the loss function of the preset neural network reaches a preset convergence condition, to obtain a collision cross-section prediction model; and using the collision cross-section prediction model to predict the target collision cross-section data of a target gas.
  • generating multiple groups of collision cross-section sets based on collision cross-section data of multiple gases in a preset database includes:
  • the new collision cross-section data are classified to obtain a plurality of collision cross-section sets.
  • performing weighted geometric averaging on the collision cross-section data to generate new collision cross-section data includes: using a preset weighted geometric mean function and a target random number to perform weighted geometric mean processing on the collision cross-section data, generating new collision cross-section data. In the preset weighted geometric mean function, σ_new(∈) is the new collision cross-section data, r is a random number in the interval (0,1), σ_i and σ_j are the i-th and j-th collision cross-section data, ∈_i and ∈_j are the threshold energies corresponding to the i-th and j-th collision cross sections, and ∈ is the energy corresponding to the new collision cross section.
  • in the calculation function of the energy corresponding to the new collision cross section, s is a random number in the interval [-1,1], ∈_min is the preset minimum energy level, and ∈_max is the preset maximum energy level.
  • calculating the electron group parameters of each set of the collision cross section sets using a preset electron group parameter calculation tool includes:
  • the electron group parameters are obtained by solving a plurality of equally logarithmically spaced reduced field intensities within a preset energy range of the collision cross section set using a preset electron group parameter calculation tool at a preset temperature.
  • the preset neural network is a fully connected neural network
  • training the preset neural network with the electron group parameters until the loss function of the preset neural network reaches a preset convergence condition, to obtain a collision cross-section prediction model, includes: normalizing the electron group parameters to obtain target electron group parameters, the electron group parameters including an effective ionization rate coefficient, an electron drift velocity and an electron longitudinal diffusion coefficient; training the fully connected neural network with the target electron group parameters as its input and the collision cross-section data as its training labels, and computing the loss function for each training pass; and if the loss function is less than a preset value, determining that the training of the fully connected neural network is complete, to obtain the collision cross-section prediction model.
  • the loss function is: loss = (1/N) Σ_{i=1}^{N} |y_i − σ(x_i)|, where loss is the output value of the loss function, N is the amount of data, y_i is the collision cross-section data used as the training label, and σ(x_i) is the output of the fully connected neural network.
  • the present application also provides a device for predicting a molecular collision cross section, comprising:
  • a generation module used for generating multiple groups of collision cross section sets based on collision cross section data of multiple gases already in a preset database
  • a calculation module used to calculate the electron group parameters of each set of collision cross section sets using a preset electron group parameter calculation tool
  • a training module used to train a preset neural network using the electron group parameters until the loss function of the preset neural network reaches a preset convergence condition, thereby obtaining a collision cross-section prediction model
  • the prediction module is used to predict the target collision cross-section data of the target gas by using the collision cross-section prediction model.
  • the present application further provides a computer device, comprising a processor and a memory, wherein the memory is used to store a computer program, and when the computer program is executed by the processor, the method for predicting a molecular collision cross section as described in the first aspect is implemented.
  • the present application further provides a computer-readable storage medium storing a computer program, which, when executed by a processor, implements the method for predicting a molecular collision cross section as described in the first aspect.
  • multiple collision cross-section sets are generated from the collision cross-section data of gases already in a preset database, and the electron group parameters of each set are calculated with a preset electron group parameter calculation tool, so that the characteristics of the electron group parameters can be analyzed from existing data. The preset neural network is then trained with the electron group parameters until its loss function reaches a preset convergence condition, yielding a collision cross-section prediction model, which is used to predict the target collision cross-section data of the target gas. Machine learning thus establishes an accurate inversion model that accelerates the acquisition of a complete collision cross-section set for a gas and reduces the subjectivity of manual correction, effectively solving the problems of low efficiency and reliance on expert experience in the existing cross-section set correction process.
  • FIG1 is a schematic flow chart of a method for predicting a molecular collision cross section according to an embodiment of the present application
  • FIG2 is a schematic diagram of new collision cross-section data shown in an embodiment of the present application.
  • FIG3 is a schematic diagram of the structure of a fully connected neural network shown in an embodiment of the present application.
  • FIG4 is a schematic diagram of the momentum transfer cross section of silane (SiH4) gas shown in an embodiment of the present application
  • FIG5 is a schematic diagram of the structure of a device for predicting molecular collision cross sections according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the structure of a computer device according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a method for predicting a molecular collision cross section provided in an embodiment of the present application.
  • the method for predicting a molecular collision cross section in an embodiment of the present application can be applied to computer devices, including but not limited to smart phones, laptops, tablet computers, desktop computers, physical servers, and cloud servers.
  • as shown in FIG. 1, the method for predicting a molecular collision cross section in this embodiment includes steps S101 to S104, described in detail as follows:
  • Step S101 generating a plurality of collision cross section sets based on collision cross section data of a plurality of gases existing in a preset database.
  • in this step, a large amount of collision cross-section data is synthesized from the electron-molecule collision cross-section data of gases already in the LXCat database, with no fewer than 10^4 data sets.
  • the step S101 includes:
  • the new collision cross-section data are classified to obtain a plurality of collision cross-section sets.
  • the neural network requires a large amount of cross-section data for training, and the collision cross-section data already in LXCat is insufficient to train the network to completion, so this application generates new collision cross-section data. To avoid generating meaningless cross-section data, the collision cross-section data of any two gases are weighted geometrically averaged using a randomly generated number r ∈ (0,1).
  • the weighted geometric mean processing includes: using a preset weighted geometric mean function and a target random number to perform weighted geometric mean processing on the collision cross-section data, generating new collision cross-section data. In the preset weighted geometric mean function, σ_new(∈) is the new collision cross-section data, r is a random number in the interval (0,1), σ_i and σ_j are the i-th and j-th collision cross-section data, ∈_i and ∈_j are the corresponding threshold energies, and ∈ is the energy corresponding to the new collision cross section.
  • the generation method can generate physically meaningful electron-molecule collision cross-section data and retain the correlation between the cross-section and the energy, thereby ensuring the validity of the collision cross-section data used for model training, and further ensuring the model performance.
  • in the calculation function of the energy corresponding to the new collision cross section, s is a random number in the interval [-1,1], ∈_min is the preset minimum energy level, and ∈_max is the preset maximum energy level.
  • this embodiment uses 12 gases: one is set aside as the verification gas, and the remaining 11 gases are combined pairwise to generate 55 types of new collision cross-section data. Each type of cross-section data yields 1.6×10^3 collision cross-section sets, for a total of 8.8×10^4 collision cross-section sets.
  • the new collision cross-section data (i.e., the synthetic cross-section data) are shown in FIG2, where Cross Section is the collision cross section, denoted σ(∈), in units of m²; Energy is the collision cross-section energy, denoted ∈, in units of eV.
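  • As an illustration of the synthesis step above, the sketch below blends two cross-section curves with a weighted geometric mean. The patent's exact function (including how the threshold energies ∈_i and ∈_j enter) is not reproduced in this text, so the simple form σ_new(∈) = σ_i(∈)^r · σ_j(∈)^(1−r) on a shared energy grid, and the two made-up cross-section curves, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def blend_cross_sections(sigma_i, sigma_j, r):
    """Weighted geometric mean of two positive cross-section curves (m^2)
    sampled on a shared energy grid, with random weight r in (0, 1).

    Illustrative form only -- the patent's function, which also involves
    the threshold energies of the two cross sections, is not reproduced.
    """
    return sigma_i ** r * sigma_j ** (1.0 - r)

# Hypothetical cross-section curves for two gases on a common energy grid.
energy = np.logspace(-2, 2, 50)             # eV
sigma_a = 1e-20 / (1.0 + energy)            # m^2, made-up shape
sigma_b = 5e-21 * np.exp(-energy / 10.0)    # m^2, made-up shape

r = rng.uniform(0.0, 1.0)
sigma_new = blend_cross_sections(sigma_a, sigma_b, r)

# A geometric mean always lies between its two parent curves.
lo = np.minimum(sigma_a, sigma_b)
hi = np.maximum(sigma_a, sigma_b)
assert np.all(sigma_new >= lo * (1 - 1e-12))
assert np.all(sigma_new <= hi * (1 + 1e-12))
```

Repeating this with fresh random weights for many gas pairs is one way the text's 10^4-plus synthetic data sets could be produced.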
  • Step S102 using a preset electron group parameter calculation tool to calculate the electron group parameters of each set of collision cross section sets.
  • Bolsig+ software is used as an electron group parameter calculation tool to calculate the electron group parameters corresponding to each collision cross section set.
  • the step S102 includes:
  • the electron group parameters are obtained by solving a plurality of equally logarithmically spaced reduced field intensities within a preset energy range of the collision cross section set using a preset electron group parameter calculation tool at a preset temperature.
  • Step S103 using the electron group parameters to train a preset neural network until the loss function of the preset neural network reaches a preset convergence condition, thereby obtaining a collision cross-section prediction model.
  • the preset neural network uses a fully connected neural network.
  • the simplest fully connected neural network is an affine transformation from an input vector x to an output vector y: y = Wx + b, where the matrix W and the vector b are the neural network parameters.
  • the step S103 includes:
  • normalizing the electron group parameters to obtain target electron group parameters, where the electron group parameters include an effective ionization rate coefficient, an electron drift velocity, and an electron longitudinal diffusion coefficient; training the fully connected neural network with the target electron group parameters as its input and the collision cross-section data as its training labels, computing the loss function for each training pass; and if the loss function is less than a preset value, determining that the fully connected neural network training is complete and obtaining the collision cross-section prediction model.
  • taking a fully connected neural network with one input layer, one output layer and three hidden layers as an example, it can be expressed as y = W₄·Swish(W₃·Swish(W₂·Swish(W₁x + b₁) + b₂) + b₃) + b₄, where y is the output, x is the input, and Swish is the activation function, Swish(z) = z·sigmoid(z).
  • the neural network input layer used in this embodiment contains 25 × 3 groups of electron group parameters, 75 inputs in total, and the collision cross sections at 15 discrete energies are selected as the outputs.
  • the input layer includes the effective ionization rate coefficient, the electron drift velocity and the normalized longitudinal electron diffusion coefficient, each sampled at 15 reduced field strengths. These three types of electron group parameters are weakly correlated with one another, so they are representative and capture the characteristics of the electron group parameters well. The electron group parameters are then fed into the fully connected neural network for analysis.
  • the inputs and outputs of the neural network are as follows: the inputs are the electron drift velocity W, the normalized longitudinal electron diffusion coefficient n₀D_L, and the effective ionization rate coefficient α, each taken at the reduced field strengths E_n/n₀; the outputs are the collision cross-section values σ at the electron energies ∈_n.
  • the neural network model in this example has 3 hidden layers, each of which finally uses 60 neurons.
  • the final structure of the fully connected neural network is shown in Figure 3.
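  • A minimal sketch of the network shape described above: 75 inputs, three hidden layers of 60 neurons with Swish activations, and 15 outputs. The layer sizes and zero biases follow the text; the uniform weight range and the input values are placeholders, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def swish(z):
    """Swish activation: z * sigmoid(z)."""
    return z / (1.0 + np.exp(-z))

# Layer sizes from the text: 75 inputs -> 3 hidden layers of 60 -> 15 outputs.
sizes = [75, 60, 60, 60, 15]

# Biases start at 0 and weights as uniform random matrices, as described;
# the range (-0.1, 0.1) is an arbitrary placeholder.
weights = [rng.uniform(-0.1, 0.1, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    """Forward pass: affine + Swish on hidden layers, affine on the output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = swish(W @ x + b)
    return weights[-1] @ x + biases[-1]

x = rng.normal(size=75)   # stand-in for one set of normalized electron group parameters
y = forward(x)            # predicted cross sections at 15 discrete energies
assert y.shape == (15,)
```

In the trained model, the 75 inputs would be the normalized electron group parameters and the 15 outputs the cross-section values at the discrete energies.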
  • this embodiment uses the mean absolute error as the loss function: loss = (1/N) Σ_{i=1}^{N} |y_i − σ(x_i)|, where loss is the output value of the loss function, N is the amount of data, y_i is the collision cross-section data used as the training label (the reference value here), and σ(x_i) is the output of the fully connected neural network (the predicted value here).
  • the bias is set to 0, and the weight is a uniformly distributed random number matrix.
  • Step S104 using the collision cross section prediction model to predict target collision cross section data of the target gas.
  • silane (SiH4) is selected as the verification gas, and its momentum transfer cross-section results are shown in Figure 4, where Cross Section is the collision cross section, denoted σ(∈), in units of m²; Energy is the collision cross-section energy, denoted ∈, in units of eV; MTCS is the reference result from the database, and Predict is the prediction of the neural network. As Figure 4 shows, the prediction agrees better when the electron energy is greater than 0.8 eV.
  • FIG. 5 shows a block diagram of a molecular collision cross section prediction device provided by an embodiment of the present application. For ease of explanation, only the parts related to the present embodiment are shown.
  • the device for predicting a molecular collision cross section provided in this embodiment of the application includes:
  • a generation module 501 is used to generate multiple groups of collision cross-section sets based on collision cross-section data of multiple gases in a preset database;
  • a calculation module 502 is used to calculate the electron group parameters of each set of collision cross section sets using a preset electron group parameter calculation tool
  • a training module 503 is used to train a preset neural network using the electron group parameters until the loss function of the preset neural network reaches a preset convergence condition, thereby obtaining a collision cross-section prediction model;
  • the prediction module 504 is used to predict the target collision cross section data of the target gas by using the collision cross section prediction model.
  • the generating module 501 includes:
  • a generating unit used for performing weighted geometric mean processing on the collision cross-section data to generate new collision cross-section data
  • the classification unit is used to classify the new collision cross-section data to obtain multiple groups of collision cross-section sets.
  • the generating unit is used to:
  • the collision cross-section data is subjected to weighted geometric mean processing to generate new collision cross-section data.
  • in the preset weighted geometric mean function, σ_new(∈) is the new collision cross-section data, r is a random number in the interval (0,1), σ_i and σ_j are the i-th and j-th collision cross-section data, ∈_i and ∈_j are the corresponding threshold energies, and ∈ is the energy corresponding to the new collision cross section.
  • in the calculation function of the energy corresponding to the new collision cross section, s is a random number in the interval [-1,1], ∈_min is the preset minimum energy level, and ∈_max is the preset maximum energy level.
  • the calculation module 502 is used to:
  • the electron group parameters are obtained by solving a plurality of equally logarithmically spaced reduced field intensities within a preset energy range of the collision cross section set using a preset electron group parameter calculation tool at a preset temperature.
  • the preset neural network is a fully connected neural network
  • the training module 503 is used to:
  • normalizing the electron group parameters to obtain target electron group parameters, where the electron group parameters include an effective ionization rate coefficient, an electron drift velocity, and an electron longitudinal diffusion coefficient; training the fully connected neural network with the target electron group parameters as its input and the collision cross-section data as its training labels, computing the loss function for each training pass; and if the loss function is less than a preset value, determining that the fully connected neural network training is complete and obtaining the collision cross-section prediction model.
  • the loss function is: loss = (1/N) Σ_{i=1}^{N} |y_i − σ(x_i)|, where loss is the output value of the loss function, N is the amount of data, y_i is the collision cross-section data used as the training label, and σ(x_i) is the output of the fully connected neural network.
  • the above device for predicting molecular collision cross sections can implement the method for predicting molecular collision cross sections of the above method embodiment. The options in the method embodiment above also apply to this embodiment and are not described in detail here; for the remaining content of this embodiment, refer to the method embodiment above, which is not repeated here.
  • FIG6 is a schematic diagram of the structure of a computer device provided in an embodiment of the present application.
  • the computer device 6 of this embodiment includes: at least one processor 60 (only one is shown in FIG6 ), a memory 61, and a computer program 62 stored in the memory 61 and executable on the at least one processor 60, and when the processor 60 executes the computer program 62, the steps in any of the above method embodiments are implemented.
  • the computer device 6 may be a computing device such as a smart phone, a tablet computer, a desktop computer, a cloud server, etc.
  • the computer device may include but is not limited to a processor 60 and a memory 61.
  • those skilled in the art can understand that Figure 6 is only an example of the computer device 6 and does not constitute a limitation on it; the device may include more or fewer components than shown, or combine certain components, or use different components, and may for example also include input and output devices, network access devices, and the like.
  • the processor 60 may be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the memory 61 may be an internal storage unit of the computer device 6 in some embodiments, such as a hard disk or memory of the computer device 6.
  • the memory 61 may also be an external storage device of the computer device 6 in other embodiments, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card (Flash Card), etc. equipped on the computer device 6.
  • the memory 61 may also include both an internal storage unit and an external storage device of the computer device 6.
  • the memory 61 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program.
  • the memory 61 may also be used to temporarily store data that has been output or is to be output.
  • an embodiment of the present application further provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in any of the above method embodiments are implemented.
  • An embodiment of the present application provides a computer program product. When the computer program product is run on a computer device, the computer device implements the steps in the above method embodiments.
  • each box in the flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the blocks may also occur in an order different from that shown in the accompanying drawings; for example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
  • if the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium. On this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product: the computer software product is stored in a storage medium and includes several instructions for causing a computer device to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.


Abstract

This application discloses a method, apparatus, device and storage medium for predicting molecular collision cross sections. Multiple collision cross-section sets are generated from the collision cross-section data of multiple gases already in a preset database, and the electron group parameters of each collision cross-section set are calculated with a preset electron group parameter calculation tool, so that the characteristics of the electron group parameters are analyzed from the cross-section data of existing gases. A preset neural network is then trained with the electron group parameters until its loss function reaches a preset convergence condition, yielding a collision cross-section prediction model, which is used to predict the target collision cross-section data of a target gas. Machine learning thereby establishes an accurate inversion model that accelerates the acquisition of a complete collision cross-section set for a gas and reduces the subjectivity of manual correction, effectively solving the problems of low efficiency and reliance on expert experience in the existing cross-section set correction process.

Description

Method, apparatus, device and storage medium for predicting molecular collision cross sections
TECHNICAL FIELD
This application relates to the field of molecular collision technology, and in particular to a method, apparatus, device and storage medium for predicting a molecular collision cross section.
BACKGROUND
The collision between an electron and a neutral molecule can cause the electron to be adsorbed onto the neutral molecule; the cross section for this process is the adsorption cross section. The adsorption cross section is one kind of collision cross section; the collision cross sections mainly include the ionization, adsorption, excitation, elastic and momentum transfer cross sections. All of these cross sections can be obtained through electron beam experiments or quantum chemical calculations, but because both experiments and calculations carry errors, the electron group parameters computed by combining these collision cross sections differ considerably from measured values.
At present, for a given molecule, the related technology solves the Boltzmann equation with a compiled cross-section set to calculate the molecule's electron group parameters and compares them with experimental measurements; the cross-section set is then repeatedly corrected by hand to steadily improve the agreement between the calculated parameters and the experimental data, finally yielding a complete and self-consistent collision cross-section set. However, this iterative correction process is cumbersome and inefficient, and depends heavily on expert experience.
SUMMARY
This application provides a method, apparatus, device and storage medium for predicting molecular collision cross sections, to solve the technical problems that the correction of a collision cross-section set is inefficient and relies on expert experience.
To solve the above technical problems, in a first aspect, this application provides a method for predicting a molecular collision cross section, comprising:
generating multiple collision cross-section sets based on the collision cross-section data of multiple gases already in a preset database;
calculating the electron group parameters of each collision cross-section set using a preset electron group parameter calculation tool;
training a preset neural network with the electron group parameters until the loss function of the preset neural network reaches a preset convergence condition, to obtain a collision cross-section prediction model;
using the collision cross-section prediction model to predict the target collision cross-section data of a target gas.
In some implementations, generating multiple collision cross-section sets based on the collision cross-section data of multiple gases already in a preset database includes:
performing weighted geometric mean processing on the collision cross-section data to generate new collision cross-section data;
classifying the new collision cross-section data to obtain the multiple collision cross-section sets.
In some implementations, performing weighted geometric mean processing on the collision cross-section data to generate new collision cross-section data includes:
using a preset weighted geometric mean function and a target random number to perform weighted geometric mean processing on the collision cross-section data, generating new collision cross-section data. In the preset weighted geometric mean function, σ_new(∈) is the new collision cross-section data, r is a random number in the interval (0,1), σ_i and σ_j are the i-th and j-th collision cross-section data, ∈_i and ∈_j are the corresponding threshold energies, and ∈ is the energy corresponding to the new collision cross section.
In some implementations, the energy corresponding to the new collision cross section is calculated from a random number s in the interval [-1,1], a preset minimum energy level ∈_min and a preset maximum energy level ∈_max.
In some implementations, calculating the electron group parameters of each collision cross-section set using a preset electron group parameter calculation tool includes:
using the preset electron group parameter calculation tool, at a preset temperature, to solve for multiple logarithmically equally spaced reduced field strengths within a preset energy range of the collision cross-section set, obtaining the electron group parameters.
In some implementations, the preset neural network is a fully connected neural network, and training the preset neural network with the electron group parameters until its loss function reaches a preset convergence condition, to obtain a collision cross-section prediction model, includes:
normalizing the electron group parameters to obtain target electron group parameters, the electron group parameters including an effective ionization rate coefficient, an electron drift velocity and an electron longitudinal diffusion coefficient;
training the fully connected neural network with the target electron group parameters as its input and the collision cross-section data as its training labels, and computing the loss function for each training pass;
if the loss function is less than a preset value, determining that the training of the fully connected neural network is complete, to obtain the collision cross-section prediction model.
In some implementations, the loss function is loss = (1/N) Σ_{i=1}^{N} |y_i − σ(x_i)|, where loss is the output value of the loss function, N is the amount of data, y_i is the collision cross-section data used as the training label, and σ(x_i) is the output of the fully connected neural network.
In a second aspect, this application also provides an apparatus for predicting a molecular collision cross section, comprising:
a generation module, used to generate multiple collision cross-section sets based on the collision cross-section data of multiple gases already in a preset database;
a calculation module, used to calculate the electron group parameters of each collision cross-section set with a preset electron group parameter calculation tool;
a training module, used to train a preset neural network with the electron group parameters until the loss function of the preset neural network reaches a preset convergence condition, to obtain a collision cross-section prediction model;
a prediction module, used to predict the target collision cross-section data of a target gas with the collision cross-section prediction model.
In a third aspect, this application also provides a computer device comprising a processor and a memory, where the memory is used to store a computer program, and the computer program, when executed by the processor, implements the method for predicting a molecular collision cross section described in the first aspect.
In a fourth aspect, this application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for predicting a molecular collision cross section described in the first aspect.
Compared with the prior art, this application has at least the following beneficial effects:
multiple collision cross-section sets are generated from the collision cross-section data of gases already in a preset database, and the electron group parameters of each set are calculated with a preset electron group parameter calculation tool, so that the characteristics of the electron group parameters can be analyzed from existing data; the preset neural network is then trained with the electron group parameters until its loss function reaches a preset convergence condition, yielding a collision cross-section prediction model, which is used to predict the target collision cross-section data of the target gas. Machine learning thus establishes an accurate inversion model that accelerates the acquisition of a complete collision cross-section set for a gas and reduces the subjectivity of manual correction, effectively solving the problems of low efficiency and reliance on expert experience in the existing cross-section set correction process.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of a method for predicting a molecular collision cross section according to an embodiment of this application;
FIG. 2 is a schematic diagram of new collision cross-section data according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of a fully connected neural network according to an embodiment of this application;
FIG. 4 is a schematic diagram of the momentum transfer cross section of silane (SiH4) gas according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of an apparatus for predicting molecular collision cross sections according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of this application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of this application.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a method for predicting a molecular collision cross section provided by an embodiment of this application. The method can be applied to computer devices, including but not limited to smartphones, laptops, tablet computers, desktop computers, physical servers and cloud servers. As shown in FIG. 1, the method for predicting a molecular collision cross section in this embodiment includes steps S101 to S104, described in detail as follows:
Step S101: generate multiple collision cross-section sets based on the collision cross-section data of multiple gases already in a preset database.
In this step, a large amount of collision cross-section data is synthesized from the electron-molecule collision cross-section data of gases already in the LXCat database, with no fewer than 10^4 data sets.
In some embodiments, step S101 includes:
performing weighted geometric mean processing on the collision cross-section data to generate new collision cross-section data;
classifying the new collision cross-section data to obtain the multiple collision cross-section sets.
In this embodiment, the neural network requires a large amount of cross-section data for training, and the collision cross-section data already in LXCat is insufficient to train the network to completion, so this application generates new collision cross-section data. To avoid generating meaningless cross-section data, the collision cross-section data of any two gases are weighted geometrically averaged using a randomly generated number r ∈ (0,1).
Optionally, the weighted geometric mean processing includes:
using a preset weighted geometric mean function and a target random number to perform weighted geometric mean processing on the collision cross-section data, generating new collision cross-section data. In the preset weighted geometric mean function, σ_new(∈) is the new collision cross-section data, r is a random number in the interval (0,1), σ_i and σ_j are the i-th and j-th collision cross-section data, ∈_i and ∈_j are the corresponding threshold energies, and ∈ is the energy corresponding to the new collision cross section.
In this optional embodiment, this generation method produces physically meaningful electron-molecule collision cross-section data and preserves the correlation between cross section and energy, ensuring the validity of the collision cross-section data used for model training and thus the model performance.
Optionally, the energy corresponding to the new collision cross section is calculated from a random number s in the interval [-1,1], a preset minimum energy level ∈_min and a preset maximum energy level ∈_max.
As an example, this embodiment uses 12 gases: one is set aside as the verification gas, and the remaining 11 gases are combined pairwise to generate 55 types of new collision cross-section data. Each type yields 1.6×10^3 collision cross-section sets, for a total of 8.8×10^4 collision cross-section sets. The new collision cross-section data (i.e., the synthetic cross-section data) are shown in FIG. 2, where Cross Section is the collision cross section, denoted σ(∈), in units of m², and Energy is the collision cross-section energy, denoted ∈, in units of eV.
Step S102: calculate the electron group parameters of each collision cross-section set using a preset electron group parameter calculation tool.
In this step, the Bolsig+ software is used as the electron group parameter calculation tool to calculate the electron group parameters corresponding to each collision cross-section set.
In some embodiments, step S102 includes:
using the preset electron group parameter calculation tool, at a preset temperature, to solve for multiple logarithmically equally spaced reduced field strengths within a preset energy range of the collision cross-section set, obtaining the electron group parameters.
In this embodiment, the Bolsig+ software is set to a temperature of T = 300 K and solves for 15 logarithmically equally spaced reduced field strengths within the range (10^-3 Td, 10^3 Td) to obtain the electron group parameters, which include the effective ionization rate coefficient, the electron drift velocity and the longitudinal electron diffusion coefficient. If extremely high fields are required, Bolsig+ can automatically extrapolate the cross sections to the required energy range.
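The reduced-field grid described above (15 logarithmically equally spaced points spanning 10^-3 Td to 10^3 Td) can be constructed as follows. BOLSIG+ itself is an external solver, so this sketch only builds the grid of reduced field strengths it would be run over.

```python
import math

def log_spaced_fields(e_min=1e-3, e_max=1e3, n=15):
    """Reduced field strengths (Td) at equal logarithmic spacing,
    matching the 15-point grid over (1e-3 Td, 1e3 Td) described above."""
    span = math.log10(e_max / e_min)
    return [e_min * 10 ** (span * k / (n - 1)) for k in range(n)]

fields = log_spaced_fields()
assert len(fields) == 15
assert math.isclose(fields[0], 1e-3, rel_tol=1e-9)
assert math.isclose(fields[-1], 1e3, rel_tol=1e-9)
```

Equal logarithmic spacing means the ratio between consecutive field values is constant, which is what gives the electron group parameters even coverage across six decades of E/N.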
Step S103: train a preset neural network with the electron group parameters until the loss function of the preset neural network reaches a preset convergence condition, to obtain a collision cross-section prediction model.
In this step, in order to obtain collision cross-section data from given electron transport coefficients, the preset neural network uses a fully connected neural network. Optionally, the simplest fully connected neural network is an affine transformation from an input vector x to an output vector y: y = Wx + b, where the matrix W and the vector b are the neural network parameters.
In some embodiments, step S103 includes:
normalizing the electron group parameters to obtain target electron group parameters, the electron group parameters including an effective ionization rate coefficient, an electron drift velocity and an electron longitudinal diffusion coefficient;
training the fully connected neural network with the target electron group parameters as its input and the collision cross-section data as its training labels, and computing the loss function for each training pass;
if the loss function is less than a preset value, determining that the training of the fully connected neural network is complete, to obtain the collision cross-section prediction model.
In this embodiment, as an example, a fully connected neural network with one input layer, one output layer and three hidden layers can be expressed as y = W₄·Swish(W₃·Swish(W₂·Swish(W₁x + b₁) + b₂) + b₃) + b₄, where y is the output, x is the input, and Swish is the activation function, Swish(z) = z·sigmoid(z).
The neural network input layer used in this embodiment contains 25×3 groups of electron group parameters, 75 inputs in total, and the collision cross sections at 15 discrete energies are selected as the outputs. The input layer includes the effective ionization rate coefficient, the electron drift velocity and the normalized longitudinal electron diffusion coefficient, each sampled at 15 reduced field strengths. These three types of electron group parameters are weakly correlated with one another, so they are representative and capture the characteristics of the electron group parameters well. The electron group parameters are fed into the fully connected neural network for analysis. The inputs and outputs of the network are as follows:
W is the electron drift velocity, n₀D_L is the normalized longitudinal electron diffusion coefficient, α is the effective ionization rate coefficient, E_n/n₀ is the reduced field strength, σ is the collision cross-section value, and ∈_n is the electron energy.
The neural network model in this example has 3 hidden layers, each of which finally uses 60 neurons; the final structure of the fully connected neural network is shown in FIG. 3.
进一步地，不同输入量有不同的量纲和量级，量级过大的输入会被分配过高的权重，使神经网络非常不稳定，并且使求解器沿最速下降方向寻优变得更加困难，这会极大阻碍神经网络学习数据趋势的能力。归一化后能使不同量级的输入分得几乎相等的权重。归一化过程为：
y=lg(x)；
z=(y−ymin)/(ymax−ymin)；
其中，x为归一化前的原始数据，z为归一化之后的数据，y、ymin和ymax为中间量，ymin和ymax分别为y的最小值和最大值。当x=0时取一个合适的等价无穷小代替。
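该“先取对数、再做最小-最大缩放”的归一化过程可示意为（x=0时以极小量eps代替，对应正文中的等价无穷小处理）：

```python
import numpy as np

def lg_minmax_normalize(x, eps=1e-30):
    """先取以 10 为底的对数，再做最小-最大归一化缩放到 [0, 1]。

    x 应为非负数组；其中为 0 的元素以极小量 eps 代替后再取对数。
    """
    y = np.log10(np.where(x == 0, eps, x))
    y_min, y_max = y.min(), y.max()
    return (y - y_min) / (y_max - y_min)

z = lg_minmax_normalize(np.array([1e-3, 1.0, 1e3]))  # → [0.0, 0.5, 1.0]
```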
可选地，本实施例以平均绝对误差作为损失函数：
loss=(1/N)·∑|yi−σ(xi)|，求和下标i从1取到N；
其中，loss为损失函数的输出值，N表示数据量；yi表示作为训练标签的所述碰撞截面数据，在此作为标准值；σ(xi)表示全连接神经网络的输出，在此作为预测值。
可选地，在搭建神经网络时，将偏置设置为0，权重为均匀分布的随机数矩阵。使用Adam优化方式，学习率取为1×10⁻³，指数衰减率分别为β1=0.9，β2=0.999。
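平均绝对误差损失与Adam的单步参数更新可用如下NumPy代码示意（更新公式为Adam的标准形式，超参数取正文所述的lr=1×10⁻³、β1=0.9、β2=0.999；函数名为示意性命名）：

```python
import numpy as np

def mae_loss(y_true, y_pred):
    """平均绝对误差: loss = (1/N) * sum(|y_i - y_pred_i|)。"""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """对参数 theta 执行一步 Adam 更新，返回新参数及更新后的一阶/二阶矩。"""
    m = beta1 * m + (1 - beta1) * grad       # 一阶矩（动量）估计
    v = beta2 * v + (1 - beta2) * grad ** 2  # 二阶矩估计
    m_hat = m / (1 - beta1 ** t)             # 偏差修正
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```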
步骤S104,利用所述碰撞截面预测模型,预测目标气体的目标碰撞截面数据。
在本步骤中，训练好神经网络后，选用四氢化硅SiH4作为验证气体，其动量转移截面结果如图4所示，其中，Cross Section为碰撞截面，以σ(∈)表示，单位为m²；Energy为碰撞截面能量，以∈表示，单位为eV；MTCS是数据库中的参考结果，Predict是神经网络的预测结果。由图4可知，当电子能量大于0.8eV时，预测结果与参考结果吻合更好。
为了执行上述方法实施例对应的分子碰撞截面的预测方法，以实现相应的功能和技术效果，本申请实施例还提供一种分子碰撞截面的预测装置。参见图5，图5示出了该装置的结构框图。为了便于说明，仅示出了与本实施例相关的部分。本申请实施例提供的分子碰撞截面的预测装置，包括：
生成模块501,用于基于预设数据库中已有的多种气体的碰撞截面数据,生成多组碰撞截面集;
计算模块502,用于利用预设电子群参数计算工具,计算每组所述碰撞截面集的电子群参数;
训练模块503,用于利用所述电子群参数,对预设神经网络进行训练,直至所述预设神经网络的损失函数达到预设收敛条件,得到碰撞截面预测模型;
预测模块504,用于利用所述碰撞截面预测模型,预测目标气体的目标碰撞截面数据。
在一些实施例中,所述生成模块501,包括:
生成单元,用于对所述碰撞截面数据进行加权几何平均处理,生成新的碰撞截面数据;
分类单元,用于对所述新的碰撞截面数据进行分类,得到多组所述碰撞截面集。
在一些实施例中,所述生成单元,用于:
利用预设加权几何平均函数,根据目标随机数,对所述碰撞截面数据进行加权几何平均处理,生成新的碰撞截面数据,所述预设加权几何平均函数为:
其中,σnew(∈)为新的碰撞截面数据,r表示区间(0,1)内的随机数,σi表示第i个碰撞截面数据,σj表示第j个碰撞截面数据,∈i表示第i个碰撞截面对应的阈值能量,∈j为第j个碰撞截面对应的阈值能量,∈为新的碰撞截面对应的能量。
在一些实施例中,所述新的碰撞截面对应的能量的计算函数为:
其中,s表示区间[-1,1]内的随机数,∈min表示预设的能级最小值,∈max表示预设的能级最大值。
在一些实施例中,所述计算模块502,用于:
利用预设电子群参数计算工具,以预设温度,对所述碰撞截面集的预设能量范围内多个等对数间距的约化场强进行求解,得到所述电子群参数。
在一些实施例中,所述预设神经网络为全连接神经网络,所述训练模块503,用于:
对所述电子群参数进行归一化,得到目标电子群参数,所述电子群参数包括有效电离速率系数、电子漂移速度和电子纵向扩散系数;
以所述目标电子群参数作为所述全连接神经网络的输入,以所述碰撞截面数据作为所述全连接神经网络的训练标签,对所述全连接神经网络进行训练,并计算每次训练过程的损失函数;
若所述损失函数小于预设值,则判定所述全连接神经网络训练完成,得到所述碰撞截面预测模型。
在一些实施例中，所述损失函数为：
loss=(1/N)·∑|yi−σ(xi)|，求和下标i从1取到N；
其中，loss为损失函数的输出值，N表示数据量，yi表示作为训练标签的所述碰撞截面数据，σ(xi)表示全连接神经网络的输出。
上述的分子碰撞截面的预测装置可实施上述方法实施例的分子碰撞截面的预测方法。上述方法实施例中的可选项也适用于本实施例,这里不再详述。本申请实施例的其余内容可参照上述方法实施例的内容,在本实施例中,不再进行赘述。
图6为本申请一实施例提供的计算机设备的结构示意图。如图6所示，该实施例的计算机设备6包括：至少一个处理器60(图6中仅示出一个)、存储器61以及存储在所述存储器61中并可在所述至少一个处理器60上运行的计算机程序62，所述处理器60执行所述计算机程序62时实现上述任意方法实施例中的步骤。
所述计算机设备6可以是智能手机、平板电脑、桌上型计算机和云端服务器等计算设备。该计算机设备可包括但不仅限于处理器60、存储器61。本领域技术人员可以理解，图6仅仅是计算机设备6的举例，并不构成对计算机设备6的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件，例如还可以包括输入输出设备、网络接入设备等。
所称处理器60可以是中央处理单元(Central Processing Unit,CPU),该处理器60还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
所述存储器61在一些实施例中可以是所述计算机设备6的内部存储单元,例如计算机设备6的硬盘或内存。所述存储器61在另一些实施例中也可以是所述计算机设备6的外部存储设备,例如所述计算机设备6上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,所述存储器61还可以既包括所述计算机设备6的内部存储单元也包括外部存储设备。所述存储器61用于存储操作系统、应用程序、引导装载程序(BootLoader)、数据以及其他程序等,例如所述计算机程序的程序代码等。所述存储器61还可以用于暂时地存储已经输出或者将要输出的数据。
另外,本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述任意方法实施例中的步骤。
本申请实施例提供了一种计算机程序产品，当计算机程序产品在计算机设备上运行时，使得计算机设备执行上述各个方法实施例中的步骤。
在本申请所提供的几个实施例中,可以理解的是,流程图或框图中的每个方框可以代表一个模块、程序段或代码的一部分,所述模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意的是,在有些作为替换的实现方式中,方框中所标注的功能也可以以不同于附图 中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。
所述功能如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述的具体实施例,对本申请的目的、技术方案和有益效果进行了进一步的详细说明,应当理解,以上所述仅为本申请的具体实施例而已,并不用于限定本申请的保护范围。特别指出,对于本领域技术人员来说,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (10)

  1. 一种分子碰撞截面的预测方法,其特征在于,包括:
    基于预设数据库中已有的多种气体的碰撞截面数据,生成多组碰撞截面集;
    利用预设电子群参数计算工具,计算每组所述碰撞截面集的电子群参数;
    利用所述电子群参数,对预设神经网络进行训练,直至所述预设神经网络的损失函数达到预设收敛条件,得到碰撞截面预测模型;
    利用所述碰撞截面预测模型,预测目标气体的目标碰撞截面数据。
  2. 如权利要求1所述的分子碰撞截面的预测方法,其特征在于,所述基于预设数据库中已有的多种气体的碰撞截面数据,生成多组碰撞截面集,包括:
    对所述碰撞截面数据进行加权几何平均处理,生成新的碰撞截面数据;
    对所述新的碰撞截面数据进行分类,得到多组所述碰撞截面集。
  3. 如权利要求2所述的分子碰撞截面的预测方法,其特征在于,所述对所述碰撞截面数据进行加权几何平均处理,生成新的碰撞截面数据,包括:
    利用预设加权几何平均函数,根据目标随机数,对所述碰撞截面数据进行加权几何平均处理,生成新的碰撞截面数据,所述预设加权几何平均函数为:
    其中,σnew(∈)为新的碰撞截面数据,r表示区间(0,1)内的随机数,σi表示第i个碰撞截面数据,σj表示第j个碰撞截面数据,∈i表示第i个碰撞截面对应的阈值能量,∈j为第j个碰撞截面对应的阈值能量,∈为新的碰撞截面对应的能量。
  4. 如权利要求3所述的分子碰撞截面的预测方法,其特征在于,所述新的碰撞截面对应的能量的计算函数为:
    其中,s表示区间[-1,1]内的随机数,∈min表示预设的能级最小值,∈max表示预设的能级最大值。
  5. 如权利要求1所述的分子碰撞截面的预测方法,其特征在于,所述利用预设电子群参数计算工具,计算每组所述碰撞截面集的电子群参数,包括:
    利用预设电子群参数计算工具,以预设温度,对所述碰撞截面集的预设能量范围内多个等对数间距的约化场强进行求解,得到所述电子群参数。
  6. 如权利要求1所述的分子碰撞截面的预测方法,其特征在于,所述预设神经网络为全连接神经网络,所述利用所述电子群参数,对预设神经网络进行训练,直至所述预设神经网络的损失函数达到预设收敛条件,得到碰撞截面预测模型,包括:
    对所述电子群参数进行归一化,得到目标电子群参数,所述电子群参数包括有效电离速率系数、电子漂移速度和电子纵向扩散系数;
    以所述目标电子群参数作为所述全连接神经网络的输入,以所述碰撞截面数据作为所述全连接神经网络的训练标签,对所述全连接神经网络进行训练,并计算每次训练过程的损失函数;
    若所述损失函数小于预设值,则判定所述全连接神经网络训练完成,得到所述碰撞截面预测模型。
  7. 如权利要求5所述的分子碰撞截面的预测方法，其特征在于，所述损失函数为：
    loss=(1/N)·∑|yi−σ(xi)|，求和下标i从1取到N；
    其中，loss为损失函数的输出值，N表示数据量，yi表示作为训练标签的所述碰撞截面数据，σ(xi)表示全连接神经网络的输出。
  8. 一种分子碰撞截面的预测装置,其特征在于,包括:
    生成模块,用于基于预设数据库中已有的多种气体的碰撞截面数据,生成多组碰撞截面集;
    计算模块,用于利用预设电子群参数计算工具,计算每组所述碰撞截面集的电子群参数;
    训练模块,用于利用所述电子群参数,对预设神经网络进行训练,直至所述预设神经网络的损失函数达到预设收敛条件,得到碰撞截面预测模型;
预测模块，用于利用所述碰撞截面预测模型，预测目标气体的目标碰撞截面数据。
  9. 一种计算机设备,其特征在于,包括处理器和存储器,所述存储器用于存储计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至7任一项所述的分子碰撞截面的预测方法。
  10. 一种计算机可读存储介质,其特征在于,其存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至7任一项所述的分子碰撞截面的预测方法。
PCT/CN2023/072743 2022-09-30 2023-01-17 分子碰撞截面的预测方法、装置、设备及存储介质 WO2024066143A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211218777.6 2022-09-30
CN202211218777.6A CN115422817A (zh) 2022-09-30 2022-09-30 分子碰撞截面的预测方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2024066143A1 true WO2024066143A1 (zh) 2024-04-04

Family

ID=84207018



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595820A (zh) * 2018-04-19 2018-09-28 广东电网有限责任公司电力科学研究院 一种分子电离碰撞截面的计算方法及装置
US20200326303A1 (en) * 2019-04-15 2020-10-15 Waters Technologies Ireland Limited Techniques for predicting collision cross-section values
CN112100896A (zh) * 2020-09-08 2020-12-18 东南大学 一种基于机器学习的气体分子电离碰撞截面预测方法
CN113345527A (zh) * 2021-05-28 2021-09-03 广东电网有限责任公司 一种基于电子群参数获取分子吸附截面的方法
CN113971987A (zh) * 2021-10-14 2022-01-25 国网安徽省电力有限公司电力科学研究院 一种乙醇中气泡放电等离子体动态演化的模拟方法
CN115422817A (zh) * 2022-09-30 2022-12-02 广东电网有限责任公司 分子碰撞截面的预测方法、装置、设备及存储介质


Also Published As

Publication number Publication date
CN115422817A (zh) 2022-12-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23869407

Country of ref document: EP

Kind code of ref document: A1