CN117912585A - Optimization method for combustion chemical reaction based on deep artificial neural network - Google Patents


Info

Publication number
CN117912585A
CN117912585A
Authority
CN
China
Prior art keywords
training sample
neural network
artificial neural
model
combustion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410320144.9A
Other languages
Chinese (zh)
Other versions
CN117912585B (en)
Inventor
韩建慧
洪延姬
崔海超
杜宝盛
郑永赞
毛晨涛
张腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Original Assignee
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peoples Liberation Army Strategic Support Force Aerospace Engineering University filed Critical Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority to CN202410320144.9A priority Critical patent/CN117912585B/en
Publication of CN117912585A publication Critical patent/CN117912585A/en
Application granted granted Critical
Publication of CN117912585B publication Critical patent/CN117912585B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention relates to a method for optimizing combustion chemical reactions under a deep artificial neural network, comprising the following steps: sampling the parameter space of the original combustion chemical reaction kinetic model with a low-discrepancy sequence to obtain initial training samples, and preprocessing the initial training samples; performing dimension-reduction analysis on the preprocessed model parameter space by a manifold learning method to obtain a preliminary training sample set; screening the preliminary training sample set by the Euclidean distance method to obtain an effective training sample set; and training on the effective sample set with a mathematical surrogate relation in an artificial neural network model, in place of directly solving the cumbersome physicochemical equations, then performing optimization analysis of the original combustion chemical reaction kinetic model parameters by a sensitivity analysis method. By using a mathematical surrogate in an artificial neural network model, the invention analyzes the influence and variation laws of multiple physical parameters in the chemical-reaction combustion process, which is of important guiding significance for combustion process control and combustion device design in engineering practice.

Description

Optimization method for combustion chemical reaction based on deep artificial neural network
Technical Field
The invention relates to the technical field of combustion chemical reaction kinetics, in particular to a method for optimizing combustion chemical reactions under a deep artificial neural network.
Background
Combustion reaction kinetics has wide application in the industrial, environmental, and safety fields. A thorough understanding of the combustion process drives advances in combustion reaction kinetics research, but it remains a difficult problem in combustion science given the multiple interacting physical fields involved. One of the key challenges facing global energy development is how to improve the combustion efficiency of power equipment such as aero-engines and gas turbines, reduce pollutant emissions, and achieve efficient, clean utilization of fuels.
Early combustion reaction kinetics studies relied on elementary reactions to explain combustion phenomena. In 1965, Dixon-Lewis et al. used elementary-reaction rate-constant expressions in numerical simulations, revealing the structure of a laminar premixed flame. Subsequently, based on the combustion mechanisms of hydrogen and syngas, core mechanisms containing small carbon molecules (C2-C4) were developed. However, macromolecular hydrocarbon fuels involve numerous species and reactions, and their combustion mechanisms are still not well defined. Uncertainty in the elementary-reaction rate coefficients poses a challenge to combustion numerical computation because it affects the identification of critical paths and active elementary reactions. In power plant design, combustion numerical simulation plays an important role because it helps to grasp precisely the laws of fuel heat release, the detailed structure of the flow field, and the chemical reaction paths, which is important for analyzing combustion kinetics mechanisms. Therefore, finding efficient and accurate chemical-reaction-model optimization methods is critical to improving combustion efficiency.
Disclosure of Invention
The invention aims to provide a method for optimizing combustion chemical reactions under a deep artificial neural network, so as to remedy the deficiencies of the prior art; the technical problems to be solved by the invention are addressed by the following technical scheme. The invention uses a deep artificial neural network in place of a traditional ordinary-differential-equation-system solver, taking the isobaric zero-dimensional homogeneous ignition of dimethyl ether/air as an example, and applies low-discrepancy sequence sampling, preprocessing including the Box-Cox transformation, manifold-learning model dimension reduction, and staged neural network training, thereby meeting engineering accuracy requirements while computing the chemical reaction source terms in combustion with higher efficiency.
The invention provides an optimization method of combustion chemical reaction under a deep artificial neural network, which specifically comprises the following steps:
S1, sampling the parameter space of the original combustion chemical reaction kinetic model with a low-discrepancy sequence to obtain initial training samples, and preprocessing the initial training samples;
S2, performing dimension-reduction analysis on the preprocessed model parameter space by a manifold learning method to obtain a preliminary training sample set; screening the preliminary training sample set by the Euclidean distance method to obtain an effective training sample set;
S3, training on the effective sample set by a mathematical relation method in an artificial neural network model, in place of direct solution of the cumbersome physicochemical equations, and then performing optimization analysis of the original combustion chemical reaction kinetic model parameters by a sensitivity analysis method.
Further, the step S1 specifically includes:
Sampling is performed by a low-discrepancy sequence method, and the samples are preprocessed by the Box-Cox transformation;
the general form of the Box-Cox transformation is:

$$y_i = \begin{cases}\dfrac{x_i^{\lambda}-1}{\lambda}, & \lambda \neq 0 \\ \ln x_i, & \lambda = 0\end{cases}$$

where $y_i$ is the output parameter; $x_i$ ($x_i > 0$), $i = 1, 2, \ldots, n$, is the input parameter; and $\lambda$ is the transformation parameter.
Further, in step S2, reducing the dimension of the model parameter space by a manifold learning method to obtain a preliminary training sample set, and screening the preliminary training samples by the Euclidean distance method to obtain an effective training sample set, specifically comprises:
The manifold learning method takes the sampled model output parameters as the training sample set. Assume the given training sample set $X = \{x_i\} \subset \mathbb{R}^D$, $i = 1, 2, \ldots, N$, is generated from a low-dimensional dataset $Y = \{y_i\} \subset \mathbb{R}^d$, $i = 1, 2, \ldots, N$, through a series of nonlinear mathematical transformations $f$: $x_i = f(y_i) + \varepsilon_i$, $d \ll D$. A mapping from the high-dimensional space to the low-dimensional space can then be constructed, namely $F: \mathbb{R}^D \to \mathbb{R}^d$, yielding the low-dimensional coordinates of the high-dimensional observation data. Here $d$ denotes the low-dimensional manifold dimension, $D$ the high-dimensional dimension, $d \ll D$; $\varepsilon_i$ denotes the sample noise, and $\mathbb{R}$ the real space. The low-dimensional manifold dimension $d$ may be obtained by computing the low-dimensional coordinates of the known training dataset $X$.
Further, the number of sample neighbor points (the k value for short) is selected as follows: given an unknown sample, find the k samples in the training dataset closest to it; these k training samples are the k 'neighbors' of the unknown sample. The class of the unknown sample is then determined from the k neighbors, assigning it to the most common class among them.
Further, to ensure correlation between samples of the reduced-dimension dataset, for the given relationships between pairs of high-dimensional data points, a set of points is re-established in the low-dimensional space, requiring that the pairwise distances of the reduced-dimension data points best fit the pairwise distance relationships of the original space. The distance between samples is computed by the Euclidean distance method:

$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{N}\left(x_{ik} - x_{jk}\right)^2}$$

where $x_i$ and $x_j$ are the feature vectors of two samples in the training sample set, $N$ is the dimension of the feature vectors, and $x_{ik}$ and $x_{jk}$ are the $k$-th components of the $i$-th and $j$-th sample vectors. The Euclidean distance is inversely related to the correlation of sample data points: a large distance means low correlation, a small distance high correlation. Training sample points with low correlation are removed and those with high correlation are retained, yielding the effective training sample set.
Further, the step S3 specifically includes:
The artificial neural network model adopts a mathematical surrogate method, substituting the effective training sample data points obtained by the Euclidean distance screening into the surrogate relation:

$$y = \beta_0 + \sum_{r=1}^{n} a_r x_r + \sum_{i=1}^{n}\sum_{j \ge i}^{n} \beta_{ij} x_i x_j + \cdots$$

where $y$ is an output parameter of the combustion chemical reaction, such as ignition delay time or the mass fraction of a chemical component; $\beta_0$ is a constant term; $a_r$ and $\beta_{ij}$ are characteristic coefficients; and $x_1, x_2, \cdots, x_n$ are the input parameters of a training sample in the combustion chemical reaction;
Preferably, step S3 further comprises: performing sensitivity analysis on the model parameters of the mathematical surrogate method.
The local sensitivity coefficient of the $i$-th parameter is expressed as:

$$S_i = \frac{f(x_0 + \Delta x_i) - f(x_0)}{\Delta x_i}$$

where $S_i$ denotes the local sensitivity corresponding to the $i$-th parameter $x_i$; $x_0$ denotes the original value of the input parameter, and $f(x_0)$ the original model prediction, i.e. the prediction when parameter uncertainty is not considered; $\Delta x_i$ is the perturbation of the input parameter, related to the uncertainty of the rate constant; and $f(x_0 + \Delta x_i)$ is the model prediction after parameter $x_i$ is perturbed.
Further,
a multi-layer perceptron model consisting of an input layer, a hidden layer, and an output layer is used, and the samples generated under each calculation condition are used to train the artificial neural network surrogate model; the nodes of the input layer correspond to input variables of the combustion behavior, the nodes of the output layer correspond to different prediction targets, and the nodes of the hidden layer connect the input and output layers.
The number of samples N required for training the artificial neural network and the number of input parameters M conform to the following fitting relation:
Compared with the prior art, the invention has the beneficial effects that:
(1) Under the condition of ensuring calculation accuracy, the application of the deep artificial neural network model improves computational efficiency by an order of magnitude over traditional model algorithms.
(2) The variation processes and laws of the multiple physical parameters of the deep artificial neural network model during combustion are expected to be of important guiding significance for combustion process control and combustion device design in engineering practice.
Drawings
FIG. 1 is a flow chart of the steps of a method for optimizing combustion chemistry under a deep artificial neural network of the present invention;
FIG. 2 is a schematic representation of the selection of neighbor point k values of the present invention;
FIG. 3 is a schematic diagram of a single-hidden-layer perceptron model of the present invention;
FIG. 4 is a schematic representation of the temperature of dimethyl ether/air over time in a combustion reaction according to the present invention;
FIG. 5 is a schematic representation of the evolution over time of the chemical reaction components of the dimethyl ether/air combustion reaction of the present invention;
FIG. 6 is a graph of ignition delay time error analysis and a comparison of the computational efficiency of the calculation methods.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
The present invention will be described in detail with reference to examples, but the present invention is not limited to these examples.
FIG. 1 is a flow chart of the steps of a method for optimizing combustion chemistry in deep artificial neural networks according to the present invention, specifically comprising:
S1, sampling the original model parameter space by a low-discrepancy sequence method and preprocessing it with the Box-Cox transformation;
The original model parameter space is first sampled. Random-number sampling is the simplest method: a computer generates random numbers in the parameter space as the input parameters of the samples. However, this method is prone to uneven clustering of samples. To avoid sample-stacking problems of the extracted samples in the parameter-value space, a low-discrepancy sequence method is used for sampling in this study. Low-discrepancy sequences have a super-uniform distribution in parameter space and avoid repeated stacking during sampling.
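The low-discrepancy sampling step can be sketched with a Sobol sequence, a common low-discrepancy generator. This is an illustrative sketch using SciPy's quasi-Monte Carlo module; the three-parameter space and its bounds (multiplicative rate factors) are assumptions for demonstration, not values from the patent.

```python
from scipy.stats import qmc

# Sobol low-discrepancy sampler over a hypothetical 3-parameter
# reaction-rate space (the bounds below are illustrative only).
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit_samples = sampler.random_base2(m=7)          # 2**7 = 128 points in [0,1)^3
lower, upper = [0.5, 0.5, 0.5], [2.0, 2.0, 2.0]   # assumed rate-factor bounds
samples = qmc.scale(unit_samples, lower, upper)    # map onto the parameter space
print(samples.shape)
```

Unlike pseudo-random sampling, successive Sobol points fill the space super-uniformly, which is the stacking-avoidance property the text describes.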
Further, the sampled model parameters are subjected to the Box-Cox transformation (abbreviated BCT). BCT is a widely applied generalized power-transformation model proposed jointly by Box and Cox; its principle is to introduce a transformation coefficient λ and apply exponential and logarithmic transformations to the original sequence, obtaining a new sequence consistent with the information of the original sequence that satisfies linearity, normality, independence, and homoscedasticity. The general form of the Box-Cox transformation is:

$$y_i = \begin{cases}\dfrac{x_i^{\lambda}-1}{\lambda}, & \lambda \neq 0 \\ \ln x_i, & \lambda = 0\end{cases} \tag{1}$$

where $x_i$ is the input parameter ($x_i > 0$); $y_i$ is the output parameter, $i = 1, 2, \ldots, n$; and $\lambda$ is the transformation parameter.
Clearly, a reasonable choice of λ in this relation is important. The λ parameter is therefore estimated, using the maximum likelihood method.
Assuming there are s independent variables $(c_1, c_2, \cdots, c_s)$ and n observations $(x_1, x_2, \cdots, x_n)$, the linear relationship of the transformed data can be expressed as:

$$y_i(\lambda) = \beta_1 f_1(c) + \beta_2 f_2(c) + \cdots + \beta_s f_s(c) + \varepsilon_i \tag{2}$$

where $f_1, f_2, \ldots, f_s$ are regression functions and $\varepsilon_i \sim N(0, \sigma^2)$ is a random error term.
The transformed vector $y(\lambda) = \left(y_1(\lambda), y_2(\lambda), \cdots, y_n(\lambda)\right)^{\mathrm{T}}$ satisfies:

$$y(\lambda) = C\beta + \varepsilon \tag{3}$$

where $C$ is the regression matrix of independent variables, $\beta$ the coefficient vector, $\varepsilon$ the error term obeying a normal distribution, and $\mathrm{T}$ denotes transposition. From the above, the distribution of $\varepsilon$ is $\varepsilon \sim N(0, \sigma^2 I)$.
The probability density function of $x$ is then expressed as:

$$p(x) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\left(-\frac{\lVert y(\lambda) - C\beta \rVert^2}{2\sigma^2}\right) J(\lambda, x) \tag{4}$$

where $J(\lambda, x) = \prod_{i=1}^{n} x_i^{\lambda - 1}$ is the Jacobian determinant.
Taking the logarithm of the probability density of the original data gives the log-likelihood:

$$L(\lambda, \beta, \sigma^2 \mid x) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{\lVert y(\lambda) - C\beta \rVert^2}{2\sigma^2} + (\lambda - 1)\sum_{i=1}^{n} \ln x_i \tag{5}$$

Further, fixing the transformation parameter λ and setting the partial derivatives of the likelihood with respect to $\beta$ and $\sigma^2$ to zero gives:

$$\hat{\beta} = \left(C^{\mathrm{T}} C\right)^{-1} C^{\mathrm{T}} y(\lambda) \tag{6}$$

$$\hat{\sigma}^2 = \frac{1}{n}\left\lVert y(\lambda) - C\hat{\beta} \right\rVert^2 \tag{7}$$

Further, substituting $\hat{\beta}$ and $\hat{\sigma}^2$ into the log-likelihood yields:

$$L_{\max}(\lambda \mid x) = -\frac{n}{2}\ln \hat{\sigma}^2(\lambda) + (\lambda - 1)\sum_{i=1}^{n} \ln x_i \tag{8}$$

A search over the log-likelihood $L_{\max}(\lambda \mid x)$, given a set of training data, yields the corresponding parameter value; the value at the extremum of the function is the optimal value.
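The maximum-likelihood selection of λ described above is available off the shelf: SciPy's `boxcox` maximizes the profile log-likelihood over λ, the same criterion as $L_{\max}(\lambda \mid x)$. A minimal sketch on synthetic positive data (the lognormal sample below is an illustrative stand-in for model parameters):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=500)  # positive synthetic "samples"

# boxcox returns the transformed sequence and the lambda that maximizes
# the profile log-likelihood, i.e. the L_max(lambda) search.
y, lam = stats.boxcox(x)
print(lam)
```

For lognormal data the optimal λ is near 0, where the transform reduces to the logarithm, as expected from equation (1).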
S2, performing parameter dimension-reduction analysis of the model, using a manifold learning method.
Manifold learning is a relatively new machine learning method, first proposed by Bregler and Omohundro in 1995. Its core idea is to mine the intrinsic manifold structure of a high-dimensional data space, perform nonlinear dimension reduction on that structure, reveal the manifold distribution, and obtain the meaningful low-dimensional structure hidden in the high-dimensional data. Briefly, the manifold learning method takes the sampled model output parameters as the preliminary training sample set and reconstructs the nonlinear mathematical transformation between the high- and low-dimensional datasets.
Assume the given preliminary training sample dataset $X = \{x_i\} \subset \mathbb{R}^D$, $i = 1, 2, \ldots, N$, is generated from a low-dimensional dataset $Y = \{y_i\} \subset \mathbb{R}^d$, $i = 1, 2, \ldots, N$, through a series of nonlinear mathematical transformations $f$: $x_i = f(y_i) + \varepsilon_i$. A mapping from the high-dimensional space to the low-dimensional space is then constructed, namely $F: \mathbb{R}^D \to \mathbb{R}^d$. Under the condition that the geometric features of the original high-dimensional data are preserved, the manifold learning method yields the low-dimensional coordinate training sample dataset, from which the original high-dimensional data can be reconstructed.
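A nonlinear dimension-reduction step of this kind can be sketched with a standard manifold-learning estimator. The patent does not name a specific algorithm, so Isomap is used here purely as an illustrative choice, and the swiss-roll dataset is a stand-in for high-dimensional combustion output data:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Swiss-roll stand-in: a 2-D manifold embedded in D = 3 dimensions.
X, _ = make_swiss_roll(n_samples=400, random_state=0)

# d = 2 target dimension, k = 10 neighbors (illustrative values).
embedding = Isomap(n_neighbors=10, n_components=2)
Y = embedding.fit_transform(X)
print(Y.shape)
```

The fitted estimator realizes the mapping $F: \mathbb{R}^D \to \mathbb{R}^d$ described above, assigning each high-dimensional sample its low-dimensional coordinates.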
Further, the dimension d and the number of neighbors k in the preliminary training samples are determined as follows. Since a point on a manifold is characterized by a number of parameter variables, the minimum number of variables needed to represent it is called the dimension of that point, denoted d. The low-dimensional manifold dimension d and the neighbor count k strongly influence the resulting low-dimensional manifold features and are the key parameters in manifold learning. In practical application, the low-dimensional coordinate representation of a new high-dimensional observation is given through the known data, and the dimension d is obtained by computing the low-dimensional coordinates of the known training dataset. The choice of the neighbor count k generally follows these principles: (a) k must not be too large, which for a high-curvature manifold would result in an incorrect representation of its local geometry; (b) too small a k may cause manifold discontinuities; (c) k > d increases the robustness of the algorithm. Based on these principles, the k value is selected as follows: given an unknown sample, find the k training samples in the training dataset closest to it; these k training samples are the k 'neighbors' of the unknown sample. The class of the unknown sample is then determined from the k neighbors, assigning it to their most common class.
FIG. 2 is a schematic representation of the selection of the neighbor-point k value of the present invention. The square and rectangular points in the figure represent two known classes in the training data, and the circular point represents an unknown sample. When k = 1, the single nearest sample determines the class of the unknown point. When k = 6, 4 of the 6 nearest samples are rectangular points and 2 belong to the other class, so the unknown sample is assigned to the rectangular-point class according to the principle that an unknown sample is assigned to the most common class among its k neighbors. By the above method the d value and the k value are determined.
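The neighbor-vote rule illustrated by FIG. 2 can be sketched with a k-nearest-neighbors classifier. The coordinates and the two classes below are illustrative stand-ins for the square and rectangular points of the figure, not data from the patent:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two known classes (0 and 1) and one unknown point, mirroring FIG. 2.
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [5.0, 5.0], [5.0, 6.0], [6.0, 5.0], [6.0, 6.0]])
y_train = np.array([0, 0, 0, 1, 1, 1, 1])
unknown = np.array([[5.5, 5.5]])

# Majority vote among the k nearest training samples decides the class.
predictions = {k: KNeighborsClassifier(n_neighbors=k)
                   .fit(X_train, y_train)
                   .predict(unknown)[0]
               for k in (1, 6)}
print(predictions)
```

With these coordinates the unknown point sits among the class-1 cluster, so both k values vote it into class 1; in FIG. 2 the two k values straddle the class boundary, which is precisely why k must be chosen carefully.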
Further, to ensure correlation between samples in the transformed low-dimensional preliminary training sample dataset, for the given relationships between pairs of high-dimensional data points, a set of points is re-established in the low-dimensional space, requiring that the pairwise distances of the reduced-dimension data points best fit the pairwise distance relationships of the original space. The distance between samples is generally computed by the Euclidean distance method:

$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{N}\left(x_{ik} - x_{jk}\right)^2} \tag{9}$$

where $x_i$ and $x_j$ are the feature vectors of two samples in the training sample set, $N$ is the dimension of the feature vectors, and $x_{ik}$ and $x_{jk}$ are the $k$-th components of the $i$-th and $j$-th sample vectors. The Euclidean distance is inversely related to the correlation of sample data points: a large distance means low correlation, a small distance high correlation. In this study, training sample points with low correlation are eliminated and those with high correlation retained, screening the preliminary training sample dataset into the effective training sample set.
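The Euclidean-distance screening step can be sketched as follows. The helper `screen_by_distance` and the threshold value are hypothetical illustrations of the "drop weakly correlated points" rule, since the patent does not specify a cutoff:

```python
import numpy as np

def screen_by_distance(X, threshold):
    """Keep samples whose nearest-neighbor Euclidean distance (Eq. 9)
    is below `threshold`; distant, weakly correlated points are dropped."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)             # ignore self-distances
    keep = dist.min(axis=1) < threshold
    return X[keep]

# Three tightly clustered samples and one isolated outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [10.0, 10.0]])
kept = screen_by_distance(X, threshold=1.0)
print(kept)
```

The isolated point's nearest-neighbor distance far exceeds the threshold, so only the three correlated samples survive into the effective set.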
S3 artificial neural network model construction
The artificial neural network model is an efficient mathematical surrogate-model method that can be used to optimize and accelerate model computation and to analyze high-dimensional problems. Its core idea is to establish a mathematical relationship that replaces the original model, rapidly generating large numbers of samples in place of the cumbersome solution of physicochemical equations and performing complex analysis and calculation. The obtained effective training sample data are input into the combustion chemical reaction kinetics simulation to compute the multi-physical-parameter values of the chemical reaction. An artificial neural network model is trained under each computational condition to replace the kinetics simulation process; large numbers of samples are then generated rapidly by the artificial neural network, and the model parameters are optimized with the generated samples. A mathematical surrogate model is judged mainly on two aspects: accuracy and computational efficiency. Accuracy measures the agreement between the surrogate's predictions and the original model's sample values; computational efficiency measures the cost of constructing a surrogate of a given accuracy. Since the computational effort of the combustion reaction kinetics simulation is much greater than that of building the surrogate model, the construction efficiency of a mathematical surrogate model can be measured by the number of training samples required to achieve a given accuracy.
In general, the mathematical surrogate relationship can be expressed as:

$$y = \beta_0 + \sum_{r=1}^{n} a_r x_r + \sum_{i=1}^{n}\sum_{j \ge i}^{n} \beta_{ij} x_i x_j + \cdots \tag{10}$$

where $x_1, x_2, \cdots, x_n$ are the input parameters, typically chemical reaction rates; $y$ is the output parameter, generally a characteristic to be calculated such as the mole fractions of the components after reaction, temperature, pressure, or ignition delay time; $\beta_0$ is a constant term; and $a_r$ and $\beta_{ij}$ are characteristic coefficients.
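A polynomial response surface of the form of equation (10) can be fitted by ordinary least squares on expanded polynomial features. This sketch uses a synthetic quadratic ground truth (the coefficients and three-parameter input are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 3))                     # stand-in rate inputs
y = 2.0 + X @ np.array([1.0, -0.5, 0.3]) + 0.8 * X[:, 0] * X[:, 1]

# Degree-2 expansion supplies the constant, linear, and cross terms
# beta_0 + sum a_r x_r + sum beta_ij x_i x_j of equation (10).
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, y)
r2 = surrogate.score(X, y)
print(round(r2, 4))
```

On this noise-free example the surrogate recovers the generating polynomial essentially exactly, which is the sense in which the surrogate "replaces" the original model within its sampled range.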
Further, parameter sensitivity analysis of the mathematical surrogate is performed. The effective training sample data satisfying the Euclidean distance formula (9) (e.g. the physical parameters $x_i$ characterizing the combustion chemical reaction rates) are input into the mathematical surrogate model (10) for model-parameter sensitivity analysis. The sensitivity coefficients are used to identify the key chemical reaction equations; combustion chemical equations with low sensitivity coefficients are removed, screening and optimizing the number of elementary combustion reaction equations and improving the overall computational level of the model. The local sensitivity analysis method estimates the partial derivative of the output parameter with respect to an input parameter by a first-order difference. Local sensitivity analysis of a combustion reaction kinetic model containing n rate parameters requires n + 1 calculations. In the direct calculation method, the local sensitivity coefficient of the i-th parameter can be expressed as:

$$S_i = \frac{f(x_0 + \Delta x_i) - f(x_0)}{\Delta x_i} \tag{11}$$

where $S_i$ denotes the local sensitivity corresponding to the $i$-th parameter $x_i$; $x_0$ denotes the original value of the input parameter, and $f(x_0)$ the original model prediction, i.e. the prediction when parameter uncertainty is not considered; $\Delta x_i$ is the perturbation of the input parameter, related to the uncertainty of the rate constant; and $f(x_0 + \Delta x_i)$ is the model prediction after parameter $x_i$ is perturbed.
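The first-order-difference rule of equation (11) — one baseline evaluation plus one perturbed evaluation per parameter, n + 1 in total — can be sketched directly. The toy model and the 1% relative perturbation are illustrative assumptions:

```python
import numpy as np

def local_sensitivity(f, x0, rel_perturb=0.01):
    """First-order finite-difference local sensitivity (Eq. 11):
    S_i = [f(x0 + dx_i) - f(x0)] / dx_i, one perturbation per parameter."""
    x0 = np.asarray(x0, dtype=float)
    f0 = f(x0)                       # the single baseline evaluation
    S = np.empty_like(x0)
    for i in range(x0.size):
        dx = rel_perturb * x0[i]     # perturbation scaled to the parameter
        xp = x0.copy()
        xp[i] += dx
        S[i] = (f(xp) - f0) / dx
    return S

# Toy "output vs. rate parameters" model with a dominant first parameter.
model = lambda x: 3.0 * x[0] + 0.1 * x[1] ** 2
S = local_sensitivity(model, [1.0, 1.0])
print(S)
```

Parameters whose $|S_i|$ falls below a chosen cutoff would be the candidates for removal in the mechanism-reduction step described above.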
Further, after local sensitivity analysis, the number of model parameters is significantly reduced under each calculation condition. The samples generated under each condition are used to train a DNN surrogate model, using a multi-layer perceptron consisting of three parts: an input layer, a hidden layer, and an output layer. Nodes of the input layer correspond to the input variables of the combustion behavior (e.g. the combustion reaction rate parameters), nodes of the output layer correspond to the different prediction targets, and nodes of the hidden layer connect the input and output layers, as shown in FIG. 3.
The screened parameters are sampled under each condition to generate input-parameter values of random samples, and combustion reaction kinetics numerical simulation computes the predicted values corresponding to the samples; the resulting samples are used to train the characteristic parameters of the DNN. The number of training samples required by the DNN model grows as the parameter dimension increases; the number of samples N required for training the DNN and the number of input parameters M conform to the following fitting relation:
(12)
In the calculation process, a DNN with one hidden layer is selected, the number of hidden-layer nodes being set to half the number of input-layer parameters. Nine tenths of the computed samples are randomly selected as the training set, and the remaining tenth is used as a test set for the DNN error. The weight matrix is adjusted by a back-propagation algorithm, and the activation function is the sigmoid function, suitable for fitting nonlinear mapping relations. The weights between hidden-layer nodes are adjusted iteratively by a gradient descent algorithm to reduce the difference between the MLP prediction and the actual output; the calculation ends when the difference falls below a set threshold.
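The MLP configuration described above — one hidden layer with half as many nodes as inputs, sigmoid activation, a 90/10 train/test split — can be sketched with scikit-learn. The target function and M = 8 inputs are synthetic stand-ins, and the L-BFGS solver is used here for fast convergence on small data rather than the plain gradient descent the text describes:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
M = 8                                           # assumed input-parameter count
X = rng.uniform(size=(1000, M))
y = (X.sum(axis=1) - 4.0) ** 2                  # smooth stand-in target

# 90/10 split, one hidden layer of M//2 nodes, logistic (sigmoid) activation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(M // 2,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
score = mlp.score(X_te, y_te)                   # held-out R^2, the "DNN error" check
print(round(score, 3))
```

The held-out score plays the role of the test-set error check described above; in the patent's workflow the inputs would instead be the sensitivity-screened rate parameters and the targets the kinetics-simulation outputs.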
Further, a case analysis is performed on the two-stage autoignition problem of dimethyl ether/air using the method and operational procedure described above.
FIG. 4 is a graph of dimethyl ether/air temperature over time in the combustion reaction. The results obtained with the open-source chemical reaction kinetics software Cantera are plotted as lines, and the results computed with the deep artificial neural network model (DNN) as scatter points.
As shown in FIG. 4, the temperature curves calculated by the two methods remain substantially identical; the average relative error is only 0.4%. The prediction is slightly worse near 940 K, where the maximum relative error is still only 2.6%. In addition, the evolution of the chemical reaction components in the dimethyl ether/air combustion reaction over time is shown in FIG. 5.
FIG. 5 is a schematic representation of the evolution of the chemical reaction components in the dimethyl ether/air combustion reaction of the present invention over time. As can be seen from FIG. 5, the curves of the evolution of the main combustion components CH3OCH3 and HO2 over time remain the same whether obtained by the Cantera method or the DNN method. FIG. 6 shows the predicted ignition delay time as a function of initial temperature under the conditions of initial pressure p = 10 atm and equivalence ratio ϕ = 1.0. Two branches are included: the ignition delay time due to high-temperature chemistry (HTC for short) and that due to low-temperature chemistry (LTC for short). The temperature interval in which both are present is the two-stage autoignition interval, and the interval in which only the high-temperature chemistry is present is the single-stage autoignition interval. The criterion used to capture low-temperature chemistry is a heat release rate of not less than $10^8$. As the temperature increases, the delay time of the low-temperature chemistry shortens while its associated heat release rate decreases, so its features fade from the temperature picture. A negative-temperature-coefficient phenomenon can be observed in the high-temperature-chemistry branch, i.e. the ignition delay time may instead increase as temperature rises, and the two methods predict the same boundary at which the low-temperature chemistry disappears. From this it can be seen that the DNN method maintains a good level of accuracy.
In addition, FIG. 6 also shows the single step computational efficiency of Cantera and DNN on the CPU. For the current code framework, the speed of DNN (DNN-serial) on the CPU is increased by about 4 times compared with Cantera, and the speed of DNN (DNN-parallel) on the CPU is increased by about 15 times compared with Cantera.
Through the dimethyl ether/air specific example, the developed combustion chemical reaction source term solver based on the deep neural network is verified to be capable of accurately predicting a single-stage self-ignition process and also capable of accurately predicting a two-stage self-ignition process. In addition, the method can improve the calculation speed by an order of magnitude under the condition of ensuring the calculation accuracy compared with the traditional ordinary differential equation set solver. Therefore, the invention shows that the neural network method DNN can replace the traditional ordinary differential equation set solver in the combustion numerical simulation considering the detailed chemical reaction mechanism, realize the accurate prediction of the multi-stage self-ignition process, effectively improve the actual engineering calculation efficiency and can be expanded and applied to more fuel combustion simulation applications.
While the application has been described in terms of preferred embodiments, it will be understood by those skilled in the art that various changes and modifications can be made without departing from the scope of the application, and it is intended that the application is not limited to the specific embodiments disclosed.

Claims (9)

1. A method for optimizing a combustion chemical reaction under a deep artificial neural network, characterized by comprising the following steps:
S1, performing low-discrepancy sequence sampling of the parameter space of the original combustion chemical reaction kinetics model to obtain an initial training sample, and preprocessing the initial training sample;
S2, performing dimension-reduction analysis on the preprocessed model parameter space by adopting a manifold learning method to obtain a preliminary training sample set; screening the preliminary training sample set by using a Euclidean distance method to obtain an effective training sample set;
S3, training the effective training sample set in an artificial neural network model by adopting a mathematical relation method, so as to replace the direct solution of the complex and cumbersome physicochemical equations, and then performing optimization analysis of the original combustion chemical reaction kinetics model parameters by adopting a sensitivity analysis method.
2. The method for optimizing a combustion chemical reaction under a deep artificial neural network according to claim 1, wherein the preprocessing of the initial training sample in S1 comprises preprocessing the initial training sample by using the Box-Cox transformation,
The Box-Cox transformation formula is:

$$y_i^{(\lambda)} = \begin{cases} \dfrac{y_i^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\ \ln y_i, & \lambda = 0, \end{cases}$$

where y_i is the input parameter, with y_i > 0; y_i^{(λ)} is the output parameter, i = 1, 2, …, n; and λ is the transformation parameter.
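By way of illustration only (not part of the claims), the Box-Cox transformation of claim 2 can be sketched in a few lines of Python; the function name `box_cox` is illustrative:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a single positive sample y (claim 2).

    lam != 0: (y**lam - 1) / lam;  lam == 0: ln(y).
    """
    if y <= 0:
        raise ValueError("the Box-Cox transformation requires y > 0")
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

# lam = 1 reduces to a simple shift y - 1; lam = 0 is the natural log
assert box_cox(2.0, 1.0) == 1.0
assert abs(box_cox(math.e, 0.0) - 1.0) < 1e-12
```

In practice the transformation parameter λ would be fitted to each input variable (e.g. by maximum likelihood) before training.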
3. The method for optimizing combustion chemical reaction under deep artificial neural network according to claim 1, wherein the step S2 of performing dimension reduction analysis on the model parameter space after pretreatment by adopting a manifold learning method comprises:
The manifold learning method uses the initial training sample set obtained after the Box-Cox transformation. Assume a given sample set X = {x_i} ⊂ R^D, i = 1, 2, …, N, and assume that X is obtained from a data set Y = {y_i} ⊂ R^d, i = 1, 2, …, N, in the low-dimensional space through a nonlinear mathematical transformation f and sample noise, that is:

$$x_i = f(y_i) + \varepsilon_i ,$$

then a mapping from the high-dimensional space to the low-dimensional space is constructed, namely f⁻¹: R^D → R^d, to obtain the low-dimensional coordinates of the high-dimensional observation data, where d denotes the dimension of the low-dimensional manifold, D denotes the dimension of the high-dimensional manifold, and d << D; ε_i denotes the sample noise, R denotes the real space, and N denotes the total number of samples of the sample set.
4. The method for optimizing combustion chemistry under deep artificial neural network according to claim 3, further comprising,
determining the dimension d and the number k of neighbor points in the preliminary training sample, wherein the low-dimensional manifold dimension d is obtained by calculating the low-dimensional coordinates of the known training sample set X; and the selection of the value k of the number of neighbor points comprises: given an unknown sample, finding the k samples closest to it in the training sample set, these k training samples being the k "nearest neighbors" of the unknown sample; the class to which the unknown sample belongs is then determined from the k nearest neighbors, the unknown sample being assigned to the most common class among them.
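For illustration, the nearest-neighbor class assignment described in claim 4 can be sketched as follows; the helper `knn_class` and the toy data are illustrative, not part of the claimed method:

```python
import math
from collections import Counter

def knn_class(sample, train, labels, k):
    """Assign `sample` to the most common class among its k nearest
    neighbours in `train`, using Euclidean distance (claim 4)."""
    order = sorted(range(len(train)),
                   key=lambda i: math.dist(sample, train[i]))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# toy data: two samples of class "a" near the origin outvote class "b"
train = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0)]
labels = ["a", "a", "b"]
assert knn_class((0.05, 0.05), train, labels, 3) == "a"
```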
5. The method for optimizing combustion chemistry under deep artificial neural network according to claim 4, wherein,
in order to ensure the relevance between samples in the training sample set after dimension reduction, a group of training sample data points is re-selected from the k nearest-neighbor similarity classes in the low-dimensional space, so that the distances between the data points fit the distance relation between the original-space data point pairs; the distance between data points in the training sample is calculated by the Euclidean distance method:
$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{N} (x_{ik} - x_{jk})^2},$$

where x_i and x_j respectively denote the feature vector of one sample in the training sample set and the feature vector of another sample, and N is the dimension of the feature vector; x_ik and x_jk respectively denote the ith and jth sample vectors in the kth dimension of the training sample set. The greater the Euclidean distance, the lower the correlation of the two samples; conversely, the smaller the distance, the higher the correlation.
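The Euclidean-distance screening of claim 5 does not spell out the exact retention rule; one plausible reading, shown here as an illustrative sketch, keeps a sample only if it is sufficiently far from every sample already kept (the `threshold` parameter is an assumption):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors (claim 5)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def screen(samples, threshold):
    """Keep a sample only if it lies at least `threshold` away from every
    sample already kept, so near-duplicates (highly correlated samples)
    are discarded from the effective training sample set."""
    kept = []
    for s in samples:
        if all(euclidean(s, t) >= threshold for t in kept):
            kept.append(s)
    return kept

# the second point is within 0.5 of the first and is screened out
assert screen([(0.0, 0.0), (0.05, 0.0), (1.0, 0.0)], 0.5) == [(0.0, 0.0), (1.0, 0.0)]
```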
6. The method for optimizing a combustion chemical reaction under a deep artificial neural network according to claim 1, wherein the training of the effective sample set by the mathematical relation method in the artificial neural network model in S3 is specifically:

$$y = \beta_0 + \sum_{i=1}^{n} \beta_i x_i + \sum_{i=1}^{n} \sum_{j=i}^{n} \beta_{ij} x_i x_j ,$$

where y is the output parameter of the combustion chemical reaction model; β_0 is a constant term; x_1, x_2, ···, x_n are the input parameters of the combustion chemical reaction; and β_i and β_ij are the characteristic coefficients.
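By way of illustration, a second-order polynomial surrogate with a constant term, linear coefficients, and cross-term coefficients — one common reading of the mathematical relation in claim 6 — can be evaluated as follows (the function and parameter names `surrogate`, `b0`, `b`, `B` are illustrative assumptions):

```python
def surrogate(x, b0, b, B):
    """Second-order polynomial response surface:
    y = b0 + sum_i b[i]*x[i] + sum over (i, j) of B[(i, j)]*x[i]*x[j]."""
    y = b0 + sum(bi * xi for bi, xi in zip(b, x))
    y += sum(c * x[i] * x[j] for (i, j), c in B.items())
    return y

# y = 1 + 2*x0 + 3*x0*x1 at x = (1, 2): 1 + 2 + 6 = 9
assert surrogate((1.0, 2.0), 1.0, [2.0, 0.0], {(0, 1): 3.0}) == 9.0
```

The characteristic coefficients would be fitted to the effective training sample set, after which the surrogate replaces the direct solution of the physicochemical equations.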
7. The method for optimizing combustion chemistry under deep artificial neural network according to claim 6, wherein the sensitivity analysis method in S3 specifically comprises:
the local sensitivity coefficient of the ith parameter is expressed as:

$$S_i = \frac{f(x_0 + \Delta x_i) - f(x_0)}{\Delta x_i},$$

where S_i represents the local sensitivity corresponding to the ith parameter x_i; x_0 represents the original value of the input parameter, and f(x_0) represents the original model prediction, i.e., the model prediction when parameter uncertainty is not considered; Δx_i is the perturbation of the input parameter, related to the uncertainty of the rate constant; and f(x_0 + Δx_i) is the model prediction after perturbing the parameter x_i.
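The local sensitivity coefficient of claim 7 is a one-sided finite difference; a minimal sketch (the function name `local_sensitivity` is illustrative):

```python
def local_sensitivity(f, x0, i, dx):
    """One-sided finite-difference estimate of the local sensitivity
    S_i = (f(x0 + dx*e_i) - f(x0)) / dx of the model f to parameter i."""
    xp = list(x0)
    xp[i] += dx
    return (f(xp) - f(list(x0))) / dx

# f is linear in x1, so the estimate recovers the coefficient 3
f = lambda x: x[0] ** 2 + 3.0 * x[1]
assert abs(local_sensitivity(f, [1.0, 1.0], 1, 1e-3) - 3.0) < 1e-6
```

In the claimed method, `f` would be the trained surrogate model and `dx` would be set from the uncertainty of the corresponding rate constant.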
8. The method for optimizing combustion chemistry under deep artificial neural network according to claim 1, further comprising,
Using a multi-layer perceptron model consisting of an input layer, a hidden layer and an output layer, and using samples generated under each calculation condition to train an artificial neural network substitution model; the nodes on the input layer correspond to input variables of combustion behaviors, the nodes on the output layer correspond to different prediction targets, and the nodes on the hidden layer are connected with the input layer and the output layer.
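For illustration, the forward pass of the multi-layer perceptron described in claim 8 can be sketched as follows (the weights, biases, and tanh activation are illustrative choices; the claim does not fix the activation function):

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer perceptron (claim 8):
    input layer -> hidden layer (tanh) -> output layer (linear)."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

# zero input through zero biases gives zero output (tanh(0) = 0)
assert mlp_forward([0.0], [[1.0]], [0.0], [[1.0]], [0.0]) == [0.0]
```

The input-layer nodes correspond to the combustion input variables and the output-layer nodes to the prediction targets; the weights would be fitted on the samples generated under each calculation condition.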
9. The method for optimizing combustion chemistry under deep artificial neural network according to claim 8, wherein,
The number of samples N required for training the artificial neural network and the number of input parameters M accord with the following fitting relation:
CN202410320144.9A 2024-03-20 2024-03-20 Optimization method for combustion chemical reaction based on deep artificial neural network Active CN117912585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410320144.9A CN117912585B (en) 2024-03-20 2024-03-20 Optimization method for combustion chemical reaction based on deep artificial neural network


Publications (2)

Publication Number Publication Date
CN117912585A true CN117912585A (en) 2024-04-19
CN117912585B CN117912585B (en) 2024-06-25

Family

ID=90682397


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6961719B1 (en) * 2002-01-07 2005-11-01 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Hybrid neural network and support vector machine method for optimization
CN106547862A (en) * 2016-10-31 2017-03-29 中原智慧城市设计研究院有限公司 Traffic big data dimension-reduction treatment method based on manifold learning
RU2713850C1 (en) * 2018-12-10 2020-02-07 Федеральное государственное бюджетное учреждение науки Институт теплофизики им. С.С. Кутателадзе Сибирского отделения Российской академии наук (ИТ СО РАН) Fuel combustion modes monitoring system by means of torch images analysis using classifier based on convolutional neural network
CN116343936A (en) * 2023-05-10 2023-06-27 上海交通大学 Combustion chemical reaction calculation acceleration method based on deep neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Jiaxing: "Research on Uncertainty Analysis of Combustion Reaction Kinetics Models Based on Experimental Design", China Doctoral Dissertations Full-text Database (Electronic Journal), no. 2, 15 February 2021 (2021-02-15), pages 039-8 *
Li Jinming, Li Menglong, Yuan Lixiang: "Application and Research of the Improved ANN-BP Algorithm in Carbon Black Process Modeling", Journal of Sichuan University (Natural Science Edition), no. 03, 28 June 2004 (2004-06-28), pages 612-617 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant