CN114971032A - Electronic nose online gas concentration prediction method based on OS-ELM - Google Patents
- Publication number
- CN114971032A CN114971032A CN202210608225.XA CN202210608225A CN114971032A CN 114971032 A CN114971032 A CN 114971032A CN 202210608225 A CN202210608225 A CN 202210608225A CN 114971032 A CN114971032 A CN 114971032A
- Authority
- CN
- China
- Prior art keywords
- data
- elm
- model
- hidden layer
- electronic nose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N33/00—Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
- G01N33/0004—Gaseous mixtures, e.g. polluted air
- G01N33/0009—General constructional details of gas analysers, e.g. portable test equipment
- G01N33/0062—General constructional details of gas analysers, e.g. portable test equipment concerning the measuring method or the display, e.g. intermittent measurement or digital display
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N33/00—Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
- G01N33/0004—Gaseous mixtures, e.g. polluted air
- G01N33/0009—General constructional details of gas analysers, e.g. portable test equipment
- G01N33/0062—General constructional details of gas analysers, e.g. portable test equipment concerning the measuring method or the display, e.g. intermittent measurement or digital display
- G01N33/0068—General constructional details of gas analysers, e.g. portable test equipment concerning the measuring method or the display, e.g. intermittent measurement or digital display using a computer specifically programmed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention provides an online gas concentration prediction method based on an OS-ELM electronic nose system, belonging to the technical field of sensors. The method addresses the high model-training cost and insufficient accuracy of electronic-nose concentration prediction. First, the online learning mechanism of the OS-ELM is exploited: when a new batch of samples arrives, the existing model adapts to it through an iterative update alone, which reduces the training cost. Second, because ELM-family models set the input weights and hidden-layer biases randomly, which degrades prediction accuracy, an improved PSO algorithm is used to search for the optimal values of these hyperparameters, securing the effectiveness of the algorithm. The invention reduces the cost of repeatedly retraining the electronic-nose gas recognition model while improving concentration-prediction accuracy, and therefore has significant application value in electronic nose systems.
Description
Technical Field
The invention belongs to the technical field of sensors, and discloses an online gas concentration prediction method based on an OS-ELM electronic nose system.
Background
The electronic nose is an artificial olfaction system combining hardware and software; it comprises a sensor array, a signal-processing module and a pattern recognition algorithm. The pattern recognition algorithm is the most critical part of an electronic nose system: for many high-precision sensors the electrical signal cannot be converted into an actual concentration value by a simple linear formula, so the quality of the pattern recognition algorithm strongly affects the gas recognition result.
The gas identification task of the electronic nose divides into gas component identification and gas concentration prediction. Component identification is qualitative analysis: the electronic nose identifies which components are present in a gas mixture. Concentration prediction is quantitative analysis: the system must predict the actual concentration of an unknown sample from the information of known samples, which is the more difficult task.
The gas concentration prediction algorithm of an electronic nose system faces two main problems: the accuracy of the prediction itself, and the training cost incurred when each new batch of samples forces the model to be retrained. In practice the gas concentration is neither single nor fixed, so to maintain prediction accuracy the model must be updated whenever new samples arrive. Retraining from scratch, however, reprocesses all past samples, and this redundancy drives up the training cost; yet if the model is left un-updated to avoid that cost, its accuracy degrades as unlearned samples accumulate.
Reducing the training cost while preserving prediction accuracy is therefore a central topic for electronic-nose gas concentration prediction. The online gas concentration prediction method based on the OS-ELM electronic nose system disclosed in this patent achieves good results in concentration prediction and, through an online learning mechanism, updates the original model online instead of retraining it, reducing the training cost of the prediction process. This further increases the practical value of the electronic nose.
Disclosure of Invention
In view of the above, the present invention provides an online gas concentration prediction method based on an OS-ELM electronic nose system. Among the multiple batches of samples, the first batch is used for initial training and the subsequent batches for online training, so as to improve prediction accuracy while reducing training cost in electronic-nose gas concentration prediction.
In order to achieve the purpose, the invention provides the following technical scheme:
an online gas concentration prediction method based on an OS-ELM electronic nose system comprises the following steps:
step 1) carrying out initial training on a model by using a first batch of samples;
further, the step 1) comprises the following steps:
step 11) input the first batch of sample data D_1;
Step 12) preprocess this batch: normalize the data, whose values differ widely and whose dimensionality is high, then apply PCA to the original data to obtain the dimensionality-reduced sample set D_pca with feature dimension N;
step 13) divide the sample set D_pca into D_train = {x, y} and D_test = {x_test, y_test}, and input D_train into the ELM model for training;
step 14) use the improved PSO algorithm to search for the input-to-hidden-layer weight a and the hidden-layer bias β of the ELM neural network, taking the evaluation function of the prediction result on D_test as the PSO objective function y(t), and set the maximum number of PSO iterations to n;
step 15) the inertia weight factor ω of the PSO algorithm varies with the iteration count t ∈ [0, n]: it is kept relatively large early in the iteration and small late, so that the algorithm has strong global search ability early and strong local search ability late, with ω ∈ [ω_min, ω_max] declining exponentially over the iterations.
The learning factors c_1 and c_2 also affect the search ability of the PSO algorithm and therefore vary synchronously with the inertia weight factor ω, with c_1, c_2 ∈ [c_min, c_max]. The variation functions of the learning factors are:
c_1 = c_min + ω_t
c_2 = c_max - ω_t
The velocity of each particle i is then updated from the inertia weight factor and the learning factors (the standard PSO velocity update), v_i(t+1) = ω_t v_i(t) + c_1 r_1 (pbest_i - x_i(t)) + c_2 r_2 (gbest - x_i(t)), with r_1, r_2 random in [0, 1];
and the current position of the particle is updated as x_i(t+1) = x_i(t) + v_i(t+1).
Judge whether the objective-function value y(t) of the current particle i is better than its individual best pbest_i; if so, replace pbest_i. Then judge whether pbest_i is better than the global best gbest_k; if so, replace gbest_k.
Judge whether the maximum iteration count n has been reached; if so, output the optimal input-to-hidden-layer weight a_best and hidden-layer bias β_best;
step 16) substitute a_best and β_best into the ELM model for training; taking g(a_i, β_i, x_j) to be the common sigmoid function, compute the hidden-layer output matrix H_0 of the ELM model.
Step 2) performing on-line training according to the input sample batch number;
further, the step 2) comprises the following steps:
step 21) input the K-th batch of sample data D_K;
Step 22) preprocess this batch: normalize the data, whose values differ widely and whose dimensionality is high, then apply PCA to reduce the original data to feature dimension N, obtaining the dimensionality-reduced sample set D_pca;
Step 23) bring the optimal input-to-hidden-layer weight a_best and hidden-layer bias β_best obtained in the initial training into the OS-ELM model and compute its hidden-layer output matrix H_{k+1}. Letting P_0 = (H_0^T H_0)^{-1}, P_{k+1} can be expressed by the recursion
P_{k+1} = P_k - P_k H_{k+1}^T (I + H_{k+1} P_k H_{k+1}^T)^{-1} H_{k+1} P_k,
and the output weight after the OS-ELM iteration is solved as
β_{k+1} = β_k + P_{k+1} H_{k+1}^T (T_{k+1} - H_{k+1} β_k);
step 24) save the output weight β_{k+1} of the current round and wait for the next batch of samples;
and 3) predicting the sample test set by using the final model after iterative update.
Further, the step 3) comprises the following steps:
step 31) input the test set T of sample data;
step 32) from the hidden-layer output matrix H_T and the final output weight β_N, obtain the prediction result T = H_T β_N;
Step 33) evaluating the model concentration prediction result by using the evaluation function.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in detail with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The invention builds a network for online prediction of gas concentration around the online learning mechanism of the OS-ELM model, to reduce the cost of retraining the model during concentration prediction. In addition, because ELM-family models determine the input-to-hidden-layer weights and hidden-layer biases randomly, the prediction result is uncertain; the improved PSO algorithm strengthens the global search and finds the optimal network parameters, improving the model's final prediction accuracy. The method comprises the following steps: step 1) initially train the model on the first batch of samples and search out the optimal network parameters; step 2) on that basis, iteratively update the model's output weights as new samples arrive, achieving online learning; and step 3) use the updated final model to predict the test set, evaluating the concentration prediction with an evaluation function.
Step 1) carrying out initial training on a model by using a first batch of samples;
further, the step 1) comprises the following steps:
step 11) input the first batch of sample data D_1;
Step 12) preprocess this batch. The data values differ widely and the dimensionality is high, so normalization reduces their distorting influence on the model. PCA is then applied to the original data, keeping the principal components with the largest contributions, which lowers noise and computational overhead. The result is the dimensionality-reduced sample set D_pca with feature dimension N;
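As an illustration, the preprocessing chain of Step 12 (normalization followed by PCA down to N dimensions) could be sketched as below. The SVD-based PCA and the choice of min-max scaling are assumptions; the patent does not fix a specific normalization, and the toy sensor data is invented for the example.

```python
import numpy as np

def preprocess(X, n_components):
    """Min-max normalize each feature, then project onto the top
    n_components principal axes (PCA via SVD)."""
    # normalize features whose numeric ranges differ widely
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    # center, then take the leading right-singular vectors as principal axes
    Xc = Xn - Xn.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy first batch D_1: 50 samples from an 8-channel sensor array
X = np.random.RandomState(0).rand(50, 8)
D_pca = preprocess(X, n_components=3)   # feature dimension N = 3
print(D_pca.shape)
```

Subsequent batches would be projected onto the same axes so that every batch keeps feature dimension N.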
step 13) divide the sample set D_pca into D_train = {x, y} and D_test = {x_test, y_test}, and input D_train into the ELM model for training;
step 14) because the input-to-hidden-layer weight a and the hidden-layer bias β of ELM-family models are determined randomly, using the ELM model directly carries some uncertainty; a search-based optimization algorithm that finds the optimal parameters can secure the accuracy of the prediction result. The improved PSO algorithm is therefore used to search for a and β, taking the evaluation function of the prediction result on D_test as the PSO objective function y(t), with the maximum number of PSO iterations set to n;
step 15) in the traditional PSO algorithm the inertia weight factor ω is fixed. A relatively large ω gives good global search ability but weak late-stage local search, which hinders convergence; a relatively small ω weakens the early global search, so the global optimum may never be found. ω should therefore decrease over the iterations, giving strong global search early and strong local search late and improving the overall behaviour of the algorithm, whereas a hybrid-algorithm approach would add computational overhead and forfeit the advantages of PSO. Accordingly, ω varies directly with the iteration count t ∈ [0, n]: it is relatively large early and small late, with ω ∈ [ω_min, ω_max] declining exponentially, a nonlinear schedule that performs better than the usual linear decline.
The learning factors c_1 and c_2 also affect the search ability of PSO. Using only c_1 yields a purely cognitive model with only individual learning; using only c_2 yields a purely social model; neither alone improves the overall search. Early on, each particle should rely on the individual learning factor c_1, which therefore decreases during the iteration, while later it should rely on the social learning factor c_2, which increases. Both vary synchronously with the inertia weight factor ω, with c_1, c_2 ∈ [c_min, c_max]. The variation functions of the learning factors are:
c_1 = c_min + ω_t
c_2 = c_max - ω_t
The particle velocity must be neither too high nor too low: too high and particles fly beyond the bounds and diverge, making convergence difficult; too low and they converge slowly and miss the global optimum. The velocity of each particle i is updated from the inertia weight factor and the learning factors (the standard PSO velocity update): v_i(t+1) = ω_t v_i(t) + c_1 r_1 (pbest_i - x_i(t)) + c_2 r_2 (gbest - x_i(t)), with r_1, r_2 random in [0, 1].
The current position of each particle is then updated as x_i(t+1) = x_i(t) + v_i(t+1).
Judge whether the objective-function value y(t) of the current particle i is better than its individual best pbest_i; if so, replace pbest_i. Then judge whether pbest_i is better than the global best gbest_k; if so, replace gbest_k. This is a synchronous update scheme: every time a particle's velocity and position are computed, the algorithm checks whether the best positions need updating.
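The PSO loop described above can be sketched as follows. The decay constant of the exponential ω schedule (5.0 here) and the sphere objective are placeholder assumptions: in the patent, the objective is the evaluation function of the ELM prediction on D_test, and the exact decay form is not given.

```python
import numpy as np

def objective(x):                  # placeholder objective; the patent
    return float(np.sum(x ** 2))   # evaluates the ELM test-set error here

rng = np.random.RandomState(1)
n_particles, dim, n_iter = 20, 4, 100
w_min, w_max = 0.4, 0.9
c_min, c_max = 0.5, 2.5

pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
init_best = pbest_val.min()

for t in range(n_iter):
    # inertia weight: large early (global search), small late (local search);
    # one plausible form of the "exponential decline" the patent describes
    w = w_min + (w_max - w_min) * np.exp(-5.0 * t / n_iter)
    c1 = c_min + w    # individual learning factor falls with w
    c2 = c_max - w    # social learning factor rises as w falls
    r1 = rng.rand(n_particles, dim)
    r2 = rng.rand(n_particles, dim)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    # synchronous update: check the best positions after every move
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print(objective(gbest) <= init_best)
```

In the patent's setting, `gbest` would encode the flattened weights a and biases β of the ELM hidden layer.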
Judge whether the maximum iteration count n has been reached; if so, output the optimal input-to-hidden-layer weight a_best and hidden-layer bias β_best.
Step 16) substitute a_best and β_best into the ELM model for training. An advantage of the ELM is that once its parameters are determined they need no further adjustment, so computation is fast, while the optimized parameters secure the algorithm's accuracy. From a_best and β_best the output matrix H can be determined; taking the activation function g(a_i, β_i, x_j) to be the common sigmoid, the hidden-layer output matrix H_0 of the ELM model is computed, with entries H_0[j, i] = g(a_i · x_j + β_i).
From H_0 the initial output weight of the ELM model can be calculated as β_0 = P_0 H_0^T T_0, where P_0 = (H_0^T H_0)^{-1} and T_0 is the target matrix of the first batch. Because the output weights of the OS-ELM model are updated iteratively, the current weight values must be recorded to enable the next update. The OS-ELM can update on a batch of samples at a time, or on a single sample, updating the model online.
Step 2) performing on-line training according to the input sample batch number;
further, the step 2) comprises the following steps:
step 21) input the K-th batch of sample data D_K;
Step 22) preprocess this batch: normalize the data, whose values differ widely and whose dimensionality is high, then apply PCA to reduce the original data to feature dimension N (each batch must keep the same dimensionality as the first batch for consistency), obtaining the dimensionality-reduced sample set D_pca;
Step 23) bring the optimal input-to-hidden-layer weight a_best and hidden-layer bias β_best obtained in the initial training into the OS-ELM model and compute its hidden-layer output matrix H_{k+1}. Letting P_0 = (H_0^T H_0)^{-1}, P_{k+1} can be expressed by the recursion
P_{k+1} = P_k - P_k H_{k+1}^T (I + H_{k+1} P_k H_{k+1}^T)^{-1} H_{k+1} P_k,
and the output weight after the OS-ELM iteration is solved as
β_{k+1} = β_k + P_{k+1} H_{k+1}^T (T_{k+1} - H_{k+1} β_k);
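Steps 21-24 amount to the standard OS-ELM recursive least-squares update, sketched below. The toy data, the fixed hidden layer (a, b standing in for a_best and β_best), and the small ridge in the initialisation are assumptions of the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
L = 20
a = rng.uniform(-1, 1, (3, L))        # fixed hidden layer: a_best
b = rng.uniform(-1, 1, L)             # and bias beta_best
true_w = np.array([1.0, -2.0, 0.5])   # hidden concentration law

# --- initialisation on the first batch (Step 1) ---
X0 = rng.rand(40, 3); y0 = X0 @ true_w
H0 = sigmoid(X0 @ a + b)
P = np.linalg.inv(H0.T @ H0 + 1e-8 * np.eye(L))
beta = P @ H0.T @ y0

# --- batch k+1 arrives: update P and beta recursively (Step 2);
#     the first batch is never retrained ---
Xk = rng.rand(15, 3); yk = Xk @ true_w
Hk = sigmoid(Xk @ a + b)
P = P - P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T) @ Hk @ P
beta = beta + P @ Hk.T @ (yk - Hk @ beta)

new_batch_mae = np.abs(Hk @ beta - yk).mean()
print(new_batch_mae < 0.2)
```

Only P and beta carry over between batches, which is the source of the training-cost saving: the update cost depends on the new batch size, not on the total number of samples seen.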
step 24) save the output weight β_{k+1} of the current round and wait for the next batch of samples;
and 3) predicting the sample test set by using the final model after iterative update.
Further, the step 3) comprises the following steps:
step 31) input the test set T of sample data;
step 32) from the hidden-layer output matrix H_T and the final output weight β_N, obtain the prediction result T = H_T β_N;
Step 33) evaluate the model's concentration prediction with the evaluation function; since concentration prediction is quantitative analysis, the MAE, RMSE and R values are used as the evaluation indices.
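The evaluation of Step 33 might look as follows. The R index is implemented here as the coefficient of determination R², which is an assumption, since the patent only names it "R"; the sample values are invented for the example.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """MAE, RMSE and R^2 for a concentration prediction."""
    err = y_true - y_pred
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    ss_res = float((err ** 2).sum())
    ss_tot = float(((y_true - y_true.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
mae, rmse, r2 = evaluate(y_true, y_pred)
print(round(mae, 3), round(rmse, 3), round(r2, 3))  # 0.15 0.158 0.98
```

MAE and RMSE should both approach zero for a good predictor, while R² should approach one.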
Finally, it is to be understood that the above embodiments are intended to illustrate rather than limit the technical solutions of the present invention, and although the invention has been described in detail through these embodiments, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the invention as defined by the appended claims.
Claims (4)
1. An online gas concentration prediction method based on an OS-ELM electronic nose system is characterized by comprising the following steps:
step 1) carrying out initial training on a model by using a first batch of samples;
step 2) performing on-line training according to the input sample batch number;
and 3) predicting the sample test set by using the final model after iterative update.
2. The on-line gas concentration prediction method based on the OS-ELM electronic nose system of claim 1, wherein: the step 1) comprises the following steps:
step 11) input the first batch of sample data D_1;
Step 12) preprocess this batch: normalize the data, whose values differ widely and whose dimensionality is high, then apply PCA to the original data to obtain the dimensionality-reduced sample set D_pca with feature dimension N;
step 13) divide the sample set D_pca into D_train = {x, y} and D_test = {x_test, y_test}, and input D_train into the ELM model for training;
step 14) use the improved PSO algorithm to search for the input-to-hidden-layer weight a and the hidden-layer bias β of the ELM neural network, taking the evaluation function of the prediction result on D_test as the PSO objective function y(t), and set the maximum number of PSO iterations to n;
step 15) the inertia weight factor ω of the PSO algorithm varies with the iteration count t ∈ [0, n]: it is kept relatively large early in the iteration and small late, so that the algorithm has strong global search ability early and strong local search ability late, with ω ∈ [ω_min, ω_max] declining exponentially over the iterations.
The learning factors c_1 and c_2 also affect the search ability of the PSO algorithm and therefore vary synchronously with the inertia weight factor ω, with c_1, c_2 ∈ [c_min, c_max]. The variation functions of the learning factors are:
c_1 = c_min + ω_t
c_2 = c_max - ω_t
The velocity of each particle i is then updated from the inertia weight factor and the learning factors (the standard PSO velocity update), v_i(t+1) = ω_t v_i(t) + c_1 r_1 (pbest_i - x_i(t)) + c_2 r_2 (gbest - x_i(t)), with r_1, r_2 random in [0, 1];
and the current position of the particle is updated as x_i(t+1) = x_i(t) + v_i(t+1).
Judge whether the objective-function value y(t) of the current particle i is better than its individual best pbest_i; if so, replace pbest_i. Then judge whether pbest_i is better than the global best gbest_k; if so, replace gbest_k.
Judge whether the maximum iteration count n has been reached; if so, output the optimal input-to-hidden-layer weight a_best and hidden-layer bias β_best.
Step 16) substitute a_best and β_best into the ELM model for training; taking g(a_i, β_i, x_j) to be the common sigmoid function, compute the hidden-layer output matrix H_0 of the ELM model.
3. The on-line gas concentration prediction method based on the OS-ELM electronic nose system of claim 1, wherein: the step 2) comprises the following steps:
step 21) input the K-th batch of sample data D_K;
Step 22) preprocess this batch: normalize the data, whose values differ widely and whose dimensionality is high, then apply PCA to reduce the original data to feature dimension N, obtaining the dimensionality-reduced sample set D_pca;
Step 23) bring the optimal input-to-hidden-layer weight a_best and hidden-layer bias β_best obtained in the initial training into the OS-ELM model and compute its hidden-layer output matrix H_{k+1}. Letting P_0 = (H_0^T H_0)^{-1}, P_{k+1} can be expressed by the recursion
P_{k+1} = P_k - P_k H_{k+1}^T (I + H_{k+1} P_k H_{k+1}^T)^{-1} H_{k+1} P_k,
and the output weight after the OS-ELM iteration is solved as
β_{k+1} = β_k + P_{k+1} H_{k+1}^T (T_{k+1} - H_{k+1} β_k);
step 24) save the output weight β_{k+1} of the current round and wait for the next batch of samples.
4. The on-line gas concentration prediction method based on the OS-ELM electronic nose system of claim 1, wherein: the step 3) comprises the following steps:
step 31) input the test set T of sample data;
step 32) from the hidden-layer output matrix H_T and the final output weight β_N, obtain the prediction result T = H_T β_N;
Step 33) evaluating the model concentration prediction result by using the evaluation function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210608225.XA CN114971032A (en) | 2022-05-31 | 2022-05-31 | Electronic nose online gas concentration prediction method based on OS-ELM |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210608225.XA CN114971032A (en) | 2022-05-31 | 2022-05-31 | Electronic nose online gas concentration prediction method based on OS-ELM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114971032A true CN114971032A (en) | 2022-08-30 |
Family
ID=82957561
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210608225.XA Pending CN114971032A (en) | 2022-05-31 | 2022-05-31 | Electronic nose online gas concentration prediction method based on OS-ELM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114971032A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116735146A (en) * | 2023-08-11 | 2023-09-12 | 中国空气动力研究与发展中心低速空气动力研究所 | Wind tunnel experiment method and system for establishing aerodynamic model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110532471B (en) | Active learning collaborative filtering method based on gated cyclic unit neural network | |
CN112949828B (en) | Graph convolution neural network traffic prediction method and system based on graph learning | |
CN109766583A (en) | Based on no label, unbalanced, initial value uncertain data aero-engine service life prediction technique | |
CN111861909B (en) | Network fine granularity image classification method | |
CN106203534A (en) | A kind of cost-sensitive Software Defects Predict Methods based on Boosting | |
CN111814956A (en) | Multi-task learning air quality prediction method based on multi-dimensional secondary feature extraction | |
CN106156805A (en) | A kind of classifier training method of sample label missing data | |
CN111831895A (en) | Network public opinion early warning method based on LSTM model | |
Wu et al. | Hot‐Rolled Steel Strip Surface Inspection Based on Transfer Learning Model | |
CN111652264A (en) | Negative migration sample screening method based on maximum mean difference | |
CN110717090A (en) | Network public praise evaluation method and system for scenic spots and electronic equipment | |
CN113743474A (en) | Digital picture classification method and system based on cooperative semi-supervised convolutional neural network | |
CN116415177A (en) | Classifier parameter identification method based on extreme learning machine | |
CN114971032A (en) | Electronic nose online gas concentration prediction method based on OS-ELM | |
CN116911191A (en) | Aeroengine gas circuit system modeling method based on improved PSO optimization BiLSTM | |
CN110515836B (en) | Weighted naive Bayes method for software defect prediction | |
CN106569954A (en) | Method based on KL divergence for predicting multi-source software defects | |
CN116227716A (en) | Multi-factor energy demand prediction method and system based on Stacking | |
Dobrea et al. | Machine Learning algorithms for air pollutants forecasting | |
Gaidhane et al. | An efficient approach for cement strength prediction | |
CN113283467B (en) | Weak supervision picture classification method based on average loss and category-by-category selection | |
CN110648023A (en) | Method for establishing data prediction model based on quadratic exponential smoothing improved GM (1,1) | |
Cui et al. | Prediction of Aeroengine Remaining Useful Life Based on SE-BiLSTM | |
CN114357284A (en) | Crowdsourcing task personalized recommendation method and system based on deep learning | |
CN113035363A (en) | Probability density weighted genetic metabolic disease screening data mixed sampling method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||