CN111103325A - Electronic nose signal drift compensation method based on integrated neural network learning - Google Patents

Electronic nose signal drift compensation method based on integrated neural network learning

Info

Publication number
CN111103325A
CN111103325A
Authority
CN
China
Prior art keywords
classifier
neural network
electronic nose
integrated
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911316354.6A
Other languages
Chinese (zh)
Other versions
CN111103325B (en)
Inventor
章伟
冯李航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yideguan Electronic Technology Co ltd
Original Assignee
Nanjing Yideguan Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yideguan Electronic Technology Co ltd filed Critical Nanjing Yideguan Electronic Technology Co ltd
Priority to CN201911316354.6A priority Critical patent/CN111103325B/en
Publication of CN111103325A publication Critical patent/CN111103325A/en
Application granted granted Critical
Publication of CN111103325B publication Critical patent/CN111103325B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N27/00 Investigating or analysing materials by the use of electric, electrochemical, or magnetic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Abstract

The invention discloses an electronic nose signal drift compensation method based on integrated neural network learning, which comprises the following steps: 1. extracting the data features of the electronic nose in each time period, marking labels, and organizing the data set; 2. training on the data set of each time period to obtain the respective shallow neural network classifiers; 3. defining a weighted combination of all basic classifiers as the classifier for a future period, and solving the weight of each basic classifier; 4. outputting the parameters and weights of all basic classifiers, and constructing the integrated classifier of step 3 from these parameters and weights. The integrated classifier model established by the method contains the drift or heterogeneity characteristics of the historical data sets, so drift errors in a future time period can be compensated automatically. The integrated classifier technique uses the electronic nose's original weak neural network models and needs no complex or deep learning and training; it balances computational timeliness against classifier accuracy, has low hardware requirements, and has strong practical applicability.

Description

Electronic nose signal drift compensation method based on integrated neural network learning
Technical Field
The invention relates to the field of electronic nose signal and information processing, in particular to an electronic nose signal drift compensation method based on integrated neural network learning.
Background
The electronic nose is an intelligent device that mimics the biological olfactory system; it can identify simple or complex odors from signal features such as the response pattern of a gas sensor array, and is widely applied in fields such as environment, food, and medical care. In theory, the electronic nose's response to the same gas at the same concentration under the same measurement conditions should always be the same. In practice, however, the sensors of the electronic nose continuously age, degrade, and are poisoned as their usage time increases, so the response signals gradually deviate from their original values; this drift reduces the identification accuracy of the electronic nose and can even make it unreliable.
Existing electronic nose signal drift suppression or compensation techniques fall into three categories: component correction methods, adjustment compensation methods, and machine learning methods. Component correction methods find the signal drift direction through a spatial mapping transformation of the response data and remove that component, principal component analysis being the typical representative; however, such compensation assumes that the drift of all data is stable and consistent, which differs from the actual situation. Adjustment compensation methods apply differentiated adjustments according to the signal characteristics of the electronic nose sensors at different stages; however, they easily misjudge a rapidly changing transient response as sensor drift and disturb the electronic nose's original matching mode, so that originally accurate measurements can no longer be identified correctly after compensation. Machine learning methods neither model nor explicitly describe the signal drift problem, but adjust directly by means of a classifier trained on a large number of samples; such methods therefore overcome the shortcomings of signal correction and adjustment compensation, have wider adaptability, and have attracted increasing attention in recent years.
In the prior art, ZL201110340596.6 and ZL201110340338.8 disclose, respectively, an electronic nose drift suppression method and an online drift compensation method based on multiple self-organizing neural networks, which are learning methods that adaptively change neural network parameters and structures by automatically searching for internal rules in the samples; ZL201610245615.X discloses a subspace-projection-based adaptive learning method for electronic nose signal error; ZL201610218450.7 and ZL201610216768.1 disclose electronic nose gas identification methods based on source-domain and target-domain transfer extreme learning, respectively, which can compensate for and suppress heterogeneous or drifting electronic nose data. However, these methods still have many disadvantages in practical electronic nose applications: 1) these learners are trained only by building a mathematical relationship model between the "undrifted data set (or source domain)" and the "drifted data set (or target domain)", or by building a correlation model after mapping the two data sets; 2) in practice these learners compensate well in the current period but progressively worse as time goes on, usually need to be retrained at intervals, and have weak time adaptivity and long-term stability;
3) the models trained by these technologies are shallow machine learning methods operating on limited samples, such as support vector machines, self-organizing neural networks, and extreme learning machines, and the performance or compensation effect of the resulting classifiers is not satisfactory. ZL201610120715.X also discloses an electronic nose drift compensation method based on deep belief network feature extraction, a network architecture and method for large-sample deep learning, but the computational complexity, time, and hardware requirements of its learning and training are high. A machine learning method that combines computational timeliness with high performance is therefore needed.
Disclosure of Invention
To solve the above problems, the present invention provides an electronic nose signal drift compensation method based on integrated neural network learning (ENNL), which integrates the currently and previously trained weak neural network classifiers by weighted combination and boosts them into a new "strong classifier"; this classifier contains the drift or heterogeneous data information among all the basic classifiers and can automatically compensate drift errors over a future period of time. The method uses the electronic nose's original weak neural network models and needs no complex or deep learning and training, thereby balancing computational timeliness against classifier accuracy, with low hardware requirements and strong practical applicability.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
an electronic nose signal drift compensation method based on integrated neural network learning comprises the following steps:
step 1, preprocessing the electronic nose data in each period, extracting the features of the data set as input, and recording the label corresponding to each sample; the complete data set for the current period t can be represented as

S_t = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)}    (1)

where (x_i, y_i) is the i-th sample pair of the current period-t data set, i ∈ [1, n], and n is the total number of samples; the feature matrix and labels of the electronic nose sensors can then be written as X_t = {x_1, x_2, ..., x_n} and Y_t = {y_1, y_2, ..., y_n}, where each label is the kind of gas or analyte to be detected;
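As an illustration of step 1, the assembly of S_t into a feature matrix X_t and 0/1-encoded labels Y_t can be sketched as follows (a minimal sketch, not taken from the patent; the helper name and toy values are assumptions):

```python
import numpy as np

def make_period_dataset(features, labels, n_classes):
    """Stack sample pairs (x_i, y_i) into X_t (n x d) and 0/1-encoded Y_t (n x k).

    Hypothetical helper: the patent only requires a one-dimensional feature
    vector per sample and labels encoded in 0/1 form.
    """
    X_t = np.asarray(features, dtype=float)       # one row per sample x_i
    Y_t = np.zeros((len(labels), n_classes))      # 0/1 label encoding
    Y_t[np.arange(len(labels)), labels] = 1.0
    return X_t, Y_t

# e.g. two samples labelled ethylene (class 0) and acetone (class 2)
X_t, Y_t = make_period_dataset([[0.2, 1.1, 0.5], [0.4, 0.9, 0.7]], [0, 2], n_classes=3)
```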
step 2, a shallow neural network is used to train and learn on the data set S_t of each period t, obtaining the respective basic classifiers N_t(x); these neural network classifier models can be written as

Net = [N_1(x), N_2(x), ..., N_j(x), ..., N_t(x)]    (2)

where N_j is the j-th classifier, j ∈ [1, t], and Net represents the set of these classifier models;
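Step 2 can be sketched with scikit-learn as a stand-in for the patent's shallow networks (the patent itself trains in a Matlab environment); one 20-hidden-unit MLP is trained per period, and the toy drift data is purely illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def train_base_classifier(X_t, y_t):
    """Train one shallow 'weak' classifier N_t(x): a single 20-unit hidden layer."""
    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300, random_state=0)
    return net.fit(X_t, y_t)

# Toy data whose class boundary shifts a little each period, mimicking drift;
# Net collects the per-period classifiers as in equation (2).
Net = []
for t in range(3):
    X = rng.normal(loc=0.1 * t, scale=1.0, size=(120, 4))
    y = (X[:, 0] + X[:, 1] > 0.2 * t).astype(int)
    Net.append(train_base_classifier(X, y))
```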
step 3, the classifier models N_j are combined by weighting, whereby the model solution can be transformed into a numerical solution problem, i.e. minimizing the error

E(β) = Σ_{i=1}^{n} ( Σ_{j=1}^{t} β_j N_j(x_i) − y_i )²    (3)

where β_j is the weight corresponding to each classifier, and

β* = arg min_β E(β)

expresses the numerical solution of finding the optimal weights β_j that minimize the error of equation (3);
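The numerical weight solve of step 3 can be sketched as ordinary least squares over the base classifiers' outputs on the current data (a closed-form stand-in for the gradient iteration the patent prefers; all names are illustrative):

```python
import numpy as np

def solve_weights(base_outputs, y):
    """Least-squares solve of min_beta || sum_j beta_j N_j(x_i) - y_i ||^2.

    base_outputs[j][i] holds N_j(x_i) on the current data; the patent
    proposes a gradient iteration, used here only as a closed-form stand-in.
    """
    P = np.column_stack(base_outputs)        # (n, t): one column per base classifier
    beta, *_ = np.linalg.lstsq(P, y, rcond=None)
    return beta

# Two toy 'classifiers': the second reproduces the labels exactly,
# so nearly all weight should land on it.
y = np.array([0.0, 1.0, 1.0, 0.0])
outputs = [np.array([0.1, 0.8, 0.7, 0.3]), y.copy()]
beta = solve_weights(outputs, y)
```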
step 4, the parameters {N_1, N_2, ..., N_t} of the neural network classifiers and their weight vector (β_1, β_2, ..., β_t) are output, and the current and previous basic classifiers are combined by weighting to obtain an integrated classifier, defined as the classifier for the future period t+1, namely

f_{t+1}(x) = Σ_{j=1}^{t} β_j N_j(x)    (4)

The integrated classifier retains the drift or heterogeneity information of the current and previous data sets and can automatically compensate drift errors in the future period t+1. The integrated classifier updates automatically as the data set updates, by means of a condition judgment: when the new acquisition period t+1 is completed, judge whether the number of samples in the newly acquired data set S_{t+1} meets the requirement; if so, automatically train a new basic neural network N_{t+1} from S_{t+1}, update Net to [N_1(x), N_2(x), ..., N_t(x), N_{t+1}(x)], and train a new ensemble learner f_{t+2}(x) at the same time; if not, judge again whether the length of period t+1 is less than or equal to that of period t: if so, the sample distribution in data set S_{t+1} is considered consistent with the previous batch S_t, the weights β_j are used directly as the weights for the current batch S_{t+1}, and a new integrated classifier f_{t+2}(x) is trained; if not, a prompt is issued that the amount of sample data needs to be increased.
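The condition judgment at the end of step 4 can be sketched as a small decision function (names and example thresholds are illustrative assumptions, apart from the 400-sample figure taken from the preprocessing requirement):

```python
def decide_update(n_new_samples, min_samples, new_interval, base_interval):
    """Condition judgment for a newly completed acquisition period t+1.

    Illustrative sketch of the update rule described above:
    enough samples -> train a new base net and re-integrate;
    short interval -> reuse the previous weights beta_j directly;
    otherwise      -> ask for more sample data.
    """
    if n_new_samples >= min_samples:
        return "train_new_base_classifier"
    if new_interval <= base_interval:
        return "reuse_previous_weights"
    return "collect_more_samples"

# e.g. a full month's batch of 450 samples (>= the 400-sample requirement)
action = decide_update(n_new_samples=450, min_samples=400,
                       new_interval=30, base_interval=30)
```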
As an improvement, the data preprocessing in step 1 comprises noise reduction and normalization of the raw signals measured by the electronic nose sensors; the raw signals contain the steady-state and transient response characteristics of the sensors, the preprocessed signal feature values take the form of a one-dimensional vector, each sample-collection period t is one month, the number of samples n is not less than 400, and the labels are encoded in 0/1 form.
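A minimal sketch of this preprocessing, assuming a moving-average filter for noise reduction and min-max scaling for normalization (the patent does not fix the particular algorithms):

```python
import numpy as np

def preprocess(raw, window=5):
    """Moving-average noise reduction, then min-max normalization to [0, 1].

    Illustrative filters only: the patent requires noise reduction and
    normalization but does not specify which methods to use.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(raw, kernel, mode="same")   # simple denoising
    lo, hi = smooth.min(), smooth.max()
    return (smooth - lo) / (hi - lo)                 # scale to [0, 1]

rng = np.random.default_rng(1)
raw = np.sin(np.linspace(0.0, 3.0, 200)) + 0.05 * rng.normal(size=200)
signal = preprocess(raw)
```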
As an improvement, the shallow neural network in step 2 adopts a conventional feed-forward multilayer perceptron or back-propagation neural network as the basic classifier; the shallow neural network has the typical three-layer structure of input layer, hidden layer, and output layer, and the number of hidden-layer units is 20.
As an improvement, the weight optimization problem in step 3 is solved by using a gradient iterative algorithm.
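The gradient iteration named here can be sketched as plain gradient descent on the squared error of equation (3) (an illustrative sketch; the step size and iteration count are assumptions, not patent values):

```python
import numpy as np

def solve_weights_gd(P, y, lr=0.1, iters=5000):
    """Gradient iteration for min_beta ||P beta - y||^2 (illustrative sketch).

    P is the (n, t) matrix of base-classifier outputs on the current data;
    lr and iters are assumed hyperparameters, not values from the patent.
    """
    beta = np.zeros(P.shape[1])
    for _ in range(iters):
        grad = 2.0 * P.T @ (P @ beta - y) / len(y)   # gradient of mean squared error
        beta -= lr * grad
    return beta

# Toy setup: the second column equals y exactly,
# so the iteration should converge towards beta ~ (0, 1).
P = np.array([[0.1, 0.0], [0.8, 1.0], [0.7, 1.0], [0.3, 0.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
beta = solve_weights_gd(P, y)
```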
As an improvement, the number of integrated basic classifiers in step 4 should be not less than 5, to ensure that sufficient classifier drift information and characteristics are retained.
Advantageous effects:
the invention provides an electronic nose signal drift compensation method based on integrated neural network learning, which integrates a current and previously trained basic classifier of an electronic nose into a new classifier by adopting a weighting mode and is used for gas identification and prediction updating in a future period of time; the integrated learning promotes the original 'weak classifier' into 'strong classifier', and the 'strong classifier' contains the drift information among the 'weak classifiers', thereby realizing drift compensation. Compared with the prior art, the technology of the invention adopts a weak learning model under the original limited sample of the electronic nose, does not need more complex or deep network learning and training, gives consideration to the calculation timeliness and the classifier precision through weighted integration, and has low requirement on hardware.
In addition, the integrated-learning-based compensation method also lends itself to collecting electronic nose samples online and compensating drift in real time, and it does not need to assume that the sensors' own drift is consistent, i.e. the performance requirement on the basic classifiers is lower; the method therefore has wider adaptability and is easier to convert into practical gas detection applications.
Drawings
FIG. 1 is a flow chart of an integrated neural network learning-based electronic nose signal drift compensation method according to the present invention;
FIG. 2 is an algorithm model of the electronic nose signal drift compensation method of the present invention;
FIG. 3 is a schematic diagram of an example of a three-layer neural network based on Matlab environment according to the present invention;
FIG. 4 is an example of a gas sensitive response curve for a sensor employing an electronic nose in accordance with the present invention;
FIG. 5 is a comparative example of electronic nose dataset test results according to the present invention.
Detailed Description
The method of the present invention is described and illustrated in detail below with reference to specific examples. The content explains the invention and does not limit its scope of protection.
Example 1
As shown in fig. 1, an electronic nose signal drift compensation method based on integrated neural network learning (ENNL) includes the following steps:
step 1, preprocessing the electronic nose data in each period, extracting the features of the data set as input, and recording the label corresponding to each sample; the complete data set for the current period t can be represented as

S_t = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)}    (1)

where (x_i, y_i) is the i-th sample pair of the current period-t data set, i ∈ [1, n], and n is the total number of samples. The feature matrix and labels of the electronic nose sensors can then be written as X_t = {x_1, x_2, ..., x_n} and Y_t = {y_1, y_2, ..., y_n}, where each label is the kind of gas or analyte to be detected;
step 2, a shallow neural network is used to train and learn on the data set S_t of each period t, obtaining the respective basic classifiers N_t(x); these neural network classifier models can be written as

Net = [N_1(x), N_2(x), ..., N_j(x), ..., N_t(x)]    (2)

where N_j is the j-th classifier, j ∈ [1, t], and Net represents the set of these classifier models;
step 3, the classifier models N_j are combined by weighting, which transforms the model solution into a numerical solution problem (the algorithm framework is shown in figure 2), i.e. minimizing the error

E(β) = Σ_{i=1}^{n} ( Σ_{j=1}^{t} β_j N_j(x_i) − y_i )²    (3)

where β_j is the weight corresponding to each classifier, and

β* = arg min_β E(β)

expresses the numerical solution of finding the optimal weights β_j that minimize the error of equation (3);
step 4, the parameters {N_1, N_2, ..., N_t} of the neural network classifiers and their weight vector (β_1, β_2, ..., β_t) are output, and the current and previous basic classifiers are combined by weighting to obtain an integrated classifier, defined as the classifier for the future period t+1, namely

f_{t+1}(x) = Σ_{j=1}^{t} β_j N_j(x)    (4)

The integrated classifier retains the drift or heterogeneity information of the current and previous data sets and can automatically compensate drift errors in the future period t+1. The integrated classifier updates automatically as the data set updates, by means of a condition judgment: when the new acquisition period t+1 is completed, judge whether the number of samples in the newly acquired data set S_{t+1} meets the requirement; if so, automatically train a new basic neural network N_{t+1} from S_{t+1}, update Net to

Net = [N_1(x), N_2(x), ..., N_t(x), N_{t+1}(x)]

and train a new ensemble learner f_{t+2}(x) at the same time; if not, judge again whether the length of period t+1 is less than or equal to that of period t: if so, the sample distribution in data set S_{t+1} is considered consistent with the previous batch S_t, the weights β_j are used directly as the weights for the current batch S_{t+1}, and a new integrated classifier f_{t+2}(x) is trained; if not, a prompt is issued that the amount of sample data needs to be increased.
The data preprocessing in step 1 comprises noise reduction and normalization of the raw signals measured by the electronic nose sensors; the raw signals contain the steady-state and transient response characteristics of the sensors, the preprocessed signal feature values take the form of a one-dimensional vector, each sample-collection period t is one month, the number of samples n is not less than 400, and the labels are encoded in 0/1 form.
Example 2
Based on embodiment 1, the electronic nose adopted in this embodiment is an array of four gas sensors, TGS-series sensors from Figaro Inc.: TGS2600, TGS2602, TGS2610, and TGS2620. The response signal of a single TGS sensor is shown in fig. 4 and includes the transient adsorption, steady-state peak, and transient drop intervals of each sensor response in the array. Using the moving-window function method of ZL201510252261.7 for feature extraction, the steady-state feature ΔR, the transient adsorption-rise feature U, and the transient desorption-drop feature D can be obtained, so a single measurement is recorded as x_i = [ΔR_1, U_1, D_1; ΔR_2, U_2, D_2; ...; ΔR_4, U_4, D_4]; the tested analytes ethylene, ethanol, and acetone are given the category labels y_i of (0, 0, 1), (0, 1, 0), and (1, 0, 0), respectively.
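Feature extraction of ΔR, U, and D from one response curve can be sketched as below; this is a simplified stand-in based on the descriptions above, not the moving-window method of ZL201510252261.7 itself, and the synthetic curve is an assumption:

```python
import numpy as np

def extract_features(response):
    """Extract (dR, U, D) from one sensor response curve.

    dR: steady-state change (peak minus baseline);
    U : steepest rise slope during adsorption;
    D : steepest fall slope during desorption (negative).
    """
    baseline = response[0]
    dR = response.max() - baseline        # steady-state peak feature
    slopes = np.diff(response)
    U = slopes.max()                      # transient adsorption rise
    D = slopes.min()                      # transient desorption drop
    return dR, U, D

# Synthetic rise-then-decay response of a single gas sensor
t = np.linspace(0.0, 10.0, 500)
curve = np.where(t < 4.0, 1.0 - np.exp(-2.0 * t), np.exp(-1.5 * (t - 4.0)))
dR, U, D = extract_features(curve)
```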
The shallow neural network in step 2 may adopt a conventional feed-forward multilayer perceptron (MLP) or a back-propagation neural network (BPNN) as the basic classifier; the network has the typical three-layer structure of input layer, hidden layer, and output layer, with 20 hidden-layer units.
Example 3
On the basis of embodiment 2, a BPNN is used to learn a classifier on the data set S_t of a single period t, with 80% of the data used for training and the remaining 20% for testing. Testing shows that a basic classifier N_t(x) achieves a classification accuracy above 80% in the current period t (e.g. t = 1), but its test accuracy on data sets of later periods, e.g. t = 2, drops sharply; this indicates that each trained basic neural network is a "weak classifier" that becomes biased when facing data of some future period.
Example 4
On the basis of embodiment 3, the invention trains the basic classifiers with the machine learning toolbox in a Matlab R2018 environment. Fig. 3 shows the three-layer structure of a basic classifier, where w and b denote the weights and biases of the trained neural network; the initial iteration limit is set to 500, the classifier model is created with the patternnet command, training uses the trainscg function (scaled conjugate gradient back-propagation), and the performance of the trained classifier is evaluated with a cross-entropy function.
The number of integrated basic classifiers in step 4 should be not less than 5, to ensure sufficient classifier drift information and characteristics, and the weight optimization problem is finally solved by a gradient iterative algorithm. In a preferred embodiment of the present invention, formula (3) is implemented in the Matlab R2018 environment or with the Optimization Tool, so the optimal weights can be obtained quickly; integrating ten basic classifiers gives good performance.
The integrated classifier constructed in step 4 updates automatically as the data set updates, by means of a condition judgment: when the new acquisition period t+1 is completed, judge whether the number of samples in the newly acquired data set S_{t+1} meets the requirement; if so, automatically train a new basic neural network N_{t+1} from S_{t+1}, update Net to [N_1(x), N_2(x), ..., N_t(x), N_{t+1}(x)], and train a new ensemble learner f_{t+2}(x) at the same time; if not, judge again whether the length of period t+1 is less than or equal to that of period t: if so, the sample distribution in data set S_{t+1} is considered consistent with the previous batch S_t, the weights β_j are used directly as the weights for the current batch S_{t+1}, and a new integrated classifier f_{t+2}(x) is trained; if not, a prompt is issued that the amount of sample data needs to be increased.
Example 5
On the basis of embodiment 4, to obtain a better drift compensation effect, the judgment "whether the number of samples in the newly acquired data set S_{t+1} meets the requirement" should fail no more than twice, and the total sample collection or measurement time should be less than the service life of the electronic nose's internal sensors, e.g. set to half of it. Similarly, the condition judgment of the integrated classifier in step 4 can be implemented in the Matlab R2018 environment; the portable hardware platform used for the whole ENNL network algorithm has an Intel(R) Core(TM) i7-7700 CPU at 3.60 GHz with 16.0 GB of RAM, which meets the training requirements.
Example 6
Using the method of example 5, this example selects part of the data from the gas sensor array drift database in the UCI Machine Learning Repository [http://archive.ics.uci.edu/ml/datasets/Gas+Sensor+Array+Drift+Dataset] for test validation; the database took 3 years to collect 13910 samples covering 6 analytes: acetone, ethanol, acetaldehyde, ethylene, ammonia, and toluene. On this database, four test methods given in the literature [Vergara A, Vembu S, Ayhan T, et al. Chemical gas sensor drift compensation using classifier ensembles. Sensors and Actuators B: Chemical, 2012, 166: 320-329] were comparatively analyzed: Test 1 tests the current month with a classifier trained on the previous month's data; Test 2 trains an integrated neural network classifier on all previous months' data to test the current month; Test 3 is similar to Test 2 but trains the classifier with equal weights; Test 4 is similar to Test 1 but adds component correction based on principal component analysis; the neural network trained on the first time batch serves as the reference classifier, marked Reference. As shown in fig. 5, Test 2 is the result of the ENNL method of the present invention; it can be observed that as the time batches increase, the integrated neural network consistently maintains higher classifier accuracy and outperforms the other methods.
The above description is one embodiment of the present invention and is not intended to limit the present invention. All equivalents which come within the spirit of the invention are therefore intended to be embraced therein. Details not described herein are well within the skill of those in the art.

Claims (5)

1. An electronic nose signal drift compensation method based on integrated neural network learning is characterized by comprising the following steps:
step 1, preprocessing the electronic nose data in each period, extracting the features of the data set as input, and recording the label corresponding to each sample; the complete data set for the current period t can be represented as

S_t = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n)}    (1)

where (x_i, y_i) is the i-th sample pair of the current period-t data set, i ∈ [1, n], and n is the total number of samples; the feature matrix and labels of the electronic nose sensors can then be written as X_t = {x_1, x_2, ..., x_n} and Y_t = {y_1, y_2, ..., y_n}, where each label is the kind of gas or analyte to be detected;
step 2, a shallow neural network is used to train and learn on the data set S_t of each period t, obtaining the respective basic classifiers N_t(x); these neural network classifier models can be written as

Net = [N_1(x), N_2(x), ..., N_j(x), ..., N_t(x)]    (2)

where N_j is the j-th classifier, j ∈ [1, t], and Net represents the set of these classifier models;
step 3, the classifier models N_j are combined by weighting, whereby the model solution can be transformed into a numerical solution problem, i.e. minimizing the error

E(β) = Σ_{i=1}^{n} ( Σ_{j=1}^{t} β_j N_j(x_i) − y_i )²    (3)

where β_j is the weight corresponding to each classifier, and

β* = arg min_β E(β)

expresses the numerical solution of finding the optimal weights β_j that minimize the error of equation (3);
step 4, the parameters {N_1, N_2, ..., N_t} of the neural network classifiers and their weight vector (β_1, β_2, ..., β_t) are output, and the current and previous basic classifiers are combined by weighting to obtain an integrated classifier, defined as the classifier for the future period t+1, namely

f_{t+1}(x) = Σ_{j=1}^{t} β_j N_j(x)    (4)

The integrated classifier retains the drift or heterogeneity information of the current and previous data sets and can automatically compensate drift errors in the future period t+1. The integrated classifier updates automatically as the data set updates, by means of a condition judgment: when the new acquisition period t+1 is completed, judge whether the number of samples in the newly acquired data set S_{t+1} meets the requirement; if so, automatically train a new basic neural network N_{t+1} from S_{t+1}, update Net to [N_1(x), N_2(x), ..., N_t(x), N_{t+1}(x)], and train a new ensemble learner f_{t+2}(x) at the same time; if not, judge again whether the length of period t+1 is less than or equal to that of period t: if so, the sample distribution in data set S_{t+1} is considered consistent with the previous batch S_t, the weights β_j are used directly as the weights for the current batch S_{t+1}, and a new integrated classifier f_{t+2}(x) is trained; if not, a prompt is issued that the amount of sample data needs to be increased.
2. The electronic nose signal drift compensation method based on integrated neural network learning according to claim 1, wherein the data preprocessing in step 1 comprises noise reduction and normalization of the raw signals measured by the electronic nose sensors, the raw signals contain the steady-state and transient response characteristics of the sensors, the preprocessed signal feature values take the form of a one-dimensional vector, each sample-collection period t is one month, the number of samples n is not less than 400, and the labels are encoded in 0/1 form.
3. The electronic nose signal drift compensation method based on integrated neural network learning according to claim 1, wherein the shallow neural network in step 2 adopts a conventional feed-forward multilayer perceptron or back-propagation neural network as the basic classifier, the shallow neural network has the typical three-layer structure of input layer, hidden layer, and output layer, and the number of hidden-layer units is 20.
4. The integrated neural network learning-based electronic nose signal drift compensation method according to claim 1, wherein the weight optimization problem in step 3 is solved by using a gradient iterative algorithm.
5. The integrated neural network learning-based electronic nose signal drift compensation method according to claim 1, wherein the number of the integrated basic classifiers in the step 4 is not less than 5.
CN201911316354.6A 2019-12-19 2019-12-19 Electronic nose signal drift compensation method based on integrated neural network learning Active CN111103325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911316354.6A CN111103325B (en) 2019-12-19 2019-12-19 Electronic nose signal drift compensation method based on integrated neural network learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911316354.6A CN111103325B (en) 2019-12-19 2019-12-19 Electronic nose signal drift compensation method based on integrated neural network learning

Publications (2)

Publication Number Publication Date
CN111103325A true CN111103325A (en) 2020-05-05
CN111103325B CN111103325B (en) 2022-03-29

Family

ID=70422189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911316354.6A Active CN111103325B (en) 2019-12-19 2019-12-19 Electronic nose signal drift compensation method based on integrated neural network learning

Country Status (1)

Country Link
CN (1) CN111103325B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101158588A (en) * 2007-11-16 2008-04-09 北京航空航天大学 MEMS gyroscope error compensation method for micro-satellites based on an integrated neural network
CN102507676A (en) * 2011-11-01 2012-06-20 重庆大学 On-line drift compensation method of electronic nose based on multiple self-organizing neural networks
CN103499345A (en) * 2013-10-15 2014-01-08 北京航空航天大学 Fiber-optic gyro temperature drift compensation method based on wavelet analysis and BP (back propagation) neural network
CN105823801A (en) * 2016-03-03 2016-08-03 重庆大学 Deep belief network characteristic extraction-based electronic nose drift compensation method
CN105891422A (en) * 2016-04-08 2016-08-24 重庆大学 Electronic nose gas identification method based on source domain migration extreme learning to realize drift compensation
CN109521454A (en) * 2018-12-06 2019-03-26 中北大学 GPS/INS integrated navigation method based on self-learning cubature Kalman filtering
US20190147357A1 (en) * 2017-11-16 2019-05-16 Red Hat, Inc. Automatic detection of learning model drift

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QINGFENG WANG et al.: "Time Series Prediction of E-nose Sensor Drift Based on Deep Recurrent Neural Network", Proceedings of the 38th Chinese Control Conference *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112098605A (en) * 2020-09-21 2020-12-18 哈尔滨工业大学 High-robustness chemical sensor array soft measurement method
CN112433028A (en) * 2020-11-09 2021-03-02 西南大学 Electronic nose gas classification method based on memristive cellular neural network
CN112418395A (en) * 2020-11-17 2021-02-26 吉林大学 Gas sensor array drift compensation method based on generative adversarial network
CN112418395B (en) * 2020-11-17 2022-08-26 吉林大学 Gas sensor array drift compensation method based on generative adversarial network
CN112927763A (en) * 2021-03-05 2021-06-08 广东工业大学 Prediction method for odor descriptor rating based on electronic nose
CN112927763B (en) * 2021-03-05 2023-04-07 广东工业大学 Prediction method for odor descriptor rating based on electronic nose
CN113361194A (en) * 2021-06-04 2021-09-07 安徽农业大学 Sensor drift calibration method based on deep learning, electronic equipment and storage medium
CN115015472A (en) * 2022-02-25 2022-09-06 重庆邮电大学 Extreme learning machine sensor drift data reconstruction method based on domain adaptation
CN116718648A (en) * 2023-08-11 2023-09-08 合肥中科国探智能科技有限公司 Method for detecting and identifying thermal runaway gas of battery and alarm device thereof
CN116718648B (en) * 2023-08-11 2023-11-10 合肥中科国探智能科技有限公司 Method for detecting and identifying thermal runaway gas of battery and alarm device thereof

Also Published As

Publication number Publication date
CN111103325B (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN111103325B (en) Electronic nose signal drift compensation method based on integrated neural network learning
Yan et al. Calibration transfer and drift compensation of e-noses via coupled task learning
Garofalo et al. Evaluation of the performance of information theory-based methods and cross-correlation to estimate the functional connectivity in cortical networks
US8731839B2 (en) Method and system for robust classification strategy for cancer detection from mass spectrometry data
Martinelli et al. An adaptive classification model based on the Artificial Immune System for chemical sensor drift mitigation
CN111340132B (en) Machine olfaction pattern recognition method based on DA-SVM
WO1999027466A2 (en) System and method for intelligent quality control of a process
CN110880369A (en) Gas marker detection method based on radial basis function neural network and application
CN109143408B (en) Dynamic region combined short-time rainfall forecasting method based on MLP
JP2022525427A (en) Automatic boundary detection in mass spectrometry data
CN111683587A (en) Method, device, learning strategy and system for deep learning based on artificial neural network for analyte analysis
CN103714261A (en) Intelligent auxiliary medical decision support method based on a two-stage hybrid model
CN113837000A (en) Small sample fault diagnosis method based on task sequencing meta-learning
CN109450573A (en) Spectrum sensing method based on deep neural network
Cheng et al. A concentration-based drift calibration transfer learning method for gas sensor array data
Yang et al. Probabilistic characterisation of model error using Gaussian mixture model—With application to Charpy impact energy prediction for alloy steel
CN116416884A (en) Testing device and testing method for display module
CN112580539A (en) Long-term drift suppression method for electronic nose signals based on PSVM-LSTM
CN116933084A (en) Pollutant emission prediction method and device
CN114860922B (en) Method for obtaining classification model of psychological assessment scale, screening method and system
CN107229944B (en) Semi-supervised active identification method based on cognitive information particles
CN114998731A (en) Intelligent terminal navigation scene perception identification method
CN108108758A (en) Multilayer incremental feature extraction method for industrial big data
CN111160419B (en) Deep learning-based electronic transformer data classification prediction method and device
Foldager et al. On the role of model uncertainties in Bayesian optimisation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant