CN116304912A - Sensor gas concentration detection method based on deep learning transformer neural network - Google Patents
- Publication number
- CN116304912A (application CN202310311018.2A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- data
- transformer
- model
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a sensor gas concentration detection method based on a deep-learning transformer neural network, belonging to the technical field of intelligent sensor detection. The method comprises: data set preparation and preprocessing; construction of a Transformer neural network model; training of the Transformer neural network model; and estimation of the actual environmental parameter values and gas concentration values with the trained Transformer neural network model. Using a model trained on an existing data set, the algorithm enables a gas sensor to detect the ambient gas concentration rapidly under few-sample conditions. The method has the advantages of high precision, strong generality, good robustness and strong real-time performance; it can overcome the problems of traditional methods and achieve a better gas concentration detection effect.
Description
Technical field
The invention belongs to the technical field of intelligent sensor detection, and in particular relates to a sensor gas concentration detection method based on a deep-learning transformer neural network.
Background art
In a sealed scenario into which a certain amount of gas has been injected, a gas sensor is used to detect the gas concentration in the environment. Gas sensors generally require a preheating step: the gas-sensitive unit of the sensor reacts fully with the gas, chemically or physically, so that its own properties change; a detection circuit is designed around the characteristics of the particular gas-sensitive element, and the physical or chemical signal is finally converted into an electrical signal. From these reaction characteristics it follows that the measurement cycle of a gas sensor is slow, and guaranteeing accuracy is very difficult.
The traditional gas sensor measurement method uses a lower computer to collect the amplitude of the electrical signal emitted by the gas sensor circuit, and takes the stable value or peak value of the amplitude time series as the current ambient gas concentration value. As shown in Figure 1, from the conductance amplitude changes caused by SOF2 and SO2F2 gas concentrations measured with a carbon nanotube gas sensor, it can be seen that irregular peak variations occur throughout the reaction process. That experiment lasted 85 h; the traditional practice of collecting the peak of the time-series amplitude can only reduce errors by extending the experimental period indefinitely, so the process is time-consuming and the experimental results are biased.
Building on the traditional measurement scheme, methods have been proposed that measure the amplitude peak from mathematical features such as time-series derivatives: the actual gas concentration is estimated by analyzing the first-order derivative of the amplitude-time curve. Instead of the original amplitude-peak estimation, the measured rate of change is used to further estimate the actual value. This shortens the experimental period to some extent, but discards the integrity of the time-series curve and cannot accurately estimate curves whose time-series acceleration changes continuously. In addition, during dynamic measurement the gas concentration is easily affected by the historical concentration state. When test gas is injected into a closed volume that already contains gas, the process derivative of the concentration measurement curve is affected in many ways: the same concentration may yield different process derivatives, or similar process derivatives may correspond to different actual concentrations, so this method still has problems and limitations.
Summary of the invention
To overcome the current problems in the field of gas sensor detection, such as long detection cycles, strong historical influence, complex data processing, imprecise experimental results and low detection accuracy, the present invention provides a sensor gas concentration detection method based on a deep-learning transformer neural network. Using a model trained on an existing data set, the algorithm enables a gas sensor to detect the ambient gas concentration rapidly under few-sample conditions.
The present invention is realized through the following technical scheme:
A sensor gas concentration detection method based on a deep-learning transformer neural network, specifically comprising the following steps:
Step 1: data set preparation and preprocessing;
Collect the data of the gas sensor and preprocess it; the preprocessing includes cleaning, denoising and standardization, yielding gas concentration sequence data with a time dimension;
Step 2: construction of the Transformer neural network model;
Split the data set, feed the training set into the embedding layer, tune the hyperparameters of the encoder (Encoder) and decoder (Decoder) modules of the model, evaluate the optimal hyperparameter combination with optimization procedures such as grid search against the mean squared error (MSE) and mean absolute error (MAE) metrics, and use the best MSE and MAE to characterize model performance;
Step 3: training of the Transformer neural network model;
Step 4: use the trained Transformer neural network model to estimate the actual environmental parameter values and gas concentration values.
Further, in Step 1 the data comprise sequence values of gas concentration and time. The sequence values of concentration and time from the sensor are converted into the format used by the Transformer model for the training task, specifically as follows:
A1. Discretize the time series: discretize the continuous time-series data into data at a fixed 10-minute interval;
A2. Standardize the sequence: apply mean normalization to the discretized time series so that the sequences have similar statistical characteristics;
A3. Construct the input sequence: convert the mean-normalized time-series data into input sequences, i.e. feed a segment of data of fixed time length into the Transformer neural network model as one sequence;
A4. Batching and padding: when an input sequence is too short, apply a padding operation to keep the input sequence length consistent.
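Steps A1-A4 can be sketched as a small NumPy pipeline. This is a minimal illustration, not the patent's implementation; the 600-second bin width (10 minutes) comes from A1, while the window length of 18 bins and the function/variable names are assumptions:

```python
import numpy as np

def preprocess(timestamps, values, bin_s=600, window=18):
    """Sketch of steps A1-A4 (window length and names are assumed).

    A1: average raw samples into fixed 10-minute (600 s) bins.
    A2: mean-normalize the binned series.
    A3: slice the series into fixed-length input windows.
    A4: zero-pad the last window if it is too short.
    """
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(values, dtype=float)
    # A1: bin index of every raw sample, then the per-bin mean
    idx = ((t - t.min()) // bin_s).astype(int)
    binned = np.array([v[idx == i].mean() for i in range(idx.max() + 1)])
    # A2: mean normalization (zero mean, scaled by the value range)
    binned = (binned - binned.mean()) / (binned.max() - binned.min())
    # A3 + A4: fixed-length windows, zero-padded at the end
    n_win = int(np.ceil(len(binned) / window))
    padded = np.zeros(n_win * window)
    padded[:len(binned)] = binned
    return padded.reshape(n_win, window)
```

For example, two hours of readings sampled once a minute collapse into 12 ten-minute bins, which fit in one zero-padded window of 18 bins.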
Further, in Step 2 the Encoder-Decoder model of the Transformer neural network together with an embedding layer is used for sequential processing of the data. The embedding layer converts the data collected by the sensor into a vector form that the neural network can process; the Encoder module converts the input sequence into a set of hidden representations; the Decoder module generates the output of the current time step from the hidden representations provided by the Encoder module and the previously generated outputs.
Further, the embedding layer consists of a positional encoder and an input embedding. The positional encoder adds position information to the input data of each time point so that the model can learn the order of the time series; the input embedding converts the input data of each time point into a fixed-dimension vector representation for subsequent processing by the attention mechanism, the encoder and the decoder.
Further, the Encoder module comprises:
a multi-head attention mechanism (Multi-Head Attention), which performs weighted aggregation of the input sequence so that the Encoder module can make better use of the information in the input sequence;
a position-wise feed-forward network (Position-wise Feed-Forward Network), which performs weighted aggregation of the output of the multi-head attention mechanism in order to generate a set of hidden representations.
Further, the Decoder module comprises:
a masked self-attention mechanism (Masked Multi-Head Attention), which computes the relationship between the output of the current time step and the previously generated outputs, and lets the hidden representations provided by the Encoder module interact with the output of the current time step;
a multi-head attention mechanism (Multi-Head Attention), which performs weighted aggregation of the hidden representations provided by the Encoder module so that the Decoder module can make better use of the information in the input sequence;
a position-wise feed-forward network (Position-wise Feed-Forward Network), which performs weighted aggregation of the outputs of the two attention mechanisms above in order to generate the output of the current time step.
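The masking idea behind the Decoder's Masked Multi-Head Attention can be illustrated with a single-head scaled dot-product attention sketch; the function name and shapes are assumptions, and a real module would add learned Q/K/V projections and multiple heads:

```python
import numpy as np

def masked_attention(Q, K, V, causal=False):
    """Minimal single-head scaled dot-product attention sketch.

    With causal=True this behaves like the Decoder's masked attention
    for one head: position i may only attend to positions <= i.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (Lq, Lk) similarity scores
    if causal:
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)  # hide future positions
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)            # softmax over key positions
    return w @ V                                  # weighted aggregation of values
```

With `causal=True`, the first output row equals the first value vector exactly, since position 0 can only attend to itself.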
Further, a Layer Normalization module is arranged between the components of the Encoder module and of the Decoder module, for better signal transmission and to prevent the vanishing-gradient problem during model training.
Further, building the model in Step 2 specifically includes the following:
B1: data set splitting;
Split the sequence data obtained in Step 1: divide the 24-hour data into fixed-length time windows, each 10 to 30 minutes long, and divide the data set in a 70/15/15 ratio, taking the first 70% of the time windows as the training set, the middle 15% as the validation set and the remaining 15% as the test set;
B2: setting the hyperparameter ranges;
First, determine the input sequence length: according to the sampling frequency of the data and the application scenario, select the data points from 0 to 24 hours as one input sequence length, recording the changes of the ambient gas concentration comprehensively. Then determine the batch size and the number of hidden layers: set the batch size to 32, 64 or 128, and the number of hidden layers to 5-6. Finally, set the number of attention heads to 6-8;
B3: grid search;
Use the grid search method to search for the optimal hyperparameter combination within the hyperparameter ranges;
B4: random search;
Use the random search method to search randomly for the optimal hyperparameter combination within the hyperparameter ranges;
B5: Bayesian optimization;
Use the Bayesian optimization method to find the optimal hyperparameter combination within the hyperparameter ranges;
B6: evaluating model performance;
Train the Transformer neural network model with the optimal hyperparameter combinations obtained by each of the above methods, evaluate the optimal combination on the test set using the mean squared error (MSE) and mean absolute error (MAE), and use the MSE and MAE of the best model to characterize model performance.
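The two metrics named in B6 are standard; a minimal sketch of their definitions (function names assumed):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error, used in B6 to rank hyperparameter combinations."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error, the second metric named in B6."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))
```

For instance, `mse([1, 2, 3], [1, 2, 4])` is 1/3, while `mae([1, 2, 3], [2, 2, 5])` is 1.0.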
Further, Step 3 is specifically as follows:
Take the extracted parameter values and their corresponding time values as the analysis features of the transformer neural network, i.e. Qi = [ti, vi]^T, where Qi denotes the data parameters sent by the lower computer at a given moment, and ti and vi are respectively the corresponding time value and gas concentration. Import the training set into the neural network via the embedding; after a series of interleaving, normalization and attention-mechanism training steps, the parameter relationships between the changes of the curve features are obtained, i.e. model training is complete.
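The feature layout Qi = [ti, vi]^T described above can be sketched as follows; the sample values and variable names are purely illustrative:

```python
import numpy as np

# Each column pairs a time stamp with the concentration reading the
# lower computer sent at that moment (values here are made up).
times = np.array([0.0, 600.0, 1200.0, 1800.0])   # ti, seconds
concs = np.array([0.12, 0.15, 0.21, 0.20])       # vi, normalized readings
Q = np.stack([times, concs])                     # shape (2, N): row 0 = t, row 1 = v

# Column i is the feature vector Qi for one time step
Qi = Q[:, 2]                                     # the pair [t2, v2]
```

Each column of `Q` is one feature vector Qi, so the whole matrix is the sequence fed to the embedding.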
Compared with traditional methods, the sensor gas concentration detection method based on a deep-learning transformer neural network of the present invention has the following advantages:
1. Higher precision and accuracy of gas concentration detection: existing gas concentration detection methods rely on processing sensor measurement data and fitting models, and suffer from insufficient model complexity and generalization ability; by training on large amounts of data and optimizing the model, the transformer neural network can better mine the features of the data and improve the precision and accuracy of detection;
2. Concentration detection for multiple gases: existing methods model and process a specific gas and can hardly handle the detection of multiple gases, whereas the transformer neural network method does not depend on a specific physical model, can handle the detection of multiple gases, and has better generality and extensibility;
3. Adaptation to different environments and conditions: existing methods are sensitive to environmental changes, sensor drift and similar problems, and cope poorly with complex environments and conditions; through adaptive learning and optimization, the transformer neural network method adapts better to different environments and conditions and improves the robustness and stability of detection;
4. Online detection and real-time monitoring: existing methods require offline processing and model training and cannot achieve real-time monitoring and online detection; the deep-learning method has a fast training speed and low computational complexity, can achieve real-time monitoring and online detection, and has better practicality and application value.
In summary, the sensor gas concentration detection method based on a deep-learning transformer neural network of the present invention has the advantages of high precision, strong generality, good robustness and strong real-time performance; it can overcome the problems of traditional methods and achieve a better gas concentration detection effect.
Description of the drawings
In order to illustrate the specific embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Throughout the drawings, similar elements or parts are generally identified by similar reference numerals. In the drawings, elements or parts are not necessarily drawn to scale.
Figure 1: response curve of the gas sensor to 1 μL of SO2F2;
Figure 2: overall architecture of the sequence processing based on the transformer model in the present invention;
Figure 3: sensor test time and voltage change curves described in the present invention;
Figure 4: two-dimensional plot of the sequence information in the present invention.
Detailed description
To describe the technical solution of the present invention and its specific working process clearly and completely, the specific embodiments of the present invention are given below in conjunction with the accompanying drawings:
Embodiment 1
This embodiment provides a sensor gas concentration detection method based on a deep-learning transformer neural network, specifically comprising the following steps:
Step 1: data set preparation and preprocessing;
Collect the data of the gas sensor and preprocess it; the preprocessing includes cleaning, denoising and standardization, yielding gas concentration sequence data with a time dimension;
In this embodiment, the data processing specifically includes the following steps:
A. Discretize the time series: discretize the continuous time-series data into data at a fixed 10-minute interval;
B. Standardize the sequence: apply mean normalization to the discretized time series so that the sequences have similar statistical characteristics;
C. Construct the input sequence: convert the mean-normalized time-series data into input sequences, i.e. feed a segment of data of fixed time length into the Transformer model as one sequence;
D. Batching and padding: when an input sequence is too short, apply a padding operation to keep the input sequence length consistent;
Step 2: construction of the Transformer neural network model;
Split the data set, feed the training set into the embedding layer, tune the hyperparameters of the encoder (Encoder) and decoder (Decoder) modules of the model, evaluate the optimal hyperparameter combination with optimization procedures such as grid search against the mean squared error (MSE) and mean absolute error (MAE) metrics, and use the best MSE and MAE to characterize model performance;
For gases with different characteristics, the detection parameters differ considerably. In this embodiment, building the model for nitrogen dioxide gas is taken as an example, with the following specific steps:
① Data splitting: train the Transformer neural network model on the two-dimensional time-series data obtained in Step 1. Divide the 24-hour data into fixed-length time windows, each 10 to 30 minutes long, and divide the data set in a 70/15/15 ratio, taking the first 70% of the time windows as the training set, the middle 15% as the validation set and the remaining 15% as the test set;
② Setting the hyperparameter ranges: first, determine the input sequence length: according to the sampling frequency of the data and the application scenario, select the data points from 0 to 24 hours as one input sequence length, recording the changes of the ambient gas concentration comprehensively. Next, determine the batch size: according to the computing power of the GPU or TPU, and to avoid problems such as insufficient video memory, set the batch size to 32, 64 or 128. Next, determine the number of hidden layers: according to the complexity of the data and the limits of the computing resources, choose the optimal number of hidden layers, here 5-6 layers, to improve the expressiveness of the model. Finally, determine the number of heads: the multi-head self-attention mechanism is one of the key components of the transformer network; to let the model better capture the dependencies between different time steps, set the number of heads to 6-8;
③ Grid search: use the grid search method to search for the optimal hyperparameter combination within the hyperparameter ranges. Grid search is an exhaustive method that traverses all possible parameter combinations and selects the optimal one; it can be implemented with the GridSearchCV utility from Python's scikit-learn;
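The exhaustive traversal behind step ③ can be sketched in plain Python over the ranges from step ② (GridSearchCV itself expects a scikit-learn-compatible estimator, so a hand-rolled loop is shown instead); the grid values and `validation_mse` are illustrative stand-ins for actually training the Transformer and scoring it on the validation windows:

```python
from itertools import product

# Hyperparameter ranges from step (2); the exact grid values are assumptions
grid = {
    "batch_size": [32, 64, 128],
    "num_layers": [5, 6],
    "num_heads": [6, 7, 8],
}

def validation_mse(params):
    """Hypothetical stand-in for: train the Transformer with `params`
    and return its MSE on the validation set."""
    return ((params["batch_size"] / 64 - 1) ** 2
            + (params["num_layers"] - 6) ** 2
            + (params["num_heads"] - 8) ** 2)

# Grid search: visit every combination, keep the one with the lowest MSE
best = min((dict(zip(grid, vals)) for vals in product(*grid.values())),
           key=validation_mse)
print(best)  # {'batch_size': 64, 'num_layers': 6, 'num_heads': 8}
```

Random search (step ④) differs only in sampling a fixed number of combinations instead of enumerating all 18.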
④ Random search: use the random search method to search randomly for the optimal hyperparameter combination within the hyperparameter ranges. Random search selects some parameter combinations at random within the hyperparameter ranges and keeps the best one; it can be implemented with the RandomizedSearchCV utility from Python's scikit-learn;
⑤ Bayesian optimization: use the Bayesian optimization method to find the optimal hyperparameter combination within the hyperparameter ranges. Bayesian optimization searches for the optimal combination by linking a prior distribution over the hyperparameters to the posterior distribution of the model; it can be implemented with the BayesianOptimization library in Python;
⑥ Evaluating model performance: train the model with the optimal hyperparameter combinations obtained by each of the above methods, evaluate the optimal combination on the test set using the mean squared error (MSE) and mean absolute error (MAE), and use the MSE and MAE of the best model to characterize model performance.
When the Transformer neural network is used for time-series processing in a gas sensor, an Encoder-Decoder model is usually used, in which the Encoder module converts the input sequence into a set of hidden representations and the Decoder module generates the output sequence. The overall framework of the scheme is shown in Figure 2: the time-amplitude sequence provided by the lower computer enters the transformer model directly through the embedding module, is processed by the Encoder and Decoder modules, and the gas concentration value is output.
In the embedding module, the embedding layer is mainly used to convert the data collected by the sensor (such as time-series signals) into a vector form that the neural network can process, i.e. the collected data are represented as vectors for the network to process and learn from. Specifically, the embedding layer maps the discrete values input by the host computer of the gas sensor to a continuous vector representation. In the Transformer neural network, the embedding layer usually consists of two parts: a position encoder and an input embedding. The position encoder adds position information to the input data of each time point so that the model can learn the order of the time series; the input embedding converts the input data of each time point into a fixed-dimension vector representation for subsequent processing by the attention mechanism, the encoder and the decoder.
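The patent does not fix the exact position-encoding scheme, so as an assumed illustration the standard sinusoidal encoding of the original Transformer can be sketched; it produces a (seq_len, d_model) matrix that is added to the input embedding so each time step carries its position:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position encoder sketch (an assumption, not the patent's):
    PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    """
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even feature dimensions
    pe[:, 1::2] = np.cos(angles)               # odd feature dimensions
    return pe
```

The output of the input embedding and this matrix are simply summed element-wise before entering the Encoder.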
The Encoder module converts the input sequence into a set of hidden representations so that the Decoder module can better exploit the information in the input sequence. In the gas sensor application, the Encoder module converts the gas concentration at the current time step into hidden representations for use by the Decoder module. The Encoder module usually consists of the following components:
①Multi-Head Attention: the first component of the Encoder module is the multi-head attention mechanism, which performs weighted aggregation over the input sequence so that the Encoder module can better exploit its information.
②Position-wise Feed-Forward Network: the second component of the Encoder module is a feed-forward neural network, which further transforms the output of the multi-head attention mechanism to generate the set of hidden representations.
③Layer Normalization: a Layer Normalization module is added between the components of the Encoder module to stabilize signal propagation and prevent the vanishing-gradient problem during model training.
The signal flow between the components of the Encoder module is as follows:
Input sequence → Multi-Head Attention → Position-wise Feed-Forward Network → hidden representation sequence
The Multi-Head Attention component of the Encoder module extracts information from the input sequence and generates a set of hidden representations for the Decoder module. The Position-wise Feed-Forward Network component further processes this information and produces the final hidden representation sequence, which the Decoder module then uses to generate the output sequence.
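The encoder flow described above — self-attention followed by a position-wise feed-forward network, with layer normalization around each sub-layer — can be sketched as follows. This is a minimal single-head numpy illustration under assumed shapes, not the patented implementation; all weight matrices are taken as given inputs rather than learned here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for stability
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-6):
    """Normalize each position's feature vector to zero mean, unit scale."""
    mu = x.mean(-1, keepdims=True)
    sigma = x.std(-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence x."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

def encoder_layer(x, Wq, Wk, Wv, W1, W2):
    """Input sequence -> self-attention -> position-wise FFN -> hidden
    representations, with a residual connection and layer normalization
    around each sub-layer."""
    x = layer_norm(x + self_attention(x, Wq, Wk, Wv))
    ffn = np.maximum(0.0, x @ W1) @ W2   # ReLU feed-forward network
    return layer_norm(x + ffn)
```

The output has the same shape as the input, so several such layers can be stacked to deepen the encoder.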
The Decoder module generates the output of the current time step from the hidden representations provided by the Encoder module and the previously generated outputs. In the gas sensor application, the Decoder module predicts the gas concentration at the next time step. The Decoder module usually consists of the following components:
①Masked Multi-Head Attention: the first component of the Decoder module is a masked self-attention mechanism, which relates the output at the current time step to the previously generated outputs and lets the hidden representations provided by the Encoder module interact with the current time step's output;
②Multi-Head Attention: the second component of the Decoder module is a multi-head attention mechanism, which performs weighted aggregation over the hidden representations provided by the Encoder module so that the Decoder module can better exploit the information in the input sequence;
③Position-wise Feed-Forward Network: the third component of the Decoder module is a feed-forward neural network, which combines the outputs of the two attention mechanisms above to generate the output of the current time step;
④Layer Normalization: a Layer Normalization module is added between the components of the Decoder module to stabilize signal propagation and prevent the vanishing-gradient problem during model training.
The signal flow between the components of the Decoder module is as follows:
Input sequence → Masked Multi-Head Attention → Multi-Head Attention → Position-wise Feed-Forward Network → output sequence
The Masked Multi-Head Attention and Multi-Head Attention components of the Decoder module extract information from the hidden representations provided by the Encoder module and combine it with the output of the current time step to generate the current output. The Position-wise Feed-Forward Network component further processes this information and produces the final output, and the output sequence can then be used to predict the gas concentration at the next time step.
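The "masked" part of the decoder's first component can be illustrated with a short sketch: a lower-triangular mask restricts each position to attend only to itself and earlier positions, which is what prevents the decoder from seeing future concentration values. The mask shape and the -inf convention below follow the standard Transformer recipe and are an assumption, not text taken from the patent.

```python
import numpy as np

def causal_mask(seq_len):
    """Lower-triangular boolean mask: position i may attend only to
    positions j <= i, so future time steps are hidden from the decoder."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def masked_attention_scores(scores, mask):
    """Set disallowed (future) positions to -inf so that, after the
    softmax, they receive exactly zero attention weight."""
    return np.where(mask, scores, -np.inf)
```

Applying the mask before the softmax (rather than after) keeps each row of attention weights properly normalized over the allowed positions.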
Step 3: training of the Transformer neural network model;
The extracted parameter values and their corresponding time values are selected as the analysis features of the Transformer neural network, as shown in Figures 3 and 4, i.e. Qi = [ti, vi]^T, where Qi denotes the data parameters sent by the lower computer at a given moment, and ti and vi are the corresponding time value and gas concentration, respectively. The characteristic parameters of the time-series curve are taken as the training set and fed into the neural network through the embedding module; after a series of stacked, normalized, attention-mechanism training steps, the parameter relationships between the curve-feature changes are obtained, at which point model training is complete;
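Assembling the feature sequence Qi = [ti, vi]^T from raw readings can be sketched with a small helper. The array layout of one row per time step is an assumption about how the data would be batched for the embedding module; the patent only defines the per-sample feature vector.

```python
import numpy as np

def build_feature_sequence(times, values):
    """Stack each (time, concentration) pair into Qi = [ti, vi]^T and
    return the whole training sequence as an array of shape (n, 2)."""
    return np.stack([np.asarray(times), np.asarray(values)], axis=1)
```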
Step 4: use the trained Transformer neural network model to estimate the actual environmental parameter values and gas concentration values.
Embodiment 2
A practical example of a gas sensor using the Transformer model to detect nitrogen dioxide concentration:
First, nitrogen dioxide gas sample data are collected: the gas sensor is placed in the monitoring area and concentration data for standard concentrations of nitrogen dioxide are acquired, the data comprising sequences of nitrogen dioxide concentration and time values;
Next, the collected data are preprocessed: the data are cleaned, denoised, and normalized; a moving-average filter is used to smooth the data, and the least-squares method is used to remove noise.
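The preprocessing steps named above — moving-average smoothing, least-squares denoising, and normalization — can be sketched as follows. The window size, polynomial degree, and the choice of min-max scaling are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

def moving_average(x, window=5):
    """Smooth a raw sensor trace with a simple sliding-window mean."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def least_squares_detrend(t, x, degree=2):
    """Fit a low-order polynomial baseline by least squares and subtract
    it, removing slow drift/noise from the trace."""
    coeffs = np.polyfit(t, x, degree)
    return x - np.polyval(coeffs, t)

def min_max_normalize(x):
    """Scale readings into [0, 1] so inputs are on a comparable range."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)
```

In a full pipeline these would be applied in sequence to each recorded trace before it is handed to the embedding module.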
Then, the processed data are fed into the Transformer model for training. During training, the model learns how to associate the input gas concentration data with the time series and establishes the model's weight parameters;
After training is complete, the model is deployed in the actual system; when the nitrogen dioxide concentration needs to be detected, the sensor only needs to transmit a small amount of collected time-series data to the Transformer model for inference;
Finally, the nitrogen dioxide concentration is rapidly assessed from the model's output; if the concentration exceeds a safety threshold, the system can trigger an alarm or take other necessary actions.
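The final assessment step can be sketched as follows: if the model was trained on min-max normalized targets, its output must first be mapped back to a physical concentration before comparison with the threshold. Both the de-normalization step and any concrete threshold value are illustrative assumptions, not figures from the patent.

```python
def denormalize(y_norm, c_min, c_max):
    """Invert min-max normalization to recover a concentration value."""
    return y_norm * (c_max - c_min) + c_min

def alarm_triggered(concentration, threshold):
    """True when the estimated NO2 concentration exceeds the configured
    safety threshold, signalling the system to raise an alert."""
    return concentration > threshold
```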
The preferred embodiments of the present invention have been described in detail above with reference to the accompanying drawings; however, the present invention is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present invention, various simple modifications can be made to the technical solution of the present invention, and these simple modifications all fall within the protection scope of the present invention.
It should further be noted that the specific technical features described in the above embodiments may, where not contradictory, be combined in any suitable manner. To avoid unnecessary repetition, the various possible combinations are not described separately in the present invention.
In addition, the various embodiments of the present invention may also be combined arbitrarily; as long as such combinations do not depart from the idea of the present invention, they should likewise be regarded as content disclosed by the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310311018.2A CN116304912A (en) | 2023-03-28 | 2023-03-28 | Sensor gas concentration detection method based on deep learning transducer neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116304912A true CN116304912A (en) | 2023-06-23 |
Family
ID=86828639
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116304912A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116559681A (en) * | 2023-07-12 | 2023-08-08 | 安徽国麒科技有限公司 | Retired battery capacity prediction method and device based on deep learning time sequence algorithm |
CN117091799A (en) * | 2023-10-17 | 2023-11-21 | 湖南一特医疗股份有限公司 | Intelligent three-dimensional monitoring method and system for oxygen supply safety of medical center |
CN117768207A (en) * | 2023-12-24 | 2024-03-26 | 中国人民解放军61660部队 | Network flow unsupervised anomaly detection method based on improved transducer reconstruction model |
CN118098443A (en) * | 2024-04-29 | 2024-05-28 | 四川希尔得科技有限公司 | Online upgrading system and method for infrared gas sensor |
CN118538315A (en) * | 2024-07-25 | 2024-08-23 | 自然资源部第一海洋研究所 | Deep learning-based ocean subsurface chlorophyll a concentration prediction method |
CN118706932A (en) * | 2024-08-28 | 2024-09-27 | 成都益清源科技有限公司 | VOCs/TVOC detection method based on photoionization sensor |
CN120064583A (en) * | 2025-04-28 | 2025-05-30 | 国网(西安)环保技术中心有限公司 | High voltage cable fault diagnosis method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||