CN116304912A - Sensor gas concentration detection method based on deep learning Transformer neural network - Google Patents

Sensor gas concentration detection method based on deep learning Transformer neural network

Info

Publication number
CN116304912A
Authority
CN
China
Prior art keywords
neural network
transformer
data
sequence
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310311018.2A
Other languages
Chinese (zh)
Inventor
胡小龙
岳文强
卢革宇
马忠嘉
郭帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202310311018.2A priority Critical patent/CN116304912A/en
Publication of CN116304912A publication Critical patent/CN116304912A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Investigating Or Analyzing Materials By The Use Of Electric Means (AREA)

Abstract

The invention discloses a sensor gas concentration detection method based on a deep learning Transformer neural network, which belongs to the technical field of intelligent sensor detection and comprises: preparing and preprocessing a data set; constructing a Transformer neural network model; training the Transformer neural network model; and estimating the actual environment parameter values and the gas concentration value with the trained Transformer neural network model. Using a model trained on an existing data set, the method achieves rapid detection of the ambient gas concentration with only a few samples; it offers high precision, strong universality, good robustness and strong real-time performance, overcomes the problems of traditional methods, and achieves a better gas concentration detection result.

Description

Sensor gas concentration detection method based on deep learning Transformer neural network
Technical Field
The invention belongs to the technical field of intelligent sensor detection, and particularly relates to a sensor gas concentration detection method based on a deep learning Transformer neural network.
Background
A gas sensor is used to detect the gas concentration in a sealed environment into which a certain amount of gas has been injected. A gas sensor generally requires a preheating step; its gas-sensitive unit changes its own properties through a sufficient chemical or physical reaction with the gas, and a detection circuit designed around the characteristics of the gas-sensitive element finally converts the physical or chemical signal into an electrical signal. These reaction characteristics mean that the measurement cycle of a gas sensor is slow and that guaranteeing accuracy is very difficult.
The traditional way of taking a gas sensor measurement is to use a lower computer to collect the amplitude of the electrical signal produced by the gas sensor circuit, and to take a stable value or a peak value in the amplitude time series as the current ambient gas concentration value. As shown in FIG. 1, when the change in conductance amplitude of a carbon nanotube gas sensor caused by SOF₂ and SO₂F₂ gas concentrations is measured, the peak value can be seen to vary irregularly throughout the reaction process. The experiment lasted 85 hours; with the traditional method of collecting the peak of the time-series amplitude, the error can be reduced only by extending the experimental period indefinitely, which is time-consuming and biases the experimental results.
A method has been proposed on top of the traditional measurement scheme that estimates the amplitude peak from mathematical features such as the time-series derivative: the first-derivative values of the amplitude-time curve are analyzed to estimate the actual gas concentration. Instead of the original amplitude-peak estimation, the measured rate of change is used to estimate the actual value. This shortens the experimental period to some extent, but it discards the integrity of the time-series curve and cannot accurately estimate a continuous curve of the time-series acceleration. In addition, during dynamic measurement the measured concentration of a gas species is highly susceptible to the historical concentration state: when test gas is injected into a closed chamber that already contains gas, the process derivative of the concentration measurement curve is affected in several ways, so the same concentration may produce different process derivatives, or similar process derivatives may correspond to mismatched actual concentrations. The method therefore still has certain problems and limitations.
Disclosure of Invention
In order to solve the problems of long detection period, strong influence of measurement history, complex data processing, inaccurate experimental results and low detection precision in current gas sensor detection, the invention provides a sensor gas concentration detection method based on a deep learning Transformer neural network.
The invention is realized by the following technical scheme:
a sensor gas concentration detection method based on a deep learning transducer neural network specifically comprises the following steps:
step one: preparing and preprocessing a data set;
collecting data of a gas sensor, and preprocessing the data, wherein the preprocessing comprises cleaning, denoising and standardization, and the preprocessing is performed to obtain gas concentration sequence data with a time dimension;
step two: constructing a transducer neural network model;
cutting the data set, introducing the data set into an embedding layer (embedding), adjusting super parameters of an Encoder (Encoder) and a Decoder (Decoder) module in the model, evaluating optimal super parameter combinations by using optimization functions such as grid search and the like according to Mean Square Error (MSE) and Mean Absolute Error (MAE) indexes, and characterizing the performance of the model by using the optimal MSE and MAE;
step three: training a transducer neural network model;
step four: and estimating the actual environment parameter value and the gas concentration value by using the trained transducer neural network model.
Further, in the first step, the data comprise a sequence of gas concentration values and time values, and the sequence of concentration and time values from the sensor is converted into a format suitable for the Transformer model so that the model training task can be carried out; this specifically includes the following contents (a minimal preprocessing sketch is given after this list):
A1, discretizing the time series: resampling the continuous time-series data onto a fixed grid with a time interval of 10 min;
A2, sequence standardization: applying mean normalization to the discretized time series so that different series have similar statistical characteristics;
A3, constructing input sequences: converting the mean-normalized time-series data into input sequences, i.e. taking a segment of data of fixed time length as one sequence and feeding it into the Transformer neural network model;
A4, batching and padding: when an input sequence is shorter than the required length, a padding operation is applied to keep all input sequence lengths consistent.
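By way of illustration, the following Python sketch shows one way steps A1-A4 could be carried out with pandas and NumPy. It assumes the raw readings sit in a DataFrame with a datetime index and a 'concentration' column; the 144-step window (24 hours at 10-minute intervals) is an assumed default rather than a value fixed by the invention.

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, freq: str = "10min", window_len: int = 144) -> np.ndarray:
    """Steps A1-A4: discretize, normalize, window and pad a raw concentration series.
    `df` is assumed to have a datetime index and a 'concentration' column."""
    # A1: discretize the continuous series onto a fixed 10-minute grid
    series = df["concentration"].resample(freq).mean().interpolate()

    # A2: mean normalization so different runs share similar statistics
    series = (series - series.mean()) / series.std()

    # A3: cut the series into fixed-length input sequences (one window = one model input)
    values = series.to_numpy()
    windows = [values[i:i + window_len] for i in range(0, len(values), window_len)]

    # A4: pad the last (possibly shorter) window so all sequences have equal length
    padded = [np.pad(w, (0, window_len - len(w))) for w in windows]
    return np.stack(padded)
```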
Further, in the second step, the data are processed as a time series by the Encoder-Decoder model of the Transformer neural network together with an embedding layer; the embedding layer converts the data acquired by the sensor into a vector form that the neural network can process; the Encoder module converts an input sequence into a set of hidden representations; and the Decoder module generates the output of the current time step from the hidden representations provided by the Encoder module and the previously generated outputs.
Further, the embedding layer consists of a position encoder and an input embedding; the position encoder adds position information to the input data at each time point so that the model can learn the order of the time series, and the input embedding converts the input data at each time point into a vector representation of fixed dimension for subsequent processing by the attention mechanism, encoder and decoder.
Further, the Encoder module includes:
a Multi-Head Attention mechanism, used for weighted aggregation of the input sequence so that the Encoder module can better utilize the information of the input sequence;
a feed-forward neural network (Position-wise Feed-Forward Network), used for weighted aggregation of the outputs of the multi-head attention mechanism to generate a set of hidden representations.
Further, the Decoder module includes:
a masked self-attention mechanism (Masked Multi-Head Attention), used to compute the relationship between the output of the current time step and the previously generated outputs, and to let the hidden representations provided by the Encoder module interact with the output of the current time step;
a Multi-Head Attention mechanism, used for weighted aggregation of the hidden representations provided by the Encoder module so that the Decoder module can better utilize the information of the input sequence;
a feed-forward neural network (Position-wise Feed-Forward Network), used for weighted aggregation of the outputs of the two attention mechanisms to generate the output of the current time step.
Furthermore, Layer Normalization modules are arranged between the components of the Encoder module and of the Decoder module for better signal transmission and to prevent the vanishing-gradient problem during model training.
Further, the model construction in the second step specifically includes the following contents:
B1: splitting the data set;
performing data segmentation on the sequence data obtained in step one, cutting the 24 hours of data into time windows of fixed length, each time window being 10-30 min long; and dividing the data set in a 70/15/15 ratio, with the first 70% of the time windows as the training set, the middle 15% as the validation set and the remaining 15% as the test set;
B2: setting the hyperparameter ranges;
first, determining the input sequence length: according to the sampling frequency of the data and the application scenario, the data points from 0 to 24 hours are selected as one input sequence length, so that the change of the ambient gas concentration is recorded in full; then, determining the batch size and the number of hidden layers, with the batch size set to 32, 64 or 128 and the number of hidden layers set to 5-6; finally, determining the number of attention heads to be 6-8;
B3: grid search;
searching for the optimal hyperparameter combination within the hyperparameter ranges by the grid search method;
B4: random search;
randomly searching for the optimal hyperparameter combination within the hyperparameter ranges by the random search method;
B5: Bayesian optimization;
searching for the optimal hyperparameter combination within the hyperparameter ranges by the Bayesian optimization method;
B6: evaluating the model performance;
training Transformer neural network models with the optimal hyperparameter combinations obtained by the above methods, evaluating them on the test set with the mean square error (MSE) and mean absolute error (MAE), and characterizing the model performance with the MSE and MAE of the best-parameter model.
Further, the third step is specifically as follows:
selecting the extracted parameter values and their corresponding time values as the analysis feature quantity of the Transformer neural network, namely Q_i = [t_i, v_i]^T, where Q_i represents the data parameters sent by the lower computer at a given moment and t_i, v_i are the corresponding time and gas concentration values; the training set is fed into the neural network through the embedding layer and goes through a series of embedding, normalization and attention-mechanism training steps to obtain the parameter relationship between the curve feature changes, at which point the model training is complete.
Compared with traditional methods, the sensor gas concentration detection method based on the deep learning Transformer neural network has the following advantages:
1. Improved precision and accuracy of gas concentration detection: existing gas concentration detection methods rely on processing of sensor measurement data and model fitting, and suffer from problems such as insufficient model complexity and insufficient generalization capability; through training on a large amount of data and optimization of the model, the Transformer neural network can better mine data characteristics and improve detection precision and accuracy;
2. Ability to handle concentration detection of multiple gases: existing gas concentration detection methods model and process a specific gas and therefore have difficulty handling the detection of multiple gases; the Transformer neural network method does not depend on a specific physical model, can handle the detection of multiple gases, and has better universality and extensibility;
3. Ability to adapt to different environments and conditions: existing gas concentration detection methods are sensitive to environmental changes, sensor drift and similar problems and have difficulty coping with complex environments and conditions; through adaptive learning and optimization, the Transformer neural network method can better adapt to different environments and conditions, improving the robustness and stability of detection;
4. Online detection and real-time monitoring: existing gas concentration detection methods require offline processing and model training and cannot achieve real-time monitoring and online detection; the deep learning method has a higher training speed and lower computational complexity, can achieve real-time monitoring and online detection, and has better practicality and application value.
In conclusion, the sensor gas concentration detection method based on the deep learning Transformer neural network has the advantages of high precision, strong universality, good robustness and strong real-time performance, can overcome the problems of traditional methods, and achieves a better gas concentration detection result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
Fig. 1: a response curve of the gas sensor to 1 μL/L SO₂F₂;
fig. 2: in the invention, the whole structure diagram for carrying out sequence processing based on a transducer model is provided;
fig. 3: the invention describes a sensor test time and voltage change curve graph;
fig. 4: two-dimensional map of sequence information in the present invention.
Detailed Description
For a clear and complete description of the technical scheme and the specific working process thereof, the following specific embodiments of the invention are provided with reference to the accompanying drawings in the specification:
example 1
The sensor gas concentration detection method based on the deep learning Transformer neural network provided by this embodiment specifically comprises the following steps:
Step one: preparing and preprocessing a data set;
collecting data from a gas sensor and preprocessing the data, the preprocessing comprising cleaning, denoising and standardization, so as to obtain gas concentration sequence data with a time dimension;
In this embodiment, the data processing specifically includes the following steps:
A. discretizing the time series: resampling the continuous time-series data onto a fixed grid with a time interval of 10 min;
B. sequence standardization: applying mean normalization to the discretized time series so that different series have similar statistical characteristics;
C. constructing input sequences: converting the mean-normalized time-series data into input sequences, i.e. feeding a segment of data of fixed time length into the Transformer model as one sequence;
D. batching and padding: when an input sequence is shorter than the required length, a padding operation is applied to keep all input sequence lengths consistent;
Step two: constructing a Transformer neural network model;
splitting the data set, feeding it into an embedding layer, adjusting the hyperparameters of the Encoder and Decoder modules in the model, evaluating the optimal hyperparameter combination with optimization methods such as grid search against the mean square error (MSE) and mean absolute error (MAE) metrics, and characterizing the model performance with the optimal MSE and MAE;
for gases with different characteristics, the detection parameters of the gases have large differences, and in the embodiment, the model construction of the nitrogen dioxide gas is taken as an example, and the specific steps are as follows:
(1) Training a Transformer neural network model on the two-dimensional time-series data obtained in step one: the data are segmented, cutting the 24 hours of data into time windows of fixed length, each window being 10-30 min long; the data set is divided in a 70/15/15 ratio, with the first 70% of the time windows as the training set, the middle 15% as the validation set and the remaining 15% as the test set;
(2) setting the hyperparameter ranges: first, the input sequence length is determined; according to the sampling frequency of the data and the application scenario, the data points from 0 to 24 hours are selected as one input sequence length so that the change of the ambient gas concentration is recorded in full; then the batch size is determined and set to 32, 64 or 128 according to the computing capability of the GPU or TPU, to avoid problems such as running out of memory; next, the number of hidden layers is determined according to the data complexity and the limits of the computing resources, and 5-6 hidden layers are selected to improve the expressive capacity of the model; finally, the number of attention heads is determined; the multi-head self-attention mechanism is one of the key components of a Transformer network, and the number of heads is set to 6-8 so that the model can better capture the dependencies between different time steps;
(3) grid search: the optimal hyperparameter combination is searched for within the hyperparameter ranges by the grid search method; grid search is an exhaustive method that traverses all possible parameter combinations and selects the best one, and it can be implemented with the GridSearchCV class in Python's scikit-learn library;
(4) random search: the optimal hyperparameter combination is searched for randomly within the hyperparameter ranges by the random search method, which randomly samples parameter combinations within the ranges and selects the best one; it can be implemented with RandomizedSearchCV in scikit-learn;
(5) Bayesian optimization: the optimal hyperparameter combination is searched for within the hyperparameter ranges by the Bayesian optimization method, which relates a prior distribution over the hyperparameters to the posterior distribution of the model; it can be implemented with a Bayesian-optimization library in Python;
(6) evaluating the model performance: models are trained with the optimal hyperparameter combinations obtained by the above methods, evaluated on the test set with the mean square error (MSE) and mean absolute error (MAE), and the model performance is characterized by the MSE and MAE of the best-parameter model (a simplified split-and-search sketch is given below).
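The following sketch outlines the chronological split and a plain exhaustive search over the hyperparameter ranges listed above, scored by validation MSE and finally characterized by test-set MSE and MAE. The build_and_train helper is a hypothetical stand-in for the Transformer training routine; scikit-learn's GridSearchCV/RandomizedSearchCV or a Bayesian-optimization library could replace the explicit loop.

```python
import itertools
import numpy as np

def split_dataset(windows, targets):
    """(1) Chronological 70/15/15 split into training, validation and test sets."""
    n = len(windows)
    i_tr, i_va = int(0.70 * n), int(0.85 * n)
    return ((windows[:i_tr], targets[:i_tr]),
            (windows[i_tr:i_va], targets[i_tr:i_va]),
            (windows[i_va:], targets[i_va:]))

# (2) hyperparameter ranges taken from the description
param_grid = {"batch_size": [32, 64, 128],
              "num_layers": [5, 6],
              "num_heads": [6, 8]}

def grid_search(train, val, build_and_train):
    """(3) Exhaustive search over all combinations, scored by validation MSE.
    `build_and_train` is a hypothetical helper that trains a Transformer with the
    given hyperparameters and returns its predictions on the validation windows."""
    best_params, best_mse = None, float("inf")
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        val_pred = build_and_train(train, val, **params)
        mse = float(np.mean((val_pred - val[1]) ** 2))
        if mse < best_mse:
            best_params, best_mse = params, mse
    return best_params, best_mse

def evaluate(test_pred, test_targets):
    """(6) Characterize the final model by test-set MSE and MAE."""
    err = np.asarray(test_pred) - np.asarray(test_targets)
    return float(np.mean(err ** 2)), float(np.mean(np.abs(err)))
```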
When using a Transformer neural network for time-series processing in a gas sensor, an Encoder-Decoder model is typically used, in which the Encoder module is responsible for converting the input sequence into a set of hidden representations and the Decoder module is responsible for generating the output sequence. The overall framework of the scheme is shown in FIG. 2: the time-amplitude sequence provided by the lower computer enters the Transformer model directly through the embedding module and is processed by the Encoder and Decoder modules to output the gas concentration value.
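A compact PyTorch sketch of this Encoder-Decoder pipeline, built from the standard nn.Transformer module, is given below. The embedding dimension, layer count and head count are placeholders chosen from the ranges discussed above, and the learned positional embedding is an assumption; this is an illustrative sketch, not the exact architecture claimed by the invention.

```python
import torch
import torch.nn as nn

class GasTransformer(nn.Module):
    """Embedding -> Encoder -> Decoder -> gas concentration estimate (sketch)."""
    def __init__(self, d_model=64, nhead=8, num_layers=6, max_len=512):
        super().__init__()
        self.input_embed = nn.Linear(2, d_model)          # one (time, value) pair per step
        self.pos_embed = nn.Embedding(max_len, d_model)   # learned positional encoding (assumption)
        self.transformer = nn.Transformer(d_model=d_model, nhead=nhead,
                                          num_encoder_layers=num_layers,
                                          num_decoder_layers=num_layers,
                                          batch_first=True)
        self.head = nn.Linear(d_model, 1)                 # hidden representation -> concentration

    def _embed(self, x):
        pos = torch.arange(x.size(1), device=x.device)
        return self.input_embed(x) + self.pos_embed(pos)

    def forward(self, src, tgt):
        # causal mask so the Decoder only attends to previously generated steps
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        hidden = self.transformer(self._embed(src), self._embed(tgt), tgt_mask=tgt_mask)
        return self.head(hidden)

# e.g. a batch of 8 sequences of 144 (time, value) pairs, seeded with one decoder step
model = GasTransformer()
out = model(torch.randn(8, 144, 2), torch.randn(8, 1, 2))   # out: (8, 1, 1)
```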
The embedding module is mainly used to convert the data acquired by the sensor (such as time-series signals) into a vector form that the neural network can process and learn from. Specifically, the embedding layer maps the discrete values sent by the gas sensor's upper computer into a continuous vector representation. In a Transformer neural network, the embedding layer usually consists of two parts: a position encoder and an input embedding. The position encoder adds position information to the input data at each time point so that the model can learn the order of the time series; the input embedding converts the input data at each time point into a vector representation of fixed dimension for subsequent processing by the attention mechanism, the encoder and the decoder.
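The sketch below illustrates such an embedding module with a sinusoidal position encoder (an alternative to the learned positional embedding in the previous sketch) and a learned input embedding. The sinusoidal form follows the standard Transformer formulation; the patent only states that position information is added, so the exact encoding used here is an assumption.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Position encoder: adds sinusoidal position information to each time step."""
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                       # x: (batch, seq_len, d_model)
        return x + self.pe[: x.size(1)]

# Input embedding: maps each discrete (time, concentration) reading to a d_model vector
d_model = 64
input_embedding = nn.Linear(2, d_model)
position_encoder = PositionalEncoding(d_model)

readings = torch.randn(8, 144, 2)                         # batch of raw (time, value) pairs
embedded = position_encoder(input_embedding(readings))    # ready for the Encoder/Decoder
```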
The Encoder module converts the input sequence into a set of hidden representations so that the Decoder module can better utilize the information of the input sequence. In a gas sensor application, the Encoder module can be used to convert the gas concentration at the current time step into a set of hidden representations for use by the Decoder module. An Encoder module typically consists of the following components:
(1) Multi-Head Attention: the first component of the Encoder module is a multi-head attention mechanism, used for weighted aggregation of the input sequence so that the Encoder module can better utilize the information of the input sequence.
(2) Position-wise Feed-Forward Network: the second component of the Encoder module is a feed-forward neural network, used for weighted aggregation of the outputs of the above multi-head attention mechanism to generate a set of hidden representations.
(3) Layer Normalization: a Layer Normalization module is added between the components of the Encoder module for better signal transmission and to prevent the vanishing-gradient problem during model training.
The signal transmission relationships between the components in the Encoder module are as follows:
input sequence → Multi-Head Attention → Position-wise Feed-Forward Network → hidden representation sequence
The Multi-Head Attention component in the Encoder module extracts information from the input sequence and generates a set of hidden representations for use by the Decoder module. The Position-wise Feed-Forward Network component further processes this information and generates the final hidden representation sequence. Finally, the hidden representation sequence can be used by the Decoder module to generate the output sequence.
The Decoder module generates the output of the current time step from the hidden representations provided by the Encoder module and the previously generated outputs. In gas sensor applications, the Decoder module is used to predict the gas concentration at the next time step. A Decoder module typically consists of the following components:
(1) Masked Multi-Head Attention: the first component of the Decoder module is a masked self-attention mechanism, used to compute the relationship between the output of the current time step and the previously generated outputs, and to let the hidden representations provided by the Encoder module interact with the output of the current time step;
(2) Multi-Head Attention: the second component of the Decoder module is a multi-head attention mechanism, used for weighted aggregation of the hidden representations provided by the Encoder module so that the Decoder module can better utilize the information of the input sequence;
(3) Position-wise Feed-Forward Network: the third component of the Decoder module is a feed-forward neural network, used for weighted aggregation of the outputs of the two attention mechanisms to generate the output of the current time step;
(4) Layer Normalization: a Layer Normalization module is added between the components of the Decoder module for better signal transmission and to prevent the vanishing-gradient problem during model training.
The signal transmission relationships between the components in the Decoder module are as follows:
input sequence → Masked Multi-Head Attention → Multi-Head Attention → Position-wise Feed-Forward Network → output sequence
The Masked Multi-Head Attention and Multi-Head Attention components in the Decoder module extract information from the hidden representations provided by the Encoder module and interact with the output of the current time step to generate the output of the current time step. The Position-wise Feed-Forward Network component further processes this information and generates the final output. Finally, the output sequence can be used to predict the gas concentration at the next time step.
Step three: training the Transformer neural network model;
selecting the extracted parameter value and its corresponding time value as analysis characteristic quantity of transducer neural network, as shown in figure 3 and figure 4, namely Q i =[t i ,v i ] T ,Q i Representing data parameters, t, sent by a lower computer at a certain moment i 、v i The concentration and time value of the corresponding gas; acquiring characteristic parameters of a time sequence curve as a training set, introducing the characteristic parameters into a nerve network in an ebedding mode, and performing a series of training treatments of interpenetration, normalization and attention mechanisms to obtain a parameter relationship between curve characteristic changes, namely a modelTraining is completed;
step four: and estimating the actual environment parameter value and the gas concentration value by using the trained transducer neural network model.
Example 2
A practical example of using the Transformer model to detect the concentration of nitrogen dioxide gas with a gas sensor:
First, nitrogen dioxide gas sample data are collected: the gas sensor is placed in the monitoring area and nitrogen dioxide concentration data at standard concentrations are collected, the data comprising sequence values of the nitrogen dioxide concentration and the corresponding time values;
Next, the collected data are preprocessed by cleaning, denoising and standardization; the data are smoothed with a moving-average method and noise is removed with a least-squares method.
The processed data are then input into the Transformer model for training. During training, the model learns how to relate the input gas concentration data to the time series and establishes the weight parameters of the model;
after training is completed, deploying the model into an actual system; when the concentration of the nitrogen dioxide gas is required to be detected, the sensor only needs to transmit a small amount of acquired time sequence data of the sensor to a transducer model for reasoning;
finally, rapidly evaluating the concentration of the nitrogen dioxide gas according to the output result of the model; if the concentration exceeds the safety threshold, the system may trigger an alarm or take other necessary action.
The preferred embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the specific details of the above embodiments, and various simple modifications can be made to the technical solution of the present invention within the scope of the technical concept of the present invention, and all the simple modifications belong to the protection scope of the present invention.
In addition, the specific features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, various possible combinations are not described further.
Moreover, any combination of the various embodiments of the invention can be made without departing from the spirit of the invention, which should also be considered as disclosed herein.

Claims (9)

1. A sensor gas concentration detection method based on a deep learning Transformer neural network, characterized by comprising the following steps:
step one: preparing and preprocessing a data set;
collecting data from a gas sensor and preprocessing the data, the preprocessing comprising cleaning, denoising and standardization, so as to obtain gas concentration sequence data with a time dimension;
step two: constructing a Transformer neural network model;
splitting the data set, feeding the training set into the embedding layer, adjusting the hyperparameters of the Encoder and Decoder modules in the model, evaluating the optimal hyperparameter combination with optimization methods such as grid search against the mean square error and mean absolute error metrics, and characterizing the model performance with the optimal MSE and MAE;
step three: training the Transformer neural network model;
step four: estimating the actual environment parameter values and the gas concentration value with the trained Transformer neural network model.
2. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 1, wherein in the first step the data comprise a sequence of gas concentration values and time values, and the sequence of concentration and time values from the sensor is converted into a format suitable for the Transformer model so that the model training task can be carried out, specifically including the following contents:
A1, discretizing the time series: resampling the continuous time-series data onto a fixed grid with a time interval of 10 min;
A2, sequence standardization: applying mean normalization to the discretized time series so that different series have similar statistical characteristics;
A3, constructing input sequences: converting the mean-normalized time-series data into input sequences, i.e. taking a segment of data of fixed time length as one sequence and feeding it into the Transformer neural network model;
A4, batching and padding: when an input sequence is shorter than the required length, a padding operation is applied to keep all input sequence lengths consistent.
3. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 1, wherein in the second step the data are processed as a time series by the Encoder-Decoder model of the Transformer neural network together with an embedding layer; the embedding layer converts the data acquired by the sensor into a vector form that the neural network can process; the Encoder module converts an input sequence into a set of hidden representations; and the Decoder module generates the output of the current time step from the hidden representations provided by the Encoder module and the previously generated outputs.
4. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 3, wherein the embedding layer consists of a position encoder and an input embedding; the position encoder adds position information to the input data at each time point so that the model can learn the order of the time series, and the input embedding converts the input data at each time point into a vector representation of fixed dimension for subsequent processing by the attention mechanism, encoder and decoder.
5. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 3, wherein the Encoder module comprises:
a Multi-Head Attention mechanism, used for weighted aggregation of the input sequence so that the Encoder module can better utilize the information of the input sequence; and
a feed-forward neural network (Position-wise Feed-Forward Network), used for weighted aggregation of the outputs of the multi-head attention mechanism to generate a set of hidden representations.
6. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 3, wherein the Decoder module comprises:
a masked self-attention mechanism (Masked Multi-Head Attention), used to compute the relationship between the output of the current time step and the previously generated outputs, and to let the hidden representations provided by the Encoder module interact with the output of the current time step;
a Multi-Head Attention mechanism, used for weighted aggregation of the hidden representations provided by the Encoder module so that the Decoder module can better utilize the information of the input sequence; and
a feed-forward neural network (Position-wise Feed-Forward Network), used for weighted aggregation of the outputs of the two attention mechanisms to generate the output of the current time step.
7. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 3, wherein Layer Normalization modules are arranged between the components of the Encoder module and of the Decoder module for better signal transmission and to prevent the vanishing-gradient problem during model training.
8. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 1, wherein the model construction in the second step specifically comprises the following contents:
B1: splitting the data set;
performing data segmentation on the sequence data obtained in step one, cutting the 24 hours of data into time windows of fixed length, each time window being 10-30 min long; and dividing the data set in a 70/15/15 ratio, with the first 70% of the time windows as the training set, the middle 15% as the validation set and the remaining 15% as the test set;
B2: setting the hyperparameter ranges;
first, determining the input sequence length: according to the sampling frequency of the data and the application scenario, the data points from 0 to 24 hours are selected as one input sequence length, so that the change of the ambient gas concentration is recorded in full; then, determining the batch size and the number of hidden layers, with the batch size set to 32, 64 or 128 and the number of hidden layers set to 5-6; finally, determining the number of attention heads to be 6-8;
B3: grid search;
searching for the optimal hyperparameter combination within the hyperparameter ranges by the grid search method;
B4: random search;
randomly searching for the optimal hyperparameter combination within the hyperparameter ranges by the random search method;
B5: Bayesian optimization;
searching for the optimal hyperparameter combination within the hyperparameter ranges by the Bayesian optimization method;
B6: evaluating the model performance;
training Transformer neural network models with the optimal hyperparameter combinations obtained by the above methods, evaluating them on the test set with the mean square error and mean absolute error, and characterizing the model performance with the MSE and MAE of the best-parameter model.
9. The sensor gas concentration detection method based on a deep learning Transformer neural network according to claim 1, wherein the third step is specifically as follows:
selecting the extracted parameter values and their corresponding time values as the analysis feature quantity of the Transformer neural network, namely Q_i = [t_i, v_i]^T, where Q_i represents the data parameters sent by the lower computer at a given moment and t_i, v_i are the corresponding time and gas concentration values; the training set is fed into the neural network through the embedding layer and goes through a series of embedding, normalization and attention-mechanism training steps to obtain the parameter relationship between the curve feature changes, at which point the model training is complete.
CN202310311018.2A 2023-03-28 2023-03-28 Sensor gas concentration detection method based on deep learning Transformer neural network Pending CN116304912A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310311018.2A CN116304912A (en) 2023-03-28 2023-03-28 Sensor gas concentration detection method based on deep learning Transformer neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310311018.2A CN116304912A (en) 2023-03-28 2023-03-28 Sensor gas concentration detection method based on deep learning Transformer neural network

Publications (1)

Publication Number Publication Date
CN116304912A true CN116304912A (en) 2023-06-23

Family

ID=86828639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310311018.2A Pending CN116304912A (en) 2023-03-28 2023-03-28 Sensor gas concentration detection method based on deep learning transducer neural network

Country Status (1)

Country Link
CN (1) CN116304912A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116559681A (en) * 2023-07-12 2023-08-08 安徽国麒科技有限公司 Retired battery capacity prediction method and device based on deep learning time sequence algorithm
CN117091799A (en) * 2023-10-17 2023-11-21 湖南一特医疗股份有限公司 Intelligent three-dimensional monitoring method and system for oxygen supply safety of medical center
CN117091799B (en) * 2023-10-17 2024-01-02 湖南一特医疗股份有限公司 Intelligent three-dimensional monitoring method and system for oxygen supply safety of medical center
CN117768207A (en) * 2023-12-24 2024-03-26 中国人民解放军61660部队 Network flow unsupervised anomaly detection method based on improved transducer reconstruction model
CN117768207B (en) * 2023-12-24 2024-10-18 中国人民解放军61660部队 Network flow unsupervised anomaly detection method based on improved transducer reconstruction model
CN118098443A (en) * 2024-04-29 2024-05-28 四川希尔得科技有限公司 Online upgrading system and method for infrared gas sensor
CN118538315A (en) * 2024-07-25 2024-08-23 自然资源部第一海洋研究所 Deep learning-based ocean subsurface chlorophyll a concentration prediction method
CN118538315B (en) * 2024-07-25 2024-10-15 自然资源部第一海洋研究所 Deep learning-based ocean subsurface chlorophyll a concentration prediction method
CN118706932A (en) * 2024-08-28 2024-09-27 成都益清源科技有限公司 VOCs/TVOC detection method based on photoionization sensor

Similar Documents

Publication Publication Date Title
CN116304912A (en) Sensor gas concentration detection method based on deep learning Transformer neural network
CN110059357B (en) Intelligent ammeter fault classification detection method and system based on self-coding network
CN111504676B (en) Equipment fault diagnosis method, device and system based on multi-source monitoring data fusion
CN110632572A (en) Radar radiation source individual identification method and device based on unintentional phase modulation characteristics
Shajihan et al. CNN based data anomaly detection using multi-channel imagery for structural health monitoring
CN115184054B (en) Mechanical equipment semi-supervised fault detection and analysis method, device, terminal and medium
CN116340881A (en) Self-adaptive post-fusion detection method for gas sensor array
CN111815561B (en) Pipeline defect and pipeline assembly detection method based on depth space-time characteristics
CN111753877B (en) Product quality detection method based on deep neural network migration learning
CN115169430A (en) Cloud network end resource multidimensional time sequence anomaly detection method based on multi-scale decoding
CN114679310A (en) Network information security detection method
CN116204770A (en) Training method and device for detecting abnormality of bridge health monitoring data
Shen et al. SSCT-Net: A semisupervised circular teacher network for defect detection with limited labeled multiview MFL samples
Eo et al. Deep learning framework with essential pre-processing techniques for improving mixed-gas concentration prediction
CN111031064A (en) Method for detecting power grid false data injection attack
CN113326744B (en) Method and system for detecting on-orbit state abnormity of spacecraft
CN117056865B (en) Method and device for diagnosing operation faults of machine pump equipment based on feature fusion
CN112069621B (en) Method for predicting residual service life of rolling bearing based on linear reliability index
CN117608959A (en) Domain countermeasure migration network-based flight control system state monitoring method
CN107689015A (en) A kind of improved power system bad data recognition method
CN117055527A (en) Industrial control system abnormality detection method based on variation self-encoder
CN116702005A (en) Neural network-based data anomaly diagnosis method and electronic equipment
CN110287924A (en) A kind of soil parameters classification method based on GRU-RNN model
Zhao et al. Research on Transformer Oil Multi-frequency Ultrasonic Monitoring Technology Based on Convolutional Neural Network
CN113657149A (en) Electric energy quality analysis and identification method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination