CN113780138B - Self-adaptive robustness VOCs gas leakage detection method, system and storage medium - Google Patents


Info

Publication number
CN113780138B
CN113780138B (application number CN202111013939.8A)
Authority
CN
China
Prior art keywords
data
training
dimensional
neural network
vocs
Prior art date
Legal status
Active
Application number
CN202111013939.8A
Other languages
Chinese (zh)
Other versions
CN113780138A (en)
Inventor
曹洋
谭几方
康宇
夏秀山
许镇义
Current Assignee
Institute of Advanced Technology University of Science and Technology of China
Original Assignee
Institute of Advanced Technology University of Science and Technology of China
Priority date
Filing date
Publication date
Application filed by Institute of Advanced Technology, University of Science and Technology of China
Priority to CN202111013939.8A
Publication of CN113780138A
Application granted
Publication of CN113780138B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention relates to an adaptive, robust VOCs gas leakage detection method, system and storage medium, comprising the following steps: acquiring infrared video data and performing preprocessing; extracting one-dimensional time-series feature data of pixel points of a certain length from the infrared video data and training a one-dimensional convolutional neural network classifier; using the one-dimensional convolutional neural network classifier and feeding its output values into the EVT algorithm within a Bayesian framework to train the parameters α_0 and β_0 of the prior gamma distribution; and inputting the relevant parameters, adjusting the threshold through an adaptive algorithm, and outputting the prediction result. The method makes full use of the spatio-temporal characteristics of pixel points in VOCs gas regions of infrared video data to pre-screen infrared video images with the convolutional neural network, optimizes the screening threshold through extreme value theory in a Bayesian framework, approximates the right tail of the score probability density function with an exponential distribution, and uses a gamma conjugate prior learned from the training data, thereby reducing the variability of the error rate and improving overall performance.

Description

Self-adaptive robustness VOCs gas leakage detection method, system and storage medium
Technical Field
The invention relates to the technical field of VOCs gas leakage detection within environmental monitoring, and in particular to an adaptive, robust VOCs gas leakage detection method, system and storage medium based on extreme value theory.
Background
In recent years, with the rapid development of the petrochemical industry, production safety has become increasingly important. Leakage of volatile organic compounds (VOCs) can cause human health problems such as cancer, birth defects and reproductive harm. VOCs also contribute to the formation of ozone, a major component of smog and one of the leading causes of respiratory disease in urban areas and in areas near oil refineries and chemical plants. Detection and management of VOCs has therefore become a focus of current air-quality governance.
Because VOCs gas absorbs infrared light, a VOCs leakage region appears darker than its surroundings in infrared video data (white-hot mode). Infrared video data are affected by factors such as illumination, temperature, humidity and climate, and these complex imaging conditions make VOCs gas leak detection unreliable. We therefore propose an adaptive, robust VOCs gas leakage detection method based on extreme value theory, which extracts pixel spatio-temporal information from infrared video data for leakage pre-screening and optimally adjusts the screening threshold using extreme value theory (EVT) in a Bayesian framework; by approximating the right tail of the score probability density function with an exponential distribution (a special case of the generalized Pareto distribution) and using a gamma conjugate prior learned from the training data, it reduces the variability of the error rate and improves overall performance. The method aims to achieve robust detection of VOCs gas leakage and is thus applicable under a variety of imaging conditions.
Disclosure of Invention
The adaptive, robust VOCs gas leakage detection method based on extreme value theory provided by the invention can achieve reliable VOCs leakage detection under complex imaging conditions.
In order to achieve the purpose, the invention adopts the following technical scheme:
The method comprises the following steps:
step 1: acquiring data of VOCs leakage areas and non-leakage areas in infrared video data to carry out preprocessing operation;
step 2: extracting one-dimensional time sequence characteristic data of pixel points with a certain length from infrared video data, and training a one-dimensional convolutional neural network classifier;
step 3: sampling the spatio-temporal features of a plurality of pixel points from the infrared video data a plurality of times, using the one-dimensional convolutional neural network classifier, feeding the output values into the EVT algorithm within a Bayesian framework, and training the parameters α_0 and β_0 of the prior gamma distribution;
step 4: inputting the relevant parameters, adjusting the threshold through an adaptive algorithm, and outputting the prediction result;
wherein the step 2: extracting one-dimensional time sequence characteristic data of pixel points with a certain length from infrared video data, and training a one-dimensional convolutional neural network classifier; the method specifically comprises the following subdivision steps S21-S23:
step S21: extracting one pixel from each 8×8 or 16×16 block of the VOCs gas region of segmented-scene video frames containing a VOCs leak, forming a plurality of one-dimensional pixel time-series leakage samples (X_L, 1) of length L, L being the number of scene frames, where the label 1 indicates that the data comes from a region with a VOCs leak, and X_L = [x'_1 x'_2 ... x'_L]^T; meanwhile, extracting in the same way a plurality of one-dimensional pixel time-series normal samples (X_L, 0) of the same length from the VOCs gas regions of segmented scenes without a VOCs leak, where the label 0 indicates that the data comes from a normal region;
step S22: firstly performing numerical normalization on the extracted one-dimensional pixel time-series data X_L so that each element satisfies 0 ≤ x'_i ≤ 1, i = 1, 2, ..., L, and then zero-centering each element x'_i; the two processed classes of data are then each split, with 80% used as training data and 20% as validation data;
step S23: training the one-dimensional convolutional neural network classifier with the processed training data, the input of the classifier being the one-dimensional pixel time-series data X_L and the output being D(X_L), where D(X_L) ∈ (0, 1); training stops when the classification accuracy of the classifier on the validation data set exceeds 98%, thereby obtaining the one-dimensional convolutional neural network classification model;
the step 3: sampling the space-time characteristics of several pixel points from the infrared video data for many times, using one-dimensional convolution neural network classifier, and importing the output value into the EVT calculation in Bayes frameMethod, training parameter alpha of prior gamma distribution 0 And beta 0 Specifically, the method includes the following steps S31 to S34:
step S31: randomly extracting one pixel from each 8×8 or 16×16 block of the dark part of the segmented-scene video frames to be detected, obtaining K one-dimensional pixel time series X_L of length L, and sending them to the one-stage one-dimensional convolutional neural network classifier to obtain the outputs D(X_L), where D(X_L) ∈ (0, 1);
Step S32: taking the K vectors X_L = [x'_1 x'_2 ... x'_L]^T as the data X and the K outputs D(X_L) as the corresponding label sequence y, constructing the EVT training-algorithm data set T;
step S33: picking out the negative samples g = {x_i | y_i = 0} from the data set T, finding the upper threshold u of the negative samples g according to the right-tail probability p_u, extracting the right tail t above the threshold, and updating the sufficient statistics n and s of the data not marked as anomalous;
step S34: adjusting the parameters α_0 and β_0 of the prior gamma distribution, expressed as

α_0 = 1 + w_0

β_0 = w_0 · s / n

where w_0 is the weight assigned to the sample counts of the training set;
the step 4, namely inputting the relevant parameters, adjusting the threshold through the adaptive algorithm and outputting the prediction result, specifically comprises the following sub-steps S41 to S43:
step S41: constructing a data set from the segmented-scene video frames to be detected using the one-dimensional convolutional neural network classifier; in the manner of step S33, finding the upper threshold of the data set, taking out all samples of the right tail t1, and performing Kolmogorov-Smirnov (KS) tests to find and eliminate anomalies, the KS statistic being

D_n = sup_x |F_n(x) - F(x)|

where F_n(x) = (1/n) Σ_{i=1}^{n} 1{x_i ≤ x} is the empirical distribution function of the n tail samples and F(x) is the fitted tail distribution; after D_n is computed, the largest sample is removed and D_n is recomputed from the remaining samples; the loop continues, and finally the index of the sample with the smallest D_n is selected and recorded;
Step S42: after removing the anomaly, selecting to
Figure GDA0003749844670000046
For the threshold, the right tail is extracted, the a priori estimate during training is used to calculate the a posteriori of the whole sequence, updated to α 1 And beta 1
Step S43: the posterior obtained by calculation in S42 is used as a prior, and a sample x is set j A window W as a center, denoted by
Figure GDA0003749844670000047
According to the right tail probability p u Searching the upper limit threshold u2 of the window, extracting the right tail part, and adjusting alpha, beta and
Figure GDA0003749844670000051
by calculating to obtain y j Is shown as
Figure GDA0003749844670000052
Wherein p is f Continuously and circularly obtaining the label values y of all samples for the target error rate j
Further, the step 1: acquiring infrared video data with and without leakage of VOCs and preprocessing the data, wherein the method specifically comprises the following subdivision steps S11-S12:
step S11: acquiring infrared video data with and without leakage of VOCs;
step S12: and carrying out preprocessing operations of random rotation, frame size normalization and scene segmentation on the infrared video data.
On the other hand, the invention also discloses an adaptive robustness VOCs gas leakage detection system based on the extreme value theory, which comprises the following units,
the data acquisition and processing unit is used for acquiring data of VOCs leakage areas and non-leakage areas in the infrared video data to carry out preprocessing operation;
the one-dimensional network structure training unit is used for extracting one-dimensional time sequence characteristic data of pixel points with a certain length from the infrared video data and training a one-dimensional convolutional neural network classifier;
a parameter determination unit for sampling the spatio-temporal features of a plurality of pixel points from the infrared video data a plurality of times, using the one-dimensional convolutional neural network classifier, feeding the output values into the EVT algorithm within a Bayesian framework, and training the parameters α_0 and β_0 of the prior gamma distribution;
The prediction unit is used for inputting relevant parameters, adjusting a threshold value through a self-adaptive algorithm and outputting a prediction result;
the one-dimensional network structure training unit comprises the following specific processing steps:
step S21: extracting one pixel from each 8×8 or 16×16 block of the VOCs gas region of segmented-scene video frames containing a VOCs leak, forming a plurality of one-dimensional pixel time-series leakage samples (X_L, 1) of length L, L being the number of scene frames, where the label 1 indicates that the data comes from a region with a VOCs leak, and X_L = [x'_1 x'_2 ... x'_L]^T; meanwhile, extracting in the same way a plurality of one-dimensional pixel time-series normal samples (X_L, 0) of the same length from the VOCs gas regions of segmented scenes without a VOCs leak, where the label 0 indicates that the data comes from a normal region;

step S22: firstly performing numerical normalization on the extracted one-dimensional pixel time-series data X_L so that each element satisfies 0 ≤ x'_i ≤ 1, i = 1, 2, ..., L, and then zero-centering each element x'_i; the two processed classes of data are then each split, with 80% used as training data and 20% as validation data;

step S23: training the one-dimensional convolutional neural network classifier with the processed training data, the input of the classifier being the one-dimensional pixel time-series data X_L and the output being D(X_L), where D(X_L) ∈ (0, 1); training stops when the classification accuracy of the classifier on the validation data set exceeds 98%, thereby obtaining the one-dimensional convolutional neural network classification model;
the parameter determination unit comprises the following specific processing steps:
step S31: randomly extracting one pixel from each 8×8 or 16×16 block of the dark part of the segmented-scene video frames to be detected, obtaining K one-dimensional pixel time series X_L of length L, and sending them to the one-dimensional convolutional neural network classifier to obtain the outputs D(X_L), where D(X_L) ∈ (0, 1);

step S32: taking the K vectors X_L = [x'_1 x'_2 ... x'_L]^T as the data X and the K outputs D(X_L) as the corresponding label sequence y, constructing the EVT training-algorithm data set T;

step S33: picking out the negative samples g = {x_i | y_i = 0} from the data set T, finding the upper threshold u of the negative samples g according to the right-tail probability p_u, extracting the right tail t above the threshold, and updating the sufficient statistics n and s of the data not marked as anomalous;

step S34: adjusting the parameters α_0 and β_0 of the prior gamma distribution, expressed as

α_0 = 1 + w_0

β_0 = w_0 · s / n

where w_0 is the weight assigned to the sample counts of the training set;
the prediction unit comprises the following specific processing steps:
step S41: constructing a data set from the segmented-scene video frames to be detected using the one-dimensional convolutional neural network classifier; in the manner of step S33, finding the upper threshold of the data set, taking out all samples of the right tail t1, and performing Kolmogorov-Smirnov (KS) tests to find and eliminate anomalies, the KS statistic being

D_n = sup_x |F_n(x) - F(x)|

where F_n(x) = (1/n) Σ_{i=1}^{n} 1{x_i ≤ x} is the empirical distribution function of the n tail samples and F(x) is the fitted tail distribution; after D_n is computed, the largest sample is removed and D_n is recomputed from the remaining samples; the loop continues, and finally the index of the sample with the smallest D_n is selected and recorded;
Step S42: after removing the anomalies, taking the value recorded in step S41 as the threshold, extracting the right tail, and using the prior estimated during training to compute the posterior over the whole sequence, updating the parameters to α_1 and β_1;
Step S43: taking the posterior computed in S42 as the prior, setting a window W centered on sample x_j, finding the upper threshold u2 of the window according to the right-tail probability p_u, extracting the right tail, and adjusting α, β and the adaptive threshold; the label y_j is then obtained by comparing x_j against the threshold corresponding to the target error rate p_f; looping continuously yields the label values y_j of all samples;
In yet another aspect, the invention further discloses a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above adaptive, robust VOCs gas leakage detection method based on extreme value theory.
According to the above technical scheme, the adaptive, robust VOCs gas leakage detection method and system based on extreme value theory overcome the shortcomings of existing methods: they make full use of the spatio-temporal characteristics of pixel points in VOCs gas regions of infrared video data to pre-screen infrared video images with a convolutional neural network, optimize the screening threshold through extreme value theory (EVT) in a Bayesian framework, approximate the right tail of the score probability density function with an exponential distribution (a special case of the generalized Pareto distribution), and use a gamma conjugate prior learned from the training data, thereby reducing the variability of the error rate and improving overall performance. Robust detection of VOCs leakage is thus achieved.
Drawings
FIG. 1 is a schematic diagram of an overall network model of the method of the present invention;
FIG. 2 is a graph showing the results of the experiment according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
As shown in fig. 1, the adaptive robust VOCs gas leakage detection method based on the extremum theory in this embodiment includes the following steps:
step 1: acquiring data of VOCs leakage areas and non-leakage areas in infrared video data to carry out preprocessing operation;
and 2, step: and extracting one-dimensional time sequence characteristic data of pixel points with a certain length from the infrared video data, and training a one-dimensional convolutional neural network classifier.
And step 3: sampling the space-time characteristics of a plurality of pixel points from infrared video data for a plurality of times, using a one-dimensional convolution neural network classifier, leading the output value into an EVT algorithm in a Bayesian framework, and training the parameter alpha of prior gamma distribution 0 And beta 0
And 4, step 4: and inputting related parameters, adjusting a threshold value through a self-adaptive algorithm, and outputting a prediction result.
The following is a detailed description:
further, the step 1: and acquiring infrared video data with and without VOCs leakage and preprocessing the data. The method specifically comprises the following subdivision steps S11-S12:
s11: and acquiring infrared video data with VOCs leakage and no leakage.
S12: and carrying out preprocessing operations such as random rotation, frame size normalization, scene segmentation and the like on the infrared video data.
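The preprocessing of steps S11 and S12 can be sketched as follows. This is a minimal numpy illustration: the rotation scheme (multiples of 90°), the nearest-neighbour resizing, the output size and the function names are all assumptions for illustration, since the patent does not specify them beyond naming the three operations.

```python
import numpy as np

def preprocess_frames(frames, out_hw=(240, 320), clip_len=160, rng=None):
    """Random rotation, frame-size normalization, and scene segmentation
    into fixed-length clips (all parameter values are illustrative)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k = int(rng.integers(0, 4))            # random rotation by k * 90 degrees
    frames = [np.rot90(f, k) for f in frames]
    h, w = out_hw
    resized = []
    for f in frames:                       # nearest-neighbour size normalization
        ri = np.arange(h) * f.shape[0] // h
        ci = np.arange(w) * f.shape[1] // w
        resized.append(f[np.ix_(ri, ci)])
    video = np.stack(resized)              # shape (T, H, W)
    # segment into clips of clip_len frames, dropping the remainder
    n_clips = video.shape[0] // clip_len
    return video[: n_clips * clip_len].reshape(n_clips, clip_len, h, w)
```

A real pipeline would use a proper image library for resizing; the indexing here only keeps the example dependency-free.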
Further, the step 2: and extracting one-dimensional time sequence characteristic data of pixel points with a certain length from the infrared video data, and training a one-dimensional convolutional neural network classifier. The method specifically comprises the following subdivision steps S21-S23:
s21: extracting a pixel from each 8×8 or 16×16 block of the dark part (the VOCs gas region) of segmented-scene video frames containing a VOCs leak, forming a plurality of one-dimensional pixel time-series leakage samples (X_L, 1) of length L (the number of scene frames; L is 160 in the invention), where the label 1 indicates that the data comes from a region with a VOCs leak, and X_L = [x'_1 x'_2 ... x'_L]^T; meanwhile, extracting in the same way a plurality of one-dimensional pixel time-series normal samples (X_L, 0) of the same length from the dark parts of segmented scenes without a VOCs leak, where the label 0 indicates that the data comes from a normal region.
S22: firstly performing numerical normalization on the extracted one-dimensional pixel time-series data X_L so that each element satisfies 0 ≤ x'_i ≤ 1, i = 1, 2, ..., L, and then zero-centering each element x'_i. The two processed classes of data are then each split, with 80% used as training data and 20% as validation data.
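Steps S21 and S22 (one sampled pixel per block, min-max normalization to [0, 1], zero-centering, and the 80/20 train/validation split) can be sketched as follows; the function name and the choice of one uniformly random pixel per block are illustrative assumptions.

```python
import numpy as np

def make_dataset(clip, label, block=8, rng=None):
    """Build (X_L, label) pairs from a (L, H, W) clip: one random pixel
    per block x block patch, min-max normalized then zero-centered."""
    rng = rng if rng is not None else np.random.default_rng(0)
    L, H, W = clip.shape
    series, labels = [], []
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            i = r + int(rng.integers(block))
            j = c + int(rng.integers(block))
            x = clip[:, i, j].astype(float)
            span = x.max() - x.min()
            x = (x - x.min()) / span if span > 0 else np.zeros(L)
            series.append(x - x.mean())    # zero-centering
            labels.append(label)           # 1 = leak region, 0 = normal
    X, y = np.asarray(series), np.asarray(labels)
    n_train = int(0.8 * len(X))            # 80/20 train/validation split
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```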
S23: training the one-dimensional convolutional neural network classifier with the processed training data, the input of the one-stage classifier being the one-dimensional pixel time-series data X_L and the output being D(X_L), where D(X_L) ∈ (0, 1); training stops when the classification accuracy of the classifier on the validation data set exceeds 98%, thereby obtaining the one-stage classification model shown in Table 1;
Table 1
one-stage network architecture
(The one-stage network architecture of Table 1 is rendered as an image in the original and is not reproduced here.)
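Since the Table 1 architecture survives only as an image, the following toy forward pass merely shows the shape of such a classifier: two conv+pool stages and a sigmoid head producing D(X_L) ∈ (0, 1). The layer sizes, the pure-numpy implementation and the parameter names are assumptions, not the patent's actual network.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution of signal x with kernel w, bias b, then ReLU."""
    k = len(w)
    out = np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)]) + b
    return np.maximum(out, 0.0)

def tiny_1d_cnn(x, params):
    """Toy one-dimensional CNN classifier on a length-L pixel series."""
    h = conv1d(x, params["w1"], params["b1"])
    h = h[: len(h) // 2 * 2].reshape(-1, 2).max(axis=1)   # max-pool, stride 2
    h = conv1d(h, params["w2"], params["b2"])
    h = h[: len(h) // 2 * 2].reshape(-1, 2).max(axis=1)
    z = h @ params["wf"][: len(h)] + params["bf"]          # dense head
    return 1.0 / (1.0 + np.exp(-z))                        # sigmoid output
```

In practice such a classifier would be defined and trained in a deep-learning framework; this sketch only fixes the input/output contract used by the later EVT steps.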
Further, the step 3: sampling the spatio-temporal features of a plurality of pixel points from the infrared video data a plurality of times, using the one-dimensional convolutional neural network classifier, feeding the output values into the EVT algorithm within a Bayesian framework, and training the parameters α_0 and β_0 of the prior gamma distribution. The EVT training algorithm process is shown in Table 2;
the method specifically comprises the following subdivision steps S31-S34:
s31: randomly extracting one pixel from each 8×8 or 16×16 block of the dark part of the segmented-scene video frames to be detected, obtaining K one-dimensional pixel time series X_L of length L (the number of scene frames), and sending them into the one-stage one-dimensional convolutional neural network to obtain the outputs D(X_L), where D(X_L) ∈ (0, 1).
S32: taking the K vectors X_L = [x'_1 x'_2 ... x'_L]^T as the data X and the K outputs D(X_L) as the corresponding label sequence y, constructing the EVT training-algorithm data set T.
S33: picking out the negative samples g = {x_i | y_i = 0} from the data set T and substituting them into the equation |{g_i > u}| = |g| · p_u, which finds the upper threshold u of the negative samples g according to the right-tail probability p_u, where the parameter p_u is the probability mass of the right tail. The right tail t is extracted above the threshold, and the sufficient statistics n and s of the data not marked as anomalous are updated, which can be expressed as

n_{j+1} = n_j + |t|

s_{j+1} = s_j + Σ t

where n_0 and s_0 are initialized to 0.
S34: adjusting the parameters α_0 and β_0 of the prior gamma distribution, which can be expressed as

α_0 = 1 + w_0

β_0 = w_0 · s / n

where w_0 is the weight assigned to the sample counts of the training set.
Table 2
EVT training algorithm details
(The EVT training algorithm details of Table 2 are rendered as an image in the original and are not reproduced here.)
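Steps S31 to S34 can be sketched as follows. The threshold search and sufficient statistics follow the text above; the expression beta_0 = w_0 * s / n is our reading of the image-only formula (it matches the empirical tail rate n / s as the prior mean) and may differ from the patent's exact expression.

```python
import numpy as np

def evt_train_prior(scores, labels, p_u=0.02, w_0=10.0):
    """EVT training step: threshold u so that a fraction p_u of the
    negative samples lies above it, tail sufficient statistics (n, s),
    and gamma prior parameters (alpha_0, beta_0). p_u and w_0 values
    are illustrative; the patent does not state them."""
    g = np.sort(scores[labels == 0])                 # negative samples
    u = g[int(np.ceil((1.0 - p_u) * len(g))) - 1]    # right-tail threshold
    tail = g[g > u] - u                              # tail excesses above u
    n, s = len(tail), float(tail.sum())              # sufficient statistics
    alpha_0 = 1.0 + w_0
    beta_0 = w_0 * s / n if n else w_0               # hedged reconstruction
    return u, n, s, alpha_0, beta_0
```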
Further, the step 4: inputting the relevant parameters, adjusting the threshold through the adaptive algorithm, and outputting the prediction result; the details of the EVT adaptive-threshold algorithm are shown in Table 3;
the method specifically comprises the following subdivision steps S41-S43:
s41: constructing a data set from the segmented-scene video frames to be detected using the one-dimensional convolutional neural network classifier. In the manner of step S33, find the upper threshold of the data set, take out all samples of the right tail t1, and perform a series of Kolmogorov-Smirnov (KS) tests to find and eliminate anomalies. The KS statistic is

D_n = sup_x |F_n(x) - F(x)|

where F_n(x) = (1/n) Σ_{i=1}^{n} 1{x_i ≤ x} is the empirical distribution function of the n tail samples and F(x) is the fitted tail distribution. D_n is first computed and denoted D_{n,1}; then the largest sample is removed and the remaining samples are used to compute D_{n,2}. Iterating continuously yields the sequence D_{n,1}, D_{n,2}, ..., and finally the index i with the smallest D_{n,i} is selected and recorded.
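The KS anomaly-elimination loop of step S41 can be sketched as follows, under the assumption that the tail excesses are tested against a fitted exponential tail F(x) = 1 - exp(-lam * x) (the patent's F survives only as an image):

```python
import numpy as np

def ks_prune(tail, lam):
    """Repeatedly drop the largest tail sample and keep the trimming
    whose one-sample KS statistic against Exp(lam) is smallest."""
    t = np.sort(tail)
    best_i, best_d = 0, np.inf
    for i in range(len(t) - 1):              # i = number of removed top samples
        x = t[: len(t) - i]
        n = len(x)
        F = 1.0 - np.exp(-lam * x)           # fitted exponential tail CDF
        Fn_hi = np.arange(1, n + 1) / n      # empirical CDF just after each x_k
        Fn_lo = np.arange(0, n) / n          # empirical CDF just before each x_k
        d = max(np.max(Fn_hi - F), np.max(F - Fn_lo))
        if d < best_d:
            best_i, best_d = i, d
    return t[: len(t) - best_i], best_d      # pruned tail and its KS statistic
```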
S42: after removing the anomalies, take the value recorded in step S41 as the threshold, extract the right tail according to step S33, and use the prior estimated during training to compute the posterior over the whole sequence, updating the parameters to α_1 and β_1.
S43: take the posterior computed in S42 as the prior. Set a window centered on the sample, denoted W, and find its upper threshold u2 from |{w_i > u2}| = |W| · p_u. Extract the right tail according to S33 and adjust α, β and the adaptive threshold; the label y_j is then obtained by comparing x_j against the threshold corresponding to the target error rate p_f. Continuous iteration yields the adjusted labels y_j of all samples, i.e. an adaptive threshold is achieved.
Table 3
EVT adaptive threshold algorithm details
(The EVT adaptive-threshold algorithm details of Table 3 are rendered as an image in the original and are not reproduced here.)
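Step S43's window-based adaptation can be sketched as follows. The decision formula survives only as an image, so the quantile thr = u2 + ln(p_u / p_f) / lam, with lam = alpha / beta the posterior-mean tail rate, is an assumption consistent with the exponential-tail model rather than the patent's exact expression; all default parameter values are illustrative.

```python
import numpy as np

def adaptive_labels(scores, alpha_0, beta_0, p_u=0.02, p_f=1e-3, win=200):
    """For each sample x_j: window around it, right-tail threshold u2 at
    probability p_u, gamma posterior update, and a flag if x_j exceeds
    the quantile at the target error rate p_f."""
    y = np.zeros(len(scores), dtype=int)
    for j in range(len(scores)):
        lo = max(0, j - win // 2)
        w = np.sort(scores[lo: lo + win])
        u2 = w[int(np.ceil((1.0 - p_u) * len(w))) - 1]
        excess = w[w > u2] - u2
        alpha = alpha_0 + len(excess)            # gamma posterior update
        beta = beta_0 + excess.sum()
        lam = alpha / beta                       # posterior-mean tail rate
        thr = u2 + np.log(p_u / p_f) / lam       # exponential-tail quantile
        y[j] = int(scores[j] > thr)
    return y
```

Note that the window around an extreme sample raises its own threshold, which is precisely the robustness-to-imaging-conditions behaviour the patent claims for the adaptive step.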
FIG. 2 shows frames with VOCs leakage in the infrared video; frames not identified as VOCs leakage by the one-dimensional convolutional neural network are passed to the EVT threshold-adaptive algorithm, which extracts the right tail and computes y_j, the red boxes marking the leakage areas. It can be seen that the method of the present invention effectively detects VOCs gas leakage, and that the model with the EVT algorithm achieves higher detection performance than the one-dimensional CNN alone across different scenes.
In summary, the adaptive, robust VOCs gas leakage detection method based on extreme value theory uses a one-dimensional convolutional neural network to achieve rapid pre-detection of VOCs gas leakage while reducing computational cost, so that the algorithm can run on devices with limited performance; by adopting EVT-based threshold adaptation, the robustness of VOCs leakage detection in infrared video data is improved and misidentification caused by factors such as illumination, temperature and climate is reduced.
On the other hand, the invention also discloses an adaptive robustness VOCs gas leakage detection system based on extreme value theory, which comprises the following units,
the data acquisition and processing unit is used for acquiring data of VOCs leakage areas and non-leakage areas in the infrared video data to carry out preprocessing operation;
the one-dimensional network structure training unit is used for extracting one-dimensional time sequence characteristic data of pixel points with a certain length from the infrared video data and training a one-dimensional convolutional neural network classifier;
a parameter determination unit for sampling the spatio-temporal features of a plurality of pixel points from the infrared video data a plurality of times, using the one-dimensional convolutional neural network classifier, feeding the output values into the EVT algorithm within a Bayesian framework, and training the parameters α_0 and β_0 of the prior gamma distribution;
And the prediction unit is used for inputting the related parameters, adjusting the threshold value through a self-adaptive algorithm and outputting a prediction result.
Further, the one-dimensional network structure training unit specifically comprises the following processing steps:
S21: extracting one pixel from each 8×8 or 16×16 block of the VOCs gas area of segmented scene video frames with VOCs leakage, forming a number of one-dimensional pixel time-series leakage samples (X_L, 1) of length L, where L is the number of scene frames, the label 1 indicates that the data come from a VOCs leakage area, and X_L = [x'_1 x'_2 ... x'_L]^T; meanwhile, one-dimensional pixel time-series normal samples (X_L, 0) of the same length are extracted in the same way from the gas regions of segmented scenes without VOCs leakage, where the label 0 indicates that the data come from a normal region;
S22: first, numerically normalize the extracted pixel time series X_L so that every element satisfies 0 ≤ x'_i ≤ 1, i = 1, 2, ..., L, and then zero-mean each element x'_i of the series; next, split the two classes of processed data, using 80% as training data and 20% as validation data;
S23: train the one-dimensional convolutional neural network classifier with the processed training data; the input of the one-stage classifier is the pixel time series X_L and the output is D(X_L), where D(X_L) ∈ (0, 1); training stops when the classification accuracy on the validation data set exceeds 98%, yielding the one-stage classification model.
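The per-pixel preprocessing and data split of steps S21–S23 can be sketched as follows. This is an illustrative sketch only: the min-max form of the normalization and all function names are assumptions, since the patent fixes only the constraints (elements in [0, 1], then zero-mean, then an 80/20 split).

```python
# Sketch of the preprocessing in steps S21-S23 (illustrative; min-max
# normalization and function names are assumptions, not the patent's text).
import numpy as np

def preprocess_series(x_l):
    """Normalize a 1-D pixel time series to [0, 1], then zero-mean it (S22)."""
    x = np.asarray(x_l, dtype=float)
    rng = x.max() - x.min()
    x = (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return x - x.mean()                      # zero-equalization

def train_val_split(samples, labels, train_frac=0.8, seed=0):
    """80/20 split of (X_L, label) pairs, as described in S22."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(train_frac * len(samples))
    tr, va = idx[:cut], idx[cut:]
    return (samples[tr], labels[tr]), (samples[va], labels[va])
```

The split feeds the one-stage classifier of S23, whose output D(X_L) ∈ (0, 1) is consumed by the EVT stage below.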
Further, the specific processing steps of the parameter determination unit are as follows:
S31: randomly extract one pixel from each 8×8 or 16×16 block of the dark part of the segmented scene video frame to be detected, obtaining K one-dimensional pixel time series X_L of length L; feed them into the one-stage one-dimensional convolutional neural network to obtain the outputs D(X_L), where D(X_L) ∈ (0, 1);
S32: take the K series X_L = [x'_1 x'_2 ... x'_L]^T as the data X and the K outputs D(X_L) as the corresponding label sequence y, constructing the EVT training data set T;
S33: select the negative samples g = {x_i | y_i = 0} from the data set T and substitute them into the relation #{g_i > u} = |g|·p_u to find the upper threshold u, where p_u is the right-tail probability; extract the right tail t above the threshold and update the sufficient statistics n and s of the data not marked as abnormal, expressed as

n_{j+1} = n_j + |t|
s_{j+1} = s_j + Σt

where n_0 and s_0 are initialized to 0;
S34: adjust the parameters α_0 and β_0 of the prior gamma distribution, expressed as

α_0 = 1 + w_0
β_0 = [formula given only as an equation image in the original]

where w_0 is the weight assigned to the sample count of the training set.
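Steps S31–S34 can be sketched as below. The choice of u as the (1 − p_u) quantile of the negative scores and the closed form of β_0 (here w_0·s/n) are assumptions; the patent gives the β_0 expression only as an equation image.

```python
# Sketch of steps S31-S34: extract the right tail of the negative scores and
# initialize the gamma prior.  alpha_0 = 1 + w_0 follows the text; the beta_0
# expression (w_0 * s / n) is an assumption -- the original shows it only as
# an equation image.
import numpy as np

def upper_threshold(g, p_u):
    """Threshold u such that a fraction p_u of the negative scores lies above it."""
    return np.quantile(np.asarray(g, dtype=float), 1.0 - p_u)

def init_gamma_prior(g, p_u, w_0):
    g = np.asarray(g, dtype=float)
    u = upper_threshold(g, p_u)
    tail = g[g > u]                      # right tail t (S33)
    n = len(tail)                        # sufficient statistics of the tail
    s = tail.sum()
    alpha_0 = 1.0 + w_0
    beta_0 = w_0 * s / n if n else w_0   # assumed form of the image formula
    return u, n, s, alpha_0, beta_0
```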
Further, the prediction unit specifically comprises the following processing steps:
S41: apply the one-dimensional convolutional neural network to the segmented scene video frames to be detected to construct a data set; in the manner of step S33, find the upper threshold of this data set and take out all samples of the right tail t1 to perform a series of Kolmogorov-Smirnov (KS) tests to find and eliminate anomalies. The KS statistic is

D_n = sup_x |F_n(x) − F(x)|

where F_n(x) = (1/n) Σ_{i=1}^{n} 1{x_i ≤ x} is the empirical distribution function of the tail samples and F is the fitted reference distribution (the remaining component formulas appear only as equation images in the original). The statistic computed on the full tail is D_n, also written D_{n,1}; the largest sample is then removed and the remaining samples are used to compute D_{n,2}, iterating continuously until the final D_{n,i} is obtained. Finally, the index of the minimum D_{n,i} is selected and noted

i* = argmin_i D_{n,i}
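The iterative KS screening of step S41 can be sketched as follows, assuming (as the surrounding text suggests but the equation images do not confirm) that the reference distribution F is an exponential fitted to the tail excesses, and that iteration stops once removing the largest sample no longer decreases D_n.

```python
# Sketch of the iterative KS screening in S41.  The exponential reference CDF
# and the stopping rule are assumptions; the original's component formulas
# appear only as equation images.
import numpy as np

def ks_stat(excesses):
    """One-sample KS distance between the empirical CDF of the tail excesses
    and an exponential CDF with the matching mean."""
    x = np.sort(np.asarray(excesses, dtype=float))
    n = len(x)
    F = 1.0 - np.exp(-x / x.mean())          # fitted exponential CDF
    ecdf_hi = np.arange(1, n + 1) / n        # ECDF just after each sample
    ecdf_lo = np.arange(0, n) / n            # ECDF just before each sample
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

def screen_tail(excesses, min_keep=3):
    """Remove the largest sample while doing so still lowers D_n; return the
    kept samples and the sequence D_{n,1}, D_{n,2}, ..."""
    x = sorted(excesses)
    ds = [ks_stat(x)]
    while len(x) > min_keep:
        d_next = ks_stat(x[:-1])
        ds.append(d_next)
        if d_next >= ds[-2]:                 # removal no longer helps: stop
            break
        x = x[:-1]                           # drop the largest sample
    return x, ds
```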
S42: after removing the anomalies, take the sample corresponding to i* as the threshold, extract the right tail, and use the prior estimated during training to calculate the posterior of the whole sequence, updating the parameters to α_1 and β_1;

S43: take the posterior calculated in S42 as the prior; set a window W centered on each sample x_j; according to the right-tail probability p_u, find the window's upper threshold u2, extract the right tail, and adjust α and β; the score y_j is then computed as

y_j = [score formula given only as an equation image in the original]

where p_f is the target error rate; by iterating continuously, the adjusted scores y_j are obtained, i.e., the adaptive threshold is realized.
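Under the standard exponential-likelihood/gamma-prior conjugate pair that the α, β notation suggests, the posterior update of S42 and the tail score of S43 can be sketched as below. The Lomax form of the score, P(X > x) = (β/(β + x))^α, is an assumption, since the patent's y_j formula appears only as an equation image.

```python
# Sketch of S42-S43 under an assumed exponential/gamma conjugate pair; the
# patent's own score formula is only an equation image in the original.
import numpy as np

def posterior_update(alpha, beta, tail_excesses):
    """Fold a window's right tail into the gamma posterior (S42)."""
    t = np.asarray(tail_excesses, dtype=float)
    return alpha + len(t), beta + t.sum()

def score(x_j, alpha, beta):
    """Posterior predictive probability that a sample exceeds x_j (Lomax tail)."""
    return (beta / (beta + x_j)) ** alpha

def flag_leak(x_j, alpha, beta, p_f):
    """Adaptive decision: flag x_j when its tail probability drops below the
    target error rate p_f (S43)."""
    return score(x_j, alpha, beta) < p_f
```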
The invention also discloses a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, causes the processor to carry out the steps of the method as described above.
It is understood that the system provided by the embodiment of the present invention corresponds to the method provided by the embodiment of the present invention; for explanations, examples, and beneficial effects of the related contents, refer to the corresponding parts of the method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (4)

1. An adaptive robust VOCs gas leakage detection method based on extreme value theory, characterized by comprising the following steps:
step 1: acquiring data of VOCs leakage areas and non-leakage areas in infrared video data to carry out preprocessing operation;
step 2: extracting one-dimensional time sequence characteristic data of pixel points with a certain length from infrared video data, and training a one-dimensional convolutional neural network classifier;
step 3: sampling the spatio-temporal features of a number of pixel points from the infrared video data multiple times, applying the one-dimensional convolutional neural network classifier, feeding the output values into the EVT algorithm within a Bayesian framework, and training the parameters α_0 and β_0 of the prior gamma distribution;
step 4: inputting the relevant parameters, adjusting the threshold through the adaptive algorithm, and outputting the prediction result;
wherein the step 2: extracting one-dimensional time sequence characteristic data of pixel points with a certain length from infrared video data, and training a one-dimensional convolutional neural network classifier; the method specifically comprises the following subdivision steps S21-S23:
step S21: extracting one pixel from each 8×8 or 16×16 block of the VOCs gas area of segmented scene video frames with VOCs leakage, forming a number of one-dimensional pixel time-series leakage samples (X_L, 1) of length L, where L is the number of scene frames, the label 1 indicates that the data come from a VOCs leakage area, and X_L = [x'_1 x'_2 ... x'_L]^T; meanwhile, one-dimensional pixel time-series normal samples (X_L, 0) of the same length are extracted in the same way from the gas regions of segmented scenes without VOCs leakage, where the label 0 indicates that the data come from a normal region;
step S22: first, numerically normalize the extracted pixel time series X_L so that every element satisfies 0 ≤ x'_i ≤ 1, i = 1, 2, ..., L, and then zero-mean each element x'_i; next, split the two classes of processed data, using 80% as training data and 20% as validation data;
step S23: train the one-dimensional convolutional neural network classifier with the processed training data; the input of the classifier is the pixel time series X_L and the output is D(X_L), where D(X_L) ∈ (0, 1); training stops when the classification accuracy on the validation data set exceeds 98%, yielding the one-dimensional convolutional neural network classification model;
the step 3: sampling the spatio-temporal features of a number of pixel points from the infrared video data multiple times, applying the one-dimensional convolutional neural network classifier, feeding the output values into the EVT algorithm within a Bayesian framework, and training the parameters α_0 and β_0 of the prior gamma distribution, specifically comprises the following subdivision steps S31 to S34:
step S31: randomly extract one pixel from each 8×8 or 16×16 block of the dark part of the segmented scene video frame to be detected, obtaining K one-dimensional pixel time series X_L of length L; feed them into the one-stage one-dimensional convolutional neural network classifier to obtain the outputs D(X_L), where D(X_L) ∈ (0, 1);
step S32: take the K series X_L = [x'_1 x'_2 ... x'_L]^T as the data X and the K outputs D(X_L) as the corresponding label sequence y, constructing the EVT training data set T;
step S33: select the negative samples g = {x_i | y_i = 0} from the data set T; according to the right-tail probability p_u, find the upper threshold u of the negative samples g, take out the right tail t above the threshold, and update the sufficient statistics n and s of the data not marked as abnormal;
step S34: adjust the parameters α_0 and β_0 of the prior gamma distribution, expressed as

α_0 = 1 + w_0
β_0 = [formula given only as an equation image in the original]

where w_0 is the weight assigned to the sample count of the training set;
the step 4: inputting relevant parameters, adjusting a threshold value through an adaptive algorithm, and outputting a prediction result, wherein the method specifically comprises the following subdivision steps S41-S43:
step S41: apply the one-dimensional convolutional neural network classifier to the segmented scene video frames to be detected to construct a data set; in the manner of step S33, find the upper threshold of the data set and take out all samples of the right tail t1 to perform Kolmogorov-Smirnov tests to find and eliminate anomalies; the Kolmogorov-Smirnov statistic is

D_n = sup_x |F_n(x) − F(x)|

where F_n(x) = (1/n) Σ_{i=1}^{n} 1{x_i ≤ x} is the empirical distribution function of the tail samples and F is the fitted reference distribution (the component formulas appear only as equation images in the original); after D_n is calculated, the largest sample is removed and D_n is recalculated with the remaining samples; cycling continuously, the label value of the sample with the smallest D_n is finally selected and noted i*;
step S42: after removing the anomalies, take the sample corresponding to i* as the threshold, extract the right tail, and use the prior estimated during training to calculate the posterior of the whole sequence, updating the parameters to α_1 and β_1;
step S43: take the posterior calculated in S42 as the prior; set a window W centered on each sample x_j; according to the right-tail probability p_u, find the window's upper threshold u2, extract the right tail, and adjust α and β; the score y_j is then computed as

y_j = [score formula given only as an equation image in the original]

where p_f is the target error rate; cycling continuously, the label values y_j of all samples are obtained.
2. The extremum theory-based adaptive robust VOCs gas leak detection method of claim 1, wherein: the step 1: acquiring infrared video data with and without leakage of VOCs and preprocessing the data, wherein the method specifically comprises the following subdivision steps S11-S12:
step S11: acquiring infrared video data with VOCs leakage and no leakage;
step S12: and carrying out preprocessing operations of random rotation, frame size normalization and scene segmentation on the infrared video data.
3. An adaptive robust VOCs gas leakage detection system based on extreme value theory, characterized by comprising the following interconnected units:
the data acquisition and processing unit is used for acquiring data of VOCs leakage areas and non-leakage areas in the infrared video data to carry out preprocessing operation;
the one-dimensional network structure training unit is used for extracting one-dimensional time sequence characteristic data of pixel points with a certain length from the infrared video data and training a one-dimensional convolutional neural network classifier;
a parameter determination unit for sampling the spatio-temporal features of a number of pixel points from the infrared video data multiple times, applying the one-dimensional convolutional neural network classifier, feeding the output values into the EVT algorithm within a Bayesian framework, and training the parameters α_0 and β_0 of the prior gamma distribution;
The prediction unit is used for inputting relevant parameters, adjusting a threshold value through a self-adaptive algorithm and outputting a prediction result;
the one-dimensional network structure training unit comprises the following specific processing steps:
step S21: extracting one pixel from each 8×8 or 16×16 block of the VOCs gas area of segmented scene video frames with VOCs leakage, forming a number of one-dimensional pixel time-series leakage samples (X_L, 1) of length L, where L is the number of scene frames, the label 1 indicates that the data come from a VOCs leakage area, and X_L = [x'_1 x'_2 ... x'_L]^T; meanwhile, one-dimensional pixel time-series normal samples (X_L, 0) of the same length are extracted in the same way from the gas regions of segmented scenes without VOCs leakage, where the label 0 indicates that the data come from a normal region;
step S22: first, numerically normalize the extracted pixel time series X_L so that every element satisfies 0 ≤ x'_i ≤ 1, i = 1, 2, ..., L, and then zero-mean each element x'_i; next, split the two classes of processed data, using 80% as training data and 20% as validation data;
step S23: train the one-dimensional convolutional neural network classifier with the processed training data; the input of the classifier is the pixel time series X_L and the output is D(X_L), where D(X_L) ∈ (0, 1); training stops when the classification accuracy on the validation data set exceeds 98%, yielding the one-dimensional convolutional neural network classification model;
the parameter determination unit comprises the following specific processing steps:
step S31: randomly extract one pixel from each 8×8 or 16×16 block of the dark part of the segmented scene video frame to be detected, obtaining K one-dimensional pixel time series X_L of length L; feed them into the one-dimensional convolutional neural network classifier to obtain the outputs D(X_L), where D(X_L) ∈ (0, 1);
step S32: take the K series X_L = [x'_1 x'_2 ... x'_L]^T as the data X and the K outputs D(X_L) as the corresponding label sequence y, constructing the EVT training data set T;
step S33: select the negative samples g = {x_i | y_i = 0} from the data set T; according to the right-tail probability p_u, find the upper threshold u of the negative samples g, take out the right tail t above the threshold, and update the sufficient statistics n and s of the data not marked as abnormal;
step S34: adjust the parameters α_0 and β_0 of the prior gamma distribution, expressed as

α_0 = 1 + w_0
β_0 = [formula given only as an equation image in the original]

where w_0 is the weight assigned to the sample count of the training set;
the prediction unit comprises the following specific processing steps:
step S41: apply the one-dimensional convolutional neural network classifier to the segmented scene video frames to be detected to construct a data set; in the manner of step S33, find the upper threshold of the data set and take out all samples of the right tail t1 to perform Kolmogorov-Smirnov tests to find and eliminate anomalies; the Kolmogorov-Smirnov statistic is

D_n = sup_x |F_n(x) − F(x)|

where F_n(x) = (1/n) Σ_{i=1}^{n} 1{x_i ≤ x} is the empirical distribution function of the tail samples and F is the fitted reference distribution (the component formulas appear only as equation images in the original); after D_n is calculated, the largest sample is removed and D_n is recalculated with the remaining samples; cycling continuously, the label value of the sample with the smallest D_n is finally selected and noted i*;
step S42: after removing the anomalies, take the sample corresponding to i* as the threshold, extract the right tail, and use the prior estimated during training to calculate the posterior of the whole sequence, updating the parameters to α_1 and β_1;
step S43: take the posterior calculated in S42 as the prior; set a window W centered on each sample x_j; according to the right-tail probability p_u, find the window's upper threshold u2, extract the right tail, and adjust α and β; the score y_j is then computed as

y_j = [score formula given only as an equation image in the original]

where p_f is the target error rate; cycling continuously, the label values y_j of all samples are obtained.
4. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the extremum theory based adaptive robust VOCs gas leak detection method of claim 1 or 2.
CN202111013939.8A 2021-08-31 2021-08-31 Self-adaptive robustness VOCs gas leakage detection method, system and storage medium Active CN113780138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111013939.8A CN113780138B (en) 2021-08-31 2021-08-31 Self-adaptive robustness VOCs gas leakage detection method, system and storage medium


Publications (2)

Publication Number Publication Date
CN113780138A CN113780138A (en) 2021-12-10
CN113780138B true CN113780138B (en) 2022-09-13

Family

ID=78840513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111013939.8A Active CN113780138B (en) 2021-08-31 2021-08-31 Self-adaptive robustness VOCs gas leakage detection method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113780138B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830731B (en) * 2024-01-02 2024-06-28 北京蓝耘科技股份有限公司 Multidimensional parallel scheduling method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599864B (en) * 2016-12-21 2020-01-07 中国科学院光电技术研究所 Deep face recognition method based on extreme value theory
CN110672278B (en) * 2019-10-16 2021-04-30 北京工业大学 Method for quantitatively measuring VOCs leakage of production device based on infrared imaging
CN111242404B (en) * 2019-11-12 2022-08-12 中国水利水电科学研究院 Extreme evaluation method and system for heavy rainfall induced flood incident
CN113008470B (en) * 2020-07-22 2024-02-13 威盛电子股份有限公司 Gas leakage detection device and gas leakage detection method
CN112163624A (en) * 2020-09-30 2021-01-01 上海交通大学 Data abnormity judgment method and system based on deep learning and extreme value theory

Also Published As

Publication number Publication date
CN113780138A (en) 2021-12-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant