CN117783051A - Methane gas leakage detection method based on multi-sensor data fusion - Google Patents

Methane gas leakage detection method based on multi-sensor data fusion Download PDF

Info

Publication number
CN117783051A
CN117783051A CN202410218143.3A CN202410218143A CN117783051A CN 117783051 A CN117783051 A CN 117783051A CN 202410218143 A CN202410218143 A CN 202410218143A CN 117783051 A CN117783051 A CN 117783051A
Authority
CN
China
Prior art keywords
image
data
fusion
frequency
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410218143.3A
Other languages
Chinese (zh)
Other versions
CN117783051B (en
Inventor
张俊杨
李强
赵世睿
李晓芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Shangzhan Information Technology Co ltd
Original Assignee
Xi'an Shangzhan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Shangzhan Information Technology Co ltd filed Critical Xi'an Shangzhan Information Technology Co ltd
Priority to CN202410218143.3A priority Critical patent/CN117783051B/en
Publication of CN117783051A publication Critical patent/CN117783051A/en
Application granted granted Critical
Publication of CN117783051B publication Critical patent/CN117783051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention discloses a methane gas leakage detection method based on multi-sensor data fusion, which is characterized in that various information of different dimensionalities, different modes and different expression forms such as laser methane detection sensor data, visible light video monitoring data, infrared temperature measurement sensor data and infrared video monitoring data are subjected to centralized fusion processing and analysis, so that the ambiguity of the information is reduced, and the reliability and stability of a system are enhanced.

Description

Methane gas leakage detection method based on multi-sensor data fusion
Technical Field
The invention belongs to the technical field of gas leakage detection, and particularly relates to a methane gas leakage detection method based on multi-sensor data fusion.
Background
The existing methane gas leakage detection technology mainly utilizes physical and chemical properties of leakage gas and various signals generated during leakage, and adopts means such as catalytic combustion, electrochemistry, semiconductor type, infrared absorption spectrum, laser spectrum absorption, pressure monitoring and the like to realize leakage detection.
The existing methane gas leakage detection system basically only adopts a gas sensor to detect gas leakage no matter adopting a fixed point type, a correlation type or a cloud platform type, and the problems are gradually highlighted in an actual methane gas detection scene due to limited information obtained by data acquisition and influence of quality and performance of the sensor based on the limitations of factors such as a sensor self detection principle and a gas sensitive material.
Therefore, the research and development of the methane gas leakage detection method with multiple sensors has great market prospect.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a methane gas leakage detection method based on multi-sensor data fusion.
In order to solve the technical problems, the technical method of the invention comprises the following steps: a methane gas leakage detection method based on multi-sensor data fusion comprises the following steps:
step 1: data acquisition of a region to be measured; collecting methane concentration data of a region to be detected through a laser methane detection sensor, collecting video data of the region to be detected through visible light video monitoring, collecting temperature data of the region to be detected through an infrared temperature measurement sensor, and collecting infrared video data of the region to be detected through infrared video monitoring;
step 2: all the data acquired in the step 1 are transmitted into a fusion processing center for data fusion analysis;
step 2-1: carrying out fusion processing on video data of visible light video monitoring and infrared video data of infrared video monitoring to obtain a fusion image, wherein the fusion image comprises scene detail information in the visible light image and target feature information in the infrared image, and reflects leakage conditions of methane gas, wherein the leakage conditions comprise leakage points, leakage gas contours and leakage gas drifting tracks;
Step 2-2: the data fused in the step 2-1 is used as data together with methane concentration data of a laser methane detection sensor and temperature data of an infrared temperature measurement sensor to be input, normalization processing is carried out, an RBM (radial basis function) network is used for training, a feature vector is output by the RBM network, a BP (back propagation) neural network receives the feature vector output by the RBM network, and the BP neural network realizes a classifier function to obtain a final judgment result;
step 3: and outputting a judging result.
Preferably, the collecting methane concentration data of the area to be measured by the laser methane detection sensor in the step 1 specifically includes: by using the TDLAS technology and utilizing the narrow bandwidth characteristic of the tunable semiconductor laser, the specific absorption spectrum line of the measured gas in the region to be measured is scanned, the characteristic parameters of the substance to be measured are calculated through the absorption spectrum of gas molecules, and the concentration data of the measured gas are collected.
Preferably, the step 2-1 includes the steps of:
step S101: NSST transformation and decomposition are respectively carried out on video data of visible light video monitoring and infrared video data of infrared video monitoring, and a low-frequency sub-band image and a high-frequency sub-band image are respectively obtained;
step S102: performing fusion processing on the low-frequency subband image and the high-frequency subband image;
Step S103: then performing NSST inverse transformation;
step S104: and obtaining a fusion image.
Preferably, the step S101 specifically includes: convolving all filters required for generating each low-frequency or high-frequency image by scale decomposition of a source image, wherein the source image comprises video data monitored by visible light and infrared video data monitored by infrared video, directly filtering the source image by using the filter obtained by convolution, designing a low-pass filter and a high-pass filter, and sampling the low-pass filter and the high-pass filter according to the scale decomposition layer number of the source image to obtain a non-downsampling pyramid filter bank;
the source image comprises a source visible light image A and a source infrared image B, the source visible light image A and the source infrared image B are subjected to three-layer scale decomposition to generate low-frequency subband images A0 and B0, high-frequency subband images A1, A2 and A3 and B1, B2 and B3, and the scale filter of the first layer is thatWherein->=0 denotes a low pass filter, +.>=1 denotes a high-pass filter, and the filters of the second layer and the third layer are +.>And->Wherein->For a 2-order identity matrix, convoluting different low-pass and high-pass filter groups to form scale decomposition filters under different scales, convoluting a source image with the scale decomposition filters to obtain low-frequency and a series of high-frequency subband images, and convoluting the scale filters- >、/>And->Convolution forms the final scale-decomposition filter +.>The following formula is shown:
is directly usedThe source image is filtered to obtain a low frequency subband image and a high frequency subband image.
Preferably, the low-frequency subband images A0 and B0 generated by the decomposition are fused, and the fusion processing method comprises the following steps:
let the source visible light image be A and the source infrared image be B, the low frequency sub-band images of A and B be A0 and B0 respectively, the fusion image be F, NSST decomposition is carried out on A and B to obtain NSST coefficients of A0 and B0 respectivelyAnd,/>wherein->And->Low frequency coefficients of a and B, respectively, +.>Andrespectively represent A and B in the scale +.>And direction->The lower high frequency coefficient;
the local standard deviation of the a and B low frequency subbands is then calculated as follows:
wherein:
MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
(/>) And->(/>) The average gray level of all pixels in the local area is represented by the following calculation formula:
second, normalizeAnd->The formula is as follows:
finally, defining the matching degree of the normalized local standard deviation of the low-frequency sub-bands of the two source imagesAs shown in the following formula, the matching threshold value +.>(0.5</><1);
If it is The fusion result is shown in the following formula:
if it isAnd the fusion result takes the coefficients of the visible light image as shown in the following formula:
preferably, the high-frequency subband images A1, A2, A3, B1, B2 and B3 generated by the decomposition are fused, and the high-frequency coefficients are divided into the highest-frequency coefficients and other high-frequency coefficients;
the highest frequency coefficient fusion processing method comprises the following steps:
the source visible light image is A and the source infrared image is B, the high-frequency sub-bands generated by A decomposition are A1, A2 and A3, the high-frequency sub-bands generated by B decomposition are B1, B2 and B3, wherein A1, A2 and A3 are divided into the highest frequency coefficient and other high frequency coefficients, B1, B2 and B3 are divided into the highest frequency coefficient and other high frequency coefficients, the highest frequency coefficients in A1, A2 and A3 are fused with the highest frequency coefficients in B1, B2 and B3, the other high frequency coefficients in A1, A2 and A3 are fused with the other high frequency coefficients in B1, B2 and B3 to generate a fused image F, and NSST coefficients obtained by NSST decomposition of A and B are respectivelyAnd,/>wherein->And->Representing the low frequency coefficients of a and B respectively,and->Respectively represent A and B in the scale +.>And direction->The lower high frequency coefficient;
the highest frequency coefficient fusion adopts a fusion method with large regional energy, and the highest frequency coefficients of A and B Andthe area energy of (2) is shown in the following formula:
wherein:
、/>represents A and B at->Layer, in->Local area energy being the center;
MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Is a weight coefficient;
fusion method with large regional energy consumption is adopted, and fusion image F is at the highest layerFusion result of->The following formula is shown:
other high-frequency coefficients are fused by 2 methods, wherein the 1 st method adopts a method combining standard deviation and gradient characteristics, and the rule is as follows:
the local standard deviation of the a and B high frequency subbands is calculated as follows:
wherein:
MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
(/>) And->(/>) The average gray level of all pixels in the local area is represented by the following calculation formula:
the high frequency sub-bands corresponding to A and B are calculated by the following formulaAnd->Gradient values of (2);
wherein:
,/>respectively represent pixels->At->,/>First order difference in direction;
and->Respectively represent A and B in the pixel +.>Gradient values at;
defining an edge matching function of a pixel point for A and B respectivelyAnd->The larger the function value, the greater the likelihood that the point is at the edge of the image; the matching function is as follows:
Wherein:
(/>) And->Respectively represent A and B in the scale +.>And direction->A matching function below;
and the high-frequency coefficients are fused in a weighting mode, and the following formula is adopted:
wherein:
and->The matching functions A and B are indicated respectively +.>(/>) And->Average value of (2);
the 2 nd method of the fusion of other high-frequency coefficients adopts the method of combining Laplace energy and gradient characteristics for fusion, and the method is as follows:
computing high frequency sub-bands corresponding to A and BAnd->The formula is as follows:
wherein:
,/>respectively represent pixels->At->,/>First order difference in direction;
and->Respectively represent A and B in the pixel +.>Gradient values at;
the Laplace operator ML and the Laplace energy and the SML are defined as the following formulas, wherein the SML represents the marginalization degree of the image, can reflect the gray level change of the pixel, and further reflect the space detail, and the larger the SML value at a certain point is, the richer the detail information contained at the point is, and the greater the possibility that the pixel is positioned in an edge contour area is;
wherein:
to take the following measuresA local area as the center, the size is
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
indicating that the high frequency subband picture is located +.>A gray value of a pixel;
The following formula is used for fusion in combination with improving the Laplace energy sum and gradient characteristics:
preferably, the step 2-2 includes the steps of:
step S201: the data fused in the step 2-1, methane concentration data of the laser methane detection sensor and temperature data of the infrared temperature measurement sensor are used together as data to be input;
step S202: after loading the data, carrying out normalization processing on the sample data;
step S203: training the RBM network by using a CD-K method after data normalization, and outputting a feature vector by the RBM network;
step S204: the BP neural network receives the feature vector output by the RBM as an input vector, and achieves the function of the classifier to obtain a final judgment result.
Preferably, the normalization process in step S202 uses a normalization function mapmin max, and the mathematical expression is:
wherein:
inputting a vector for a sample;
is the minimum in the sample;
is the maximum value in the sample;
is the converted output vector.
Preferably, the training of the RBM network using the CD-K method in step S203 is specifically: initializing network parameters to enable,/>Is a weight matrix>Bias for visual layer->For hiding layer bias, setting parameters such as maximum iteration number maxepoch, learning rate lr, batch size Batchsize and the like, and continuously adjusting +. >,/>And gradually reducing the reconstruction error until the maximum training times are reached, and outputting the reconstruction data from the hidden layer.
Preferably, the step S204 specifically includes: firstly, setting training parameters of a BP neural network, wherein the training parameters comprise a learning rate lr and a maximum learning frequency maxepoch; training to obtain a BP network net by using a train function, and simulating the network net and training input by using a sim function; and finally, inversely normalizing the output data to obtain output data including whether leakage exists, gas concentration, time and position information, namely judging results.
Compared with the prior art, the invention has the advantages that:
(1) The invention constructs a multi-sensor data fusion processing model, and performs centralized fusion processing and analysis on various information with different dimensions, different modes and different expression forms such as laser methane detection sensor data, visible light video monitoring data, infrared temperature measurement sensor data, infrared video monitoring data and the like, so that the ambiguity of the information is reduced, and the reliability and stability of the system are enhanced;
(2) The invention combines the advantages and disadvantages of the visible light video monitoring data and the infrared video monitoring data, and combines the two together, so that the information of the image can be enriched, the resolution of the image can be improved, the spectrum information of the image can be enhanced, the image can be comprehensively and clearly expressed, and the defect of a single sensor on the scene expression surface can be overcome; in the fusion processing link, the invention adopts a rapid non-downsampling shear wave transformation fusion algorithm to fuse the visible light video monitoring image and the infrared video monitoring image, so that the scene detail information in the visible light image and the target feature information in the infrared image are included in one monitoring picture, in addition, the decomposition and reconstruction efficiency of the image is further improved by improving the fusion algorithm, and the contour texture information of the image is more obvious;
(3) In the fusion processing algorithm, the RBM and the BP neural network are combined, the DBN with complementary advantages is used for carrying out multi-sensor data fusion processing, the image obtained by fusing the visible light image and the infrared image, the laser methane detection sensor data and the infrared temperature measurement sensor data are used as input data, the input data enter a visual layer of the RBM, the RBM carries out preprocessing and pre-training on the data, the reconstruction data are output from the hidden of the RBM and enter an inputtable layer of the BP neural network, the hidden layer of the BP neural network is used for training, and finally a judgment result is output from an output layer of the BP neural network, so that the network training speed and the training effect are further improved, and the accuracy of output result classification is improved;
(4) According to the invention, the environment is measured through a plurality of different sensors, different multi-sensor data resources are fully utilized, data fusion processing is carried out, the information ambiguity is reduced, and the reliability and stability of the system are enhanced; the invention applies the data fusion technology to the detection of toxic and harmful gases such as methane, eliminates the problem of cross interference between gas sensors, and realizes the accurate measurement of the concentration value of methane gas under complex conditions.
Drawings
FIG. 1 is a flow chart of a methane gas leakage detection method based on multi-sensor data fusion;
FIG. 2 is a flow chart of a process of integrating video data of visible light video monitoring and infrared video data of infrared video monitoring based on a methane gas leakage detection method based on multi-sensor data integration;
fig. 3 is a flow chart of a multi-sensor data fusion process of the DBN network based on the multi-sensor data fusion methane gas leakage detection method.
Detailed Description
The following describes specific embodiments of the present invention with reference to examples:
it should be understood that the drawings described in the specification are for understanding and reading only the principles, features and content disclosed in the specification, and are not intended to limit the invention to the specific embodiments and should not be considered as limiting the invention to the full scope of the invention.
As shown in fig. 1, the invention discloses a methane gas leakage detection method based on multi-sensor data fusion, which comprises the following steps:
Step 1: data acquisition of a region to be measured; collecting methane concentration data of a region to be detected through a laser methane detection sensor, collecting video data of the region to be detected through visible light video monitoring, collecting temperature data of the region to be detected through an infrared temperature measurement sensor, and collecting infrared video data of the region to be detected through infrared video monitoring;
step 2: all the data acquired in the step 1 are transmitted into a fusion processing center for data fusion analysis;
step 2-1: carrying out fusion processing on video data of visible light video monitoring and infrared video data of infrared video monitoring to obtain a fusion image, wherein the fusion image comprises scene detail information in the visible light image and target feature information in the infrared image, and reflects leakage conditions of methane gas, wherein the leakage conditions comprise leakage points, leakage gas contours and leakage gas drifting tracks;
step 2-2: the data fused in the step 2-1 is used as data together with methane concentration data of a laser methane detection sensor and temperature data of an infrared temperature measurement sensor to be input, normalization processing is carried out, an RBM (radial basis function) network is used for training, a feature vector is output by the RBM network, a BP (back propagation) neural network receives the feature vector output by the RBM network, and the BP neural network realizes a classifier function to obtain a final judgment result;
Step 3: and outputting a judging result.
Preferably, the collecting methane concentration data of the area to be measured by the laser methane detection sensor in the step 1 specifically includes: by using the TDLAS technology and utilizing the narrow bandwidth characteristic of the tunable semiconductor laser, the specific absorption spectrum line of the measured gas in the region to be measured is scanned, the characteristic parameters of the substance to be measured are calculated through the absorption spectrum of gas molecules, and the concentration data of the measured gas are collected.
Preferably, the step 2-1 includes the steps of:
step S101: NSST transformation and decomposition are respectively carried out on video data of visible light video monitoring and infrared video data of infrared video monitoring, and a low-frequency sub-band image and a high-frequency sub-band image are respectively obtained;
step S102: performing fusion processing on the low-frequency subband image and the high-frequency subband image;
step S103: then performing NSST inverse transformation;
step S104: and obtaining a fusion image.
Preferably, the step S101 specifically includes: convolving all filters required for generating each low-frequency or high-frequency image by scale decomposition of a source image, wherein the source image comprises video data monitored by visible light and infrared video data monitored by infrared video, directly filtering the source image by using the filter obtained by convolution, designing a low-pass filter and a high-pass filter, and sampling the low-pass filter and the high-pass filter according to the scale decomposition layer number of the source image to obtain a non-downsampling pyramid filter bank;
The source image comprises a source visible light image A and a source infrared image B, the source visible light image A and the source infrared image B are subjected to three-layer scale decomposition to generate low-frequency subband images A0 and B0, high-frequency subband images A1, A2 and A3 and B1, B2 and B3, and the scale filter of the first layer is thatWherein->=0 denotes a low pass filter, +.>=1 denotes a high-pass filter, and the filters of the second layer and the third layer are +.>And->Wherein->For a 2-order identity matrix, convoluting different low-pass and high-pass filter groups to form scale decomposition filters under different scales, convoluting a source image with the scale decomposition filters to obtain low-frequency and a series of high-frequency subband images, and convoluting the scale filters->、/>And->Convolution forms the final scale-decomposition filter +.>The following formula is shown:
is directly usedThe source image is filtered to obtain a low frequency subband image and a high frequency subband image.
Preferably, the low-frequency subband images A0 and B0 generated by the decomposition are fused, and the fusion processing method comprises the following steps:
let the source visible light image be A and the source infrared image be B, the low frequency sub-band images of A and B be A0 and B0 respectively, the fusion image be F, NSST decomposition is carried out on A and B to obtain NSST coefficients of A0 and B0 respectivelyAnd,/>wherein- >And->Low frequency coefficients of a and B, respectively, +.>Andrespectively represent A and B in the scale +.>And direction->The lower high frequency coefficient;
the local standard deviation of the a and B low frequency subbands is then calculated as follows:
wherein:
MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
(/>) And->(/>) The average gray level of all pixels in the local area is represented by the following calculation formula:
second, normalizeAnd->The formula is as follows:
finally, defining the matching degree of the normalized local standard deviation of the low-frequency sub-bands of the two source imagesAs shown in the following formula, the matching threshold value +.>(0.5</><1);
If it isThe fusion result is shown in the following formula:
if it isAnd the fusion result takes the coefficients of the visible light image as shown in the following formula:
preferably, the high-frequency subband images A1, A2, A3, B1, B2 and B3 generated by the decomposition are fused, and the high-frequency coefficients are divided into the highest-frequency coefficients and other high-frequency coefficients;
the highest frequency coefficient fusion processing method comprises the following steps:
let the source visible light image be A and the source infrared image beB, the high-frequency sub-bands generated by A decomposition are A1, A2 and A3, the high-frequency sub-bands generated by B decomposition are B1, B2 and B3, wherein A1, A2 and A3 are divided into the highest frequency coefficient and other high frequency coefficients, B1, B2 and B3 are divided into the highest frequency coefficient and other high frequency coefficients, the highest frequency coefficients in A1, A2 and A3 are fused with the highest frequency coefficients in B1, B2 and B3, the other high frequency coefficients in A1, A2 and A3 are fused with the other high frequency coefficients in B1, B2 and B3 to generate a fused image F, and NSST coefficients obtained by NSST decomposition of A and B are respectively And,/>wherein->And->Low frequency coefficients of a and B, respectively, +.>Andrespectively represent A and B in the scale +.>And direction->The lower high frequency coefficient;
the highest frequency coefficient fusion adopts a fusion method with large regional energy, and the highest frequency coefficients of A and BAndthe area energy of (2) is shown in the following formula:
wherein:
、/>represents A and B at->Layer, in->Local area energy being the center;
MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Is a weight coefficient;
fusion method with large regional energy consumption is adopted, and fusion image F is at the highest layerFusion result of->The following formula is shown:
other high-frequency coefficients are fused by 2 methods, wherein the 1 st method adopts a method combining standard deviation and gradient characteristics, and the rule is as follows:
the local standard deviation of the a and B high frequency subbands is calculated as follows:
wherein:
MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
(/>) And->(/>) The average gray level of all pixels in the local area is represented by the following calculation formula: />
The high frequency sub-bands corresponding to A and B are calculated by the following formulaAnd->Gradient values of (2);
wherein:
,/>respectively represent pixels->At->,/>First order difference in direction;
And->Respectively represent A and B in the pixel +.>Gradient values at;
defining an edge matching function of a pixel point for A and B respectivelyAnd->The larger the function value, the greater the likelihood that the point is at the edge of the image; the matching function is as follows:
wherein:
(/>) And->Respectively represent A and B in the scale +.>And direction->A matching function below;
and the high-frequency coefficients are fused in a weighting mode, and the following formula is adopted:
wherein:
and->The matching functions A and B are indicated respectively +.>(/>) And->Average value of (2);
the 2 nd method of the fusion of other high-frequency coefficients adopts the method of combining Laplace energy and gradient characteristics for fusion, and the method is as follows:
computing high frequency sub-bands corresponding to A and BAnd->The formula is as follows: />
Wherein:
,/>respectively represent pixels->At->,/>First order difference in direction;
and->Respectively represent A and B in the pixel +.>Gradient values at;
the Laplace operator ML and the Laplace energy and the SML are defined as the following formulas, wherein the SML represents the marginalization degree of the image, can reflect the gray level change of the pixel, and further reflect the space detail, and the larger the SML value at a certain point is, the richer the detail information contained at the point is, and the greater the possibility that the pixel is positioned in an edge contour area is;
Wherein:
to take the following measuresA local area as the center, the size is
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
indicating that the high frequency subband picture is located +.>A gray value of a pixel;
the following formula is used for fusion in combination with improving the Laplace energy sum and gradient characteristics:
preferably, the step 2-2 includes the steps of:
step S201: the data fused in the step 2-1, methane concentration data of the laser methane detection sensor and temperature data of the infrared temperature measurement sensor are used together as data to be input;
step S202: after loading the data, carrying out normalization processing on the sample data;
step S203: training the RBM network by using a CD-K method after data normalization, and outputting a feature vector by the RBM network;
step S204: the BP neural network receives the feature vector output by the RBM as an input vector, and achieves the function of the classifier to obtain a final judgment result.
Preferably, the normalization process in step S202 uses a normalization function mapmin max, and the mathematical expression is:
wherein:
inputting a vector for a sample;
is the minimum in the sample;
is the maximum value in the sample;
is the converted output vector.
Preferably, the training of the RBM network using the CD-K method in step S203 is specifically: initializing network parameters to enable,/>Is a weight matrix>Bias for visual layer->For hiding layer bias, setting parameters such as maximum iteration number maxepoch, learning rate lr, batch size Batchsize and the like, and continuously adjusting +.>,/>And gradually reducing the reconstruction error until the maximum training times are reached, and outputting the reconstruction data from the hidden layer.
Preferably, the step S204 specifically includes: firstly, setting training parameters of a BP neural network, wherein the training parameters comprise a learning rate lr and a maximum learning frequency maxepoch; training to obtain a BP network net by using a train function, and simulating the network net and training input by using a sim function; and finally, inversely normalizing the output data to obtain output data including whether leakage exists, gas concentration, time and position information, namely judging results.
Example 1
In this embodiment, a multi-sensor data fusion processing model is constructed as shown in fig. 1, and includes:
the laser methane detection sensor is used for scanning a specific absorption spectrum line of the detected gas by utilizing the narrow bandwidth characteristic of the tunable semiconductor laser and adopting the TDLAS technology, and the characteristic parameter of the substance to be detected is calculated by the absorption spectrum of gas molecules so as to acquire methane concentration data.
Video data of a region to be detected (gas leakage region) is collected through visible light video monitoring.
And acquiring temperature data of the region to be detected by an infrared temperature measuring sensor.
And acquiring infrared video data of the region to be detected through infrared video monitoring.
The multi-sensor data fusion processing model adopts a centralized structure, all the data acquired by each sensor are transmitted into a fusion processing center, and after the data fusion analysis is carried out by the fusion processing center, the result is output.
Example 2
In this embodiment, in the data fusion processing, the present embodiment performs fusion processing on a visible light video monitoring image (video data of visible light video monitoring) and an infrared video monitoring image (infrared video data of infrared video monitoring) to obtain a fusion image, where the fusion image includes scene detail information in the visible light image and target feature information in the infrared image, the fusion image reflects a leakage condition of methane gas, the leakage condition includes a leakage point, a leakage gas profile, and a leakage gas drift track, and the video data of the visible light video monitoring and the infrared video data of the infrared video monitoring are source images, as shown in fig. 2, and specific steps are as follows:
In step S101, the improved fast NSST decomposition method provided in this embodiment convolves all filters required for generating each low-frequency or high-frequency image by scale decomposition of the source image, and then directly filters the source image by using the filter obtained by convolution, thereby avoiding an iterative filtering process in the scale decomposition and improving the decomposition speed of the image.
Designing a low-pass filter and a high-pass filter, and sampling the low-pass filter and the high-pass filter according to the scale decomposition layer number of the source image to obtain a non-downsampling pyramid filter bank;
the source image comprises a source visible light image A and a source infrared image B, the source visible light image A and the source infrared image B are subjected to three-layer scale decomposition to generate low-frequency subband images A0 and B0, high-frequency subband images A1, A2 and A3 and B1, B2 and B3, and the scale filter of the first layer is thatWherein->=0 denotes a low pass filter, +.>=1 denotes a high-pass filter, and the filters of the second layer and the third layer are +.>And->Wherein->Is a 2-order identity matrix.
The improved fast algorithm adopted in the embodiment considers that the convolution of the filter is more efficient than the sequential use of the filter, and the scale filter is used for the combination of the source image and the scale decomposition filter 、/>And->Convolution forms the final scale-decomposition filter +.>As shown in formula (1). Direct use +.>Filtering the source image to obtain a low-frequency sub-band image and a high-frequency sub-band image.
(1)
When NSST direction decomposition is carried out, firstly, pseudo polarization grids are mapped to Cartesian coordinates, and then, a high-frequency image and a shear wave filter are subjected to convolution operation to obtain direction sub-bands. To further avoid intermediate results of the scale decomposition,i.e. high frequency subband imagesDirect pair->Fourier transform is performed and the next calculation is performed in an integrated form.
In step S102, the low-frequency subband images A0 and B0 generated by the decomposition are fused, and the fusion processing method is as follows:
let the source images be A and B respectively, the fusion image be F, NSST coefficients obtained by NSST decomposition of the source images A and B be respectivelyAnd->,/>Wherein->Representing low frequency coefficients, < >>Expressed in the scale +.>And direction->The following high frequency coefficients.
The low frequency subband image contains most of gray scale and contour information of gray scale distribution, main gray scale contrast, main observation object and the like of the source image. The fusion result of the low frequency sub-bands directly relates to the quality of the image fusion. In the embodiment, a region characteristic operator based on standard deviation is adopted to fuse the low-frequency sub-bands.
First, the local standard deviation of the low frequency subbands of the source images a and B is calculated as shown in formula (2).
(2)
Wherein,MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
and->The average gray scale of all pixels in the partial region is expressed as shown in formula (3).
(3)
Next, as shown in the following equation (4), normalizationAnd->。/>
(4)
Finally, defining the matching degree of the normalized local standard deviation of the low-frequency sub-bands of the two imagesAs shown in the formula (5), a matching threshold T (0.5<T<1)。
(5)
If it isThe two images are shown to be very different in standard deviation, and in the low-frequency image, the point may be located in a contour or texture area, and the fusion result is shown as a formula (6).
(6)
If it isThe standard deviation of the low-frequency sub-bands of the two images at the same position is not greatly different, and in order to enable the fusion result to contain scene information of the visible light images as much as possible, the fusion result of the invention takes coefficients of the visible light images as shown in a formula (7).
(7)
Through the fusion rule, when the standard deviation matching degree of the low-frequency coefficients of two images at a certain pixel point is larger, the pixel point is shown to be positioned in an edge area in one low-frequency sub-band image, and then a pixel value with a large standard deviation is given to a fusion result by adopting a method of taking the large standard deviation; when the standard deviation matching degree at a certain pixel point is smaller than the threshold value, the standard deviation of the two images at the same pixel point is relatively close, and the two images are most likely to be located in a flat area, so that the low-frequency sub-bands of the visible light images are assigned to the fusion result in order to retain more visible light image information.
The high-frequency subband image represents the detail information such as the edge, texture and the like of the image, and the fusion coefficient directly influences the visual perception of an observation target and the background information thereof, so that the selection of a proper high-frequency coefficient fusion rule is important. In order to make the texture details in the fused image more prominent, the present embodiment divides the high frequency coefficient into the highest frequency coefficient and other high frequency coefficients, and the highest frequency coefficient fusion adopts a fusion rule with large area energy, because larger energy generally corresponds to clearer brightness change. For other high-frequency coefficients, the invention provides two high-frequency coefficient fusion rules, and the area standard deviation of the high-frequency subband image and the improved Laplace energy sum are respectively combined with gradient characteristics. By both methods, edge details are extracted from the high frequency subband image, and edges and textures of the high frequency subband image can be maintained and enhanced to some extent.
The high-frequency subband images A1, A2, A3, B1, B2 and B3 generated by the decomposition are fused, and the fusion processing method comprises the following steps:
let the source images be A and B respectively, the highest frequency coefficients of the source images A and BAnd->The area energy of (2) is shown in formula (8).
(8)/>
Is shown in the firstA layer ofLocal area energy being the center; MNRepresenting the number of local area pixels, the size of which is selectedOr (b)Is a weight coefficient. And a fusion method with large regional energy is adopted, and the fusion result of the highest layer L is shown in a formula (9).
(9)
Representing the fusion result of the highest layer L;
other high frequency coefficients are fused in 2 ways.
1 st method: the method adopting the combination of standard deviation and gradient characteristics comprises the following steps:
other high frequency coefficients for source images A and BAnd->And fusion is carried out by adopting a method combining standard deviation and gradient characteristics.
The local standard deviation of the image reflects the discrete condition of the pixel value in the area, the larger the standard deviation is, the more the gray level distribution is scattered, the larger the fluctuation of the gray level is, and the detail information of the image can be reflected to a certain extent. The local standard deviation of the high frequency sub-band of the calculated image is shown in formula (10).
(10)
Wherein,MNrepresenting the number of local area pixels, the size of which is selectedOr (b)
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
and->Representing the average gray level of all pixels in the local area.
The high frequency subband image has a larger gradient value at a certain point, which indicates that the detail information at the point is rich, and the high frequency subband image is likely to be in an edge contour area. Calculating corresponding high frequency direction sub-bands by formula (11) And->Is a gradient map of (3).
(11)
Wherein,,/>respectively representing pixels +.>First order difference in x, y direction, +.>Representing pixel +.>Ladder at the positionAnd (5) a degree value.
In order to better determine whether a certain point in an image is located at an edge in the image, the present embodiment defines an edge matching function of a certain pixel pointAs shown in equation (12), the larger the function value, the greater the likelihood that the point is at the edge of the image. />
(12)
The high frequency coefficients are fused in a weighted manner as shown in equation (13).
(13)
The corresponding weights are determined by an S function, as shown in equation (14), for the background portion of the high frequency subband imageValue and average->The values are not greatly different, so that the weight of the background part is close to 0.5, i.e. the fused high-frequency coefficient can be approximately regarded asAnd->Average of (d). Whereas the edge pixels in the high frequency subband image correspond +.>Value and average->The larger the deviation is, the closer the corresponding weighting coefficient is to 1, and the fusion result fully retains the high-frequency information at the position, so that the edge information in the image is effectively highlighted.
(14)
Wherein:
and->The matching functions A and B are indicated respectively +.>(/>) And->Average value of (2);
the 2 nd method adopts the Laplace energy and the method combined with gradient characteristics for fusion.
Other high frequency coefficients for source images A and BAnd->The fusion is performed using a combination of Laplace energy and gradient properties.
Calculating corresponding high frequency direction sub-bands by equation (15)And->Is a gradient map of (3).
(15)
Wherein,,/>respectively representing pixels +.>The first difference in the x, y directions,representing pixel +.>Gradient values at.
The Modified Laplace energy Sum (SML) of the image represents the marginalization degree of the image, and can reflect the gray level change of the pixels, and further reflect the space detail. The larger the SML value at a point, the more rich the detail information contained at that point, and the greater the likelihood that the pixel will be in the edge contour region.Indicating that the high frequency subband picture is located +.>Gray value of pixel, improved Laplacian thereof(ML)Laplace energy sum [ ]SML)The definition is shown in equation (16).
(16)
MNTo take the following measuresA local area centered on, typically of the size
When the high frequency subband image has a large somewhereSMLWhen the value is the value, the gray level change of the position is severe, and if the position has a larger gradient value,indicating that the location detail information is rich, then the pixel is likely to be in the edge contour region. To highlight edge detail information in the fusion result, a fusion rule as shown in formula (17) is adopted in combination with improving the Laplace energy and the gradient characteristics.
(17)
In the middle ofSatisfy formula (18),>as opposed to the meaning represented by equation 14.
(18)
Representing pixel +.>Gradient values at;
SML A representing the marginalization degree of a certain point in the image A;
SML B indicating that a point in image B is at the marginalization level of the image.
Step S103, performing NSST inverse transformation on the processed image to generate a fused image.
Example 3
As shown in fig. 3, the image after fusing the video data of the video monitoring of the visible light and the infrared video data of the infrared video monitoring is input together with the laser methane detection sensor data and the infrared temperature measurement sensor data, and the data fusion processing is performed, specifically comprising the following steps:
step S201: network parameters are initialized, and then the methane concentration data of the laser methane detection sensor and the temperature data of the infrared temperature measurement sensor which are fused with the images are input together as data.
Step S202: carrying out data normalization; after the data are loaded, the normalization processing of the data can reduce the adverse effect of dimension on the test result and simultaneously facilitate the operation of the RBM algorithm at the place where the sample data are normalized. The present embodiment uses a normalization function mapmamax, which is a linear function variation, with a mathematical expression of (19).
(19)
Wherein,input vector for sample, ++>For the minimum in the sample, +.>Y is the converted output vector, which is the maximum value in the samples.
Step S203: training the RBM network by using a CD-K method after data normalization, and outputting a feature vector by the RBM network; initializing network parameters to enable,/>Is a weight matrix>Bias for visual layer->For hiding layer bias, setting parameters such as maximum iteration number maxepoch, learning rate lr, batch size Batchsize and the like, and continuously adjusting +.>,/>,/>And gradually reducing the reconstruction error until the maximum training times are reached, and outputting the reconstruction data from the hidden layer.
The input of the embodiment involves three data, namely methane gas concentration, temperature data and fusion image data, so that the sample of the input layer is a 3-dimensional vector, and the hidden layer is firstly assumed to beDimension vector, hence weight matrix of RBMIs one ofVisual layer bias of matrix of (2)Is thatIs a matrix of hidden layer biasIs thatIs a matrix of (a) in the matrix.
In step S204, the BP neural network receives the reconstructed data output by the RBM as an input vector, and implements the function of the classifier to obtain a final judgment result.
Firstly, setting training parameters of a BP neural network, wherein the training parameters comprise a learning rate lr and a maximum learning frequency maxepoch; training to obtain a BP network net by using a train function, and simulating the network net and training input by using a sim function; and finally, inversely normalizing the output data to obtain output data and outputting a result, wherein the output data comprises whether leakage exists, warning information, gas concentration, time, position and the like.
The invention adopts the DBN network (the DBN combines RBM and BP) to carry out fusion processing on the multi-sensor data, thereby reducing the ambiguity of information and enhancing the reliability, stability and gas concentration measurement accuracy of the system. In order to verify the effect of the fusion detection algorithm, the identification accuracy of the fusion processing method provided by the invention is tested, and meanwhile, the identification accuracy is compared with the identification accuracy of the BP network algorithm.
Setting the maximum training period maxepcoh=1000, the learning rate lr=0.10, the training target goal=0.02, 150 groups of sample data, randomly selecting 100 groups of the training data and 50 groups of the training data as test data, and performing 8 times of test on the method, wherein the test accuracy rate results are shown in table 1:
TABLE 1 accuracy of DBN network model identification of the present invention
Number of experiments Accuracy (%) Error Rate (%)
1 96.0 4.0
2 94.0 6.0
3 96.0 4.0
4 92.0 8.0
5 90.0 10.0
6 92.0 8.0
7 95.0 5.0
8 96.0 4.0
Average of 93.9 6.1
The accuracy of the 8 tests is averaged, and the accuracy of the DBN can reach 93.9 percent. A further set of tests was performed, this time using the 150 sets of data as training data, adding a certain noise to the 150 sets of data as test data, and 10 tests were performed, the test results being shown in table 2:
TABLE 2 recognition accuracy after adding noise to DBN network model of the present invention
Number of experiments Accuracy (%) Error Rate (%)
1 93.3 6.7
2 91.5 8.5
3 90.0 10.0
4 92.3 7.7
5 91.5 8.5
6 93.0 7.0
7 94.2 5.8
8 95.8 4.2
9 93.9 6.1
10 92.8 7.2
Average of 92.8 7.2
The average value of 10 tests is taken, and the accuracy reaches 92.8%, so that the DBN network adopted by the invention has good measurement accuracy.
To verify the superiority of the present invention, compared with the accuracy of identification using BP network algorithm, corresponding 120 sets of data were randomly selected from 190 sets of data for training data, and the remaining 70 sets of data were used as test data, and the comparison test results are shown in table 3.
Table 3 inventive DBN network and BP network contrast test
Number of experiments BP network accuracy (%) DBN network accuracy (%)
1 85.29 89.14
2 88.24 91.43
3 89.71 91.5
4 86.76 91.43
5 88.24 91.69
Average of 87.65 91.0
By comparing the test results for 5 times, the effect of the DBN is higher than the accuracy of the BP neural network which is singly used, so that the accuracy of multi-sensor data fusion gas detection by adopting the DBN is better than that of the BP neural network which singly uses a single sensor. Compared with the method for judging methane gas leakage by using a single BP neural network, the method for detecting methane gas leakage by using the multi-sensor data fusion method has the advantages of higher detection accuracy and better system reliability.
The methane gas leakage detection method based on multi-sensor data fusion improves the recognition accuracy and recognition efficiency, is favorable for realizing accurate detection in complex scenes, is suitable for practical application, and has important significance for safety detection of industrial gases such as methane and the like.
The principle of the invention is as follows:
as shown in fig. 1-3, the invention provides a methane gas leakage detection method based on multi-sensor data fusion, which is based on a data fusion technology, builds a multi-sensor data fusion processing model, and performs centralized fusion processing and analysis on various information with different dimensions, different modes and different expression forms such as visible light video monitoring data, laser methane detection sensor data, infrared temperature measurement sensor data, infrared video monitoring data and the like, so as to eliminate possible contradiction and redundancy information complementation between sensors, and realize more accurate description on various information in the environment; the invention carries out image fusion processing on the visible light image and the infrared image and outputs a fusion image; and then carrying out centralized fusion processing and analysis on the fusion image, the laser methane detection sensing data and the infrared temperature measurement data, and finally outputting information such as whether leakage exists, methane concentration data and the like. The invention utilizes the advantage of the combined action of a plurality of sensor data, improves the stability, the effectiveness and the credibility of the whole system, and reduces the limitation of single sensor data.
The invention constructs a multi-sensor data fusion processing model, and performs centralized fusion processing and analysis on various information with different dimensions, different modes and different expression forms such as laser methane detection sensor data, visible light video monitoring data, infrared temperature measurement sensor data, infrared video monitoring data and the like, so that the ambiguity of the information is reduced, and the reliability and stability of the system are enhanced.
The invention combines the advantages and disadvantages of the visible light video monitoring data and the infrared video monitoring data, and combines the two together, so that the information of the image can be enriched, the resolution of the image can be improved, the spectrum information of the image can be enhanced, the image can be comprehensively and clearly expressed, and the defect of a single sensor on the scene expression surface can be overcome; in the fusion processing link, the invention adopts the rapid non-downsampling shear wave transformation fusion algorithm to fuse the visible light video monitoring image and the infrared video monitoring image, so that the scene detail information in the visible light image and the target feature information in the infrared image are included in one monitoring picture, in addition, the decomposition and reconstruction efficiency of the image is further improved by improving the fusion algorithm, and the contour texture information of the image is more obvious.
In the fusion processing algorithm, the RBM and BP neural network are combined, the DBN with complementary advantages is used for carrying out multi-sensor data fusion processing, the image obtained by fusing the visible light image and the infrared image, the laser methane detection sensor data and the infrared temperature measurement sensor data are used as input data, the input data enter the visible layer of the RBM, the RBM carries out preprocessing and pre-training on the data, the reconstruction data are output from the hidden of the RBM and enter the inputtable layer of the BP neural network, the hidden layer of the BP neural network is used for training, finally, the judgment result is output from the output layer of the BP neural network, the network training speed and the training effect are further improved, and the accuracy of the classification of the output result is improved.
The measurement of a single sensor has the defect of incomplete measurement, and the multi-sensor data fusion can overcome the defect and reduce the performance requirement of the single sensor; according to the invention, the environment is measured through a plurality of different sensors, different multi-sensor data resources are fully utilized, data fusion processing is carried out, the information ambiguity is reduced, and the reliability and stability of the system are enhanced; the invention applies the data fusion technology to the detection of toxic and harmful gases such as methane, eliminates the problem of cross interference between gas sensors, and realizes the accurate measurement of the concentration value of methane gas under complex conditions.
The invention adopts the DBN network to carry out fusion processing on the multi-sensor data, reduces the ambiguity of information and enhances the reliability, stability and gas concentration measurement accuracy of the system.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes may be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Many other changes and modifications may be made without departing from the spirit and scope of the invention. It is to be understood that the invention is not to be limited to the specific embodiments, but only by the scope of the appended claims.

Claims (10)

1. The methane gas leakage detection method based on multi-sensor data fusion is characterized by comprising the following steps of:
step 1: data acquisition of a region to be measured; collecting methane concentration data of a region to be detected through a laser methane detection sensor, collecting video data of the region to be detected through visible light video monitoring, collecting temperature data of the region to be detected through an infrared temperature measurement sensor, and collecting infrared video data of the region to be detected through infrared video monitoring;
step 2: all the data acquired in the step 1 are transmitted into a fusion processing center for data fusion analysis;
step 2-1: carrying out fusion processing on video data of visible light video monitoring and infrared video data of infrared video monitoring to obtain a fusion image, wherein the fusion image comprises scene detail information in the visible light image and target feature information in the infrared image, and reflects leakage conditions of methane gas, wherein the leakage conditions comprise leakage points, leakage gas contours and leakage gas drifting tracks;
step 2-2: the fused image data obtained in the step 2-1, methane concentration data of a laser methane detection sensor and temperature data of an infrared temperature measurement sensor are used together as data input, normalization processing is carried out, an RBM (radial basis function) network is used for training, a feature vector is output by the RBM network, a BP (back propagation) neural network receives the feature vector output by the RBM network, and a classifier function is realized by the BP neural network to obtain a final judgment result;
Step 3: and outputting a judging result.
2. The methane gas leakage detection method based on multi-sensor data fusion according to claim 1, wherein the step 1 of collecting methane concentration data of the area to be detected by the laser methane detection sensor specifically comprises: by using the TDLAS technology and utilizing the narrow bandwidth characteristic of the tunable semiconductor laser, the specific absorption spectrum line of the measured gas in the region to be measured is scanned, the characteristic parameters of the substance to be measured are calculated through the absorption spectrum of gas molecules, and the concentration data of the measured gas are collected.
3. The methane gas leakage detection method based on multi-sensor data fusion according to claim 1, wherein the step 2-1 comprises the steps of:
step S101: NSST transformation and decomposition are respectively carried out on video data of visible light video monitoring and infrared video data of infrared video monitoring, and a low-frequency sub-band image and a high-frequency sub-band image are respectively obtained;
step S102: performing fusion processing on the low-frequency subband image and the high-frequency subband image;
step S103: then performing NSST inverse transformation;
step S104: and obtaining a fusion image.
4. The methane gas leakage detection method based on multi-sensor data fusion according to claim 3, wherein the step S101 specifically comprises: convolving all filters required for generating each low-frequency or high-frequency image by scale decomposition of a source image, wherein the source image comprises video data monitored by visible light and infrared video data monitored by infrared video, directly filtering the source image by using the filter obtained by convolution, designing a low-pass filter and a high-pass filter, and sampling the low-pass filter and the high-pass filter according to the scale decomposition layer number of the source image to obtain a non-downsampling pyramid filter bank;
The source image comprises a source visible light image A and a source infrared image B, the source visible light image A and the source infrared image B are subjected to three-layer scale decomposition to generate low-frequency subband images A0 and B0, high-frequency subband images A1, A2 and A3 and B1, B2 and B3, and the scale filter of the first layer is thatWherein->=0 denotes a low pass filter, +.>=1 denotes a high-pass filter, and the filters of the second layer and the third layer are +.>And->Wherein->For a 2-order identity matrix, convoluting different low-pass and high-pass filter groups to form scale decomposition filters under different scales, convoluting a source image with the scale decomposition filters to obtain low-frequency and a series of high-frequency subband images, and convoluting the scale filters->、/>And->Convolution forms the final scale-decomposition filter +.>The following formula is shown: />
Is directly usedThe source image is filtered to obtain a low frequency subband image and a high frequency subband image.
5. The methane gas leakage detection method based on multi-sensor data fusion according to claim 4, wherein the low-frequency subband images A0 and B0 generated by the decomposition are fused, and the fusion processing method comprises the following steps:
let the source visible light image be A and the source infrared image be B, the low frequency sub-band images of A and B be A0 and B0 respectively, the fusion image be F, NSST decomposition is carried out on A and B to obtain NSST coefficients of A0 and B0 respectively And,/>wherein->And->Low frequency coefficients of a and B, respectively, +.>Andrespectively represent A and B in the scale +.>And direction->The lower high frequency coefficient;
the local standard deviation of the a and B low frequency subbands is then calculated as follows:
;
wherein:
MNrepresenting the number of local area pixels, the size of which is selectedOr->
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
(/>) And->(/>) The average gray level of all pixels in the local area is represented by the following calculation formula:
second, normalizeAnd->The formula is as follows:
finally, defining the matching degree of the normalized local standard deviation of the low-frequency sub-bands of the two source imagesAs shown in the following formula, the matching threshold value +.>(0.5</><1);
If it isThe fusion result is shown in the following formula:
if it isAnd the fusion result takes the coefficients of the visible light image as shown in the following formula:
6. the methane gas leakage detection method based on multi-sensor data fusion according to claim 5, wherein the high-frequency subband images A1, A2, A3, B1, B2 and B3 generated by the decomposition are fused, and the high-frequency coefficients are divided into the highest-frequency coefficients and other high-frequency coefficients;
the highest frequency coefficient fusion processing method comprises the following steps:
the source visible light image is A and the source infrared image is B, the high-frequency sub-bands generated by A decomposition are A1, A2 and A3, the high-frequency sub-bands generated by B decomposition are B1, B2 and B3, wherein A1, A2 and A3 are divided into the highest frequency coefficient and other high frequency coefficients, B1, B2 and B3 are divided into the highest frequency coefficient and other high frequency coefficients, the highest frequency coefficient in A1, A2 and A3 is fused with the highest frequency coefficient in B1, B2 and B3, and the other high frequency coefficients in A1, A2 and A3 are fused with the highest frequency coefficient in B1, B2 and B3 The high-frequency coefficients are fused to generate a fused image F, NSST coefficients obtained by NSST decomposition of A and B are respectivelyAnd,/>wherein->And->Low frequency coefficients of a and B, respectively, +.>And->Respectively represent A and B in the scale +.>And direction->The lower high frequency coefficient;
the highest frequency coefficient fusion adopts a fusion method with large regional energy, and the highest frequency coefficients of A and BAndthe area energy of (2) is shown in the following formula:
wherein:
、/>represents A and B at->Layer, in->Local area energy being the center;
MNrepresenting the number of local area pixels, the size of which is selectedOr->
Is a weight coefficient;
fusion method with large regional energy consumption is adopted, and fusion image F is at the highest layerFusion result of->The following formula is shown:
other high-frequency coefficients are fused by 2 methods, wherein the 1 st method adopts a method combining standard deviation and gradient characteristics, and the rule is as follows:
the local standard deviation of the a and B high frequency subbands is calculated as follows:
wherein:
MNrepresenting the number of local area pixels, the size of which is selectedOr->
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
(/>) And->(/>) The average gray level of all pixels in the local area is represented by the following calculation formula:
the high frequency sub-bands corresponding to A and B are calculated by the following formula And->Gradient values of (2);
wherein:
,/>respectively represent pixels->At->,/>First order difference in direction;
and->Respectively represent A and B in the pixel +.>Gradient values at;
defining an edge matching function of a pixel point for A and B respectivelyAnd->The larger the function value, the greater the likelihood that the point is at the edge of the image; the matching function is as follows:
wherein:
(/>) And->Respectively represent A and B in the scale +.>And direction->A matching function below;
and the high-frequency coefficients are fused in a weighting mode, and the following formula is adopted:
wherein:
and->The matching functions A and B are indicated respectively +.>(/>) And->Average value of (2);
the 2 nd method of the fusion of other high-frequency coefficients adopts the method of combining Laplace energy and gradient characteristics for fusion, and the method is as follows:
computing high frequency sub-bands corresponding to A and BAnd->The formula is as follows:
wherein:
,/>respectively represent pixels->At->,/>First order difference in direction;
and->Respectively represent A and B in the pixel +.>Gradient values at;
the Laplace operator ML and the Laplace energy and the SML are defined as the following formulas, wherein the SML represents the marginalization degree of the image, can reflect the gray level change of the pixel, and further reflect the space detail, and the larger the SML value at a certain point is, the richer the detail information contained at the point is, and the greater the possibility that the pixel is positioned in an edge contour area is;
Wherein:
to->A local area as center, size +.>
Representation ofMA specific value of the pixel length;
representation ofNA specific value of the pixel width;
indicating that the high frequency subband picture is located +.>A gray value of a pixel;
the following formula is used for fusion in combination with improving the Laplace energy sum and gradient characteristics:
7. the methane gas leakage detection method based on multi-sensor data fusion according to claim 1, wherein the step 2-2 comprises the steps of:
step S201: the data fused in the step 2-1, methane concentration data of the laser methane detection sensor and temperature data of the infrared temperature measurement sensor are used together as data to be input;
step S202: after loading the data, carrying out normalization processing on the sample data;
step S203: training the RBM network by using a CD-K method after data normalization, and outputting a feature vector by the RBM network;
step S204: the BP neural network receives the feature vector output by the RBM as an input vector, and achieves the function of the classifier to obtain a final judgment result.
8. The methane gas leakage detection method based on multi-sensor data fusion according to claim 7, wherein the normalization process in step S202 uses a normalization function mapmin max, and the mathematical expression is:
Wherein:
inputting a vector for a sample;
is the minimum in the sample;
is the maximum value in the sample;
is the converted output vector.
9. The method for detecting methane gas leakage based on multi-sensor data fusion according to claim 7, wherein the training of the RBM network using the CD-K method in step S203 is specifically as follows: initializing network parameters to enable,/>Is a weight matrix>Bias for visual layer->For hiding layer bias, setting parameters such as maximum iteration number maxepoch, learning rate lr, batch size Batchsize and the like, and continuously adjusting +.>And gradually reducing the reconstruction error until the maximum training times are reached, and outputting the reconstruction data from the hidden layer.
10. The method for detecting methane gas leakage based on multi-sensor data fusion according to claim 7, wherein the step S204 specifically comprises: firstly, setting training parameters of a BP neural network, wherein the training parameters comprise a learning rate lr and a maximum learning frequency maxepoch; training to obtain a BP network net by using a train function, and simulating the network net and training input by using a sim function; and finally, inversely normalizing the output data to obtain output data including whether leakage exists, gas concentration, time and position information, namely judging results.
CN202410218143.3A 2024-02-28 2024-02-28 Methane gas leakage detection method based on multi-sensor data fusion Active CN117783051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410218143.3A CN117783051B (en) 2024-02-28 2024-02-28 Methane gas leakage detection method based on multi-sensor data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410218143.3A CN117783051B (en) 2024-02-28 2024-02-28 Methane gas leakage detection method based on multi-sensor data fusion

Publications (2)

Publication Number Publication Date
CN117783051A true CN117783051A (en) 2024-03-29
CN117783051B CN117783051B (en) 2024-06-14

Family

ID=90383847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410218143.3A Active CN117783051B (en) 2024-02-28 2024-02-28 Methane gas leakage detection method based on multi-sensor data fusion

Country Status (1)

Country Link
CN (1) CN117783051B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105203465A (en) * 2015-09-16 2015-12-30 湖北久之洋红外系统股份有限公司 Hyperspectral infrared imaging gas monitoring device and monitoring method thereof
KR20180036299A (en) * 2016-09-30 2018-04-09 한국가스안전공사 A real time monitoring apparatus for inspecting remote gas leaks and appearance of pipelines using a drone
CN107992857A (en) * 2017-12-25 2018-05-04 深圳钰湖电力有限公司 A kind of high-temperature steam leakage automatic detecting recognition methods and identifying system
US20180341859A1 (en) * 2017-05-24 2018-11-29 Southwest Research Institute Detection of Hazardous Leaks from Pipelines Using Optical Imaging and Neural Network
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors
CN110766676A (en) * 2019-10-24 2020-02-07 中国科学院长春光学精密机械与物理研究所 Target detection method based on multi-source sensor fusion
CN110889455A (en) * 2019-12-02 2020-03-17 西安科技大学 Fault detection positioning and safety assessment method for chemical industry park inspection robot
CN112819158A (en) * 2021-02-05 2021-05-18 凌坤(南通)智能科技有限公司 Gas identification method based on optimized BP neural network
WO2022218139A1 (en) * 2021-04-14 2022-10-20 江苏科技大学 Personalized search method and search system combined with attention mechanism
CN115793673A (en) * 2023-01-10 2023-03-14 北京飞渡科技股份有限公司 Natural gas station robot inspection method and device based on VR technology
CN116339337A (en) * 2023-03-29 2023-06-27 上海无线电设备研究所 Target intelligent positioning control system and method based on infrared imaging, laser radar and sound directional detection
CN116626238A (en) * 2023-05-25 2023-08-22 哈尔滨理工大学 Dual-channel detection compass system for sensing air flow and air combination, air leakage detection method, data fusion and tracking method
US20230290239A1 (en) * 2021-12-30 2023-09-14 Safety Scan Usa, Inc. System and Method for Residential Methane Detection
WO2024037408A1 (en) * 2022-08-16 2024-02-22 天地(常州)自动化股份有限公司 Underground coal mine pedestrian detection method based on image fusion and feature enhancement

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105203465A (en) * 2015-09-16 2015-12-30 湖北久之洋红外系统股份有限公司 Hyperspectral infrared imaging gas monitoring device and monitoring method thereof
KR20180036299A (en) * 2016-09-30 2018-04-09 한국가스안전공사 A real time monitoring apparatus for inspecting remote gas leaks and appearance of pipelines using a drone
US20180341859A1 (en) * 2017-05-24 2018-11-29 Southwest Research Institute Detection of Hazardous Leaks from Pipelines Using Optical Imaging and Neural Network
CN107992857A (en) * 2017-12-25 2018-05-04 深圳钰湖电力有限公司 A kind of high-temperature steam leakage automatic detecting recognition methods and identifying system
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors
CN110766676A (en) * 2019-10-24 2020-02-07 中国科学院长春光学精密机械与物理研究所 Target detection method based on multi-source sensor fusion
CN110889455A (en) * 2019-12-02 2020-03-17 西安科技大学 Fault detection positioning and safety assessment method for chemical industry park inspection robot
CN112819158A (en) * 2021-02-05 2021-05-18 凌坤(南通)智能科技有限公司 Gas identification method based on optimized BP neural network
WO2022218139A1 (en) * 2021-04-14 2022-10-20 江苏科技大学 Personalized search method and search system combined with attention mechanism
US20230290239A1 (en) * 2021-12-30 2023-09-14 Safety Scan Usa, Inc. System and Method for Residential Methane Detection
WO2024037408A1 (en) * 2022-08-16 2024-02-22 天地(常州)自动化股份有限公司 Underground coal mine pedestrian detection method based on image fusion and feature enhancement
CN115793673A (en) * 2023-01-10 2023-03-14 北京飞渡科技股份有限公司 Natural gas station robot inspection method and device based on VR technology
CN116339337A (en) * 2023-03-29 2023-06-27 上海无线电设备研究所 Target intelligent positioning control system and method based on infrared imaging, laser radar and sound directional detection
CN116626238A (en) * 2023-05-25 2023-08-22 哈尔滨理工大学 Dual-channel detection compass system for sensing air flow and air combination, air leakage detection method, data fusion and tracking method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
吴永忠 等: "一种新型的红外甲烷测量定量分析模型", 《煤炭学报》, no. 02, 15 February 2009 (2009-02-15) *
方静 等: "改进快速NSST 的热烟羽红外与可见光图像融合", 《激光与红外》, vol. 47, no. 7, 31 July 2017 (2017-07-31), pages 914 - 920 *
艾莉: "《智能算法及其应用研究》》", 28 February 2022, 吉林大学出版社, pages: 113 *
韩晓云 等: "基于数据融合的可燃气体燃爆状态监测系统", 《电子设计工程》, vol. 27, no. 8, 20 April 2019 (2019-04-20), pages 6 - 10 *

Also Published As

Publication number Publication date
CN117783051B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN102096921B (en) SAR (Synthetic Aperture Radar) image change detection method based on neighborhood logarithm specific value and anisotropic diffusion
CN110108754B (en) Structured sparse decomposition-based light-excitation infrared thermal imaging defect detection method
CN106645384B (en) A kind of adaptive filter method of pipe leakage internal detector data
CN112184693A (en) Intelligent detection method for weld defects of ray industrial negative
CN106815819B (en) More strategy grain worm visible detection methods
CN113160084B (en) Denoising method and device for quantum dot fluorescence image on surface of porous silicon biosensor
CN111695473B (en) Tropical cyclone strength objective monitoring method based on long-short-term memory network model
CN115797335B (en) Euler movement amplification effect evaluation and optimization method for bridge vibration measurement
CN110297041A (en) A kind of 3D woven composite defect inspection method based on FCN and GRU
CN109813772A (en) A kind of stable imaging method of coplanar array capacitor
CN111986162A (en) Hyperspectral abnormal point rapid detection method based on rough positioning and collaborative representation
CN110033434A (en) A kind of detection method of surface flaw based on texture conspicuousness
CN109708861A (en) A kind of automobile exhaust pipe thermal vibration detection method and detection system, computer program
CN111879709B (en) Lake water body spectral reflectivity inspection method and device
CN112013285A (en) Method and device for detecting pipeline leakage point, storage medium and terminal
CN108344795B (en) Oil-gas pipeline defect identification method and device and electronic equipment
CN115359064A (en) Industrial defect detection method and device
CN112464876A (en) Fault diagnosis method and device for power equipment, computer equipment and storage medium
El-Tokhy et al. Classification of welding flaws in gamma radiography images based on multi-scale wavelet packet feature extraction using support vector machine
CN115049026A (en) Regression analysis method of space non-stationarity relation based on GSNNR
CN113607817B (en) Pipeline girth weld detection method, system, electronic equipment and medium
CN117783051B (en) Methane gas leakage detection method based on multi-sensor data fusion
CN111222543B (en) Substance identification method and apparatus, and computer-readable storage medium
CN113607068B (en) Method for establishing and extracting recognition model of photoacoustic measurement signal characteristics
CN107944474A (en) Multiple dimensioned cooperation table based on local auto-adaptive dictionary reaches hyperspectral classification method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant