Disclosure of Invention
In order to solve the problem of uninterrupted real-time monitoring and accurate identification of device leakage, especially under conditions of dim light, rain and haze, the invention provides an intelligent early warning system for monitoring device leakage and an intelligent early warning method for monitoring device leakage, with the following specific technical scheme:
an intelligent early warning system for monitoring device leakage comprises a video acquisition module, a video image enhancement module, a data annotation module, a model construction module, a model learning module, a data verification module and a leakage early warning module; the video acquisition module converts the input on-site device monitoring video into pictures or video frames; the video image enhancement module processes video images under conditions of dim light, rain and snow, and haze; the data annotation module uses an image annotation tool to mark the three liquid leakage states of permeation, drip and injection in the pictures or video frames with bounding boxes; the model construction module constructs a five-layer deep neural network model; the model learning module adopts supervised training to obtain characteristic parameters of the liquid leakage state; the data verification module verifies the identification capability of the trained liquid leakage deep neural network model and optimizes the deep neural network structure; the leakage early warning module diagnoses the leakage state by using the successfully trained liquid leakage deep neural network model and issues control commands through the video management and control platform.
Preferably, the video image enhancement module processes the video image under dim-light conditions with an adaptive enhancement algorithm based on the closing operation to obtain an enhanced video image; processes the video image under rainy and snowy weather with a fuzzy C-means algorithm to obtain an enhanced video image; and processes the video image under hazy weather with a Retinex algorithm to obtain an enhanced video image.
It is also preferred that the adaptive enhancement algorithm based on the closing operation processes video images under dim-light conditions, where the resulting enhanced video image is expressed as G(x, y) = a[F(x, y) − A(x, y)] + A(x, y) + b.
The processing method specifically comprises the following steps:
step one: performing low-pass filtering processing on the video image under the condition of dim light to obtain a low-frequency image A (x, y) containing contour information;
step two: acquiring the high-frequency information containing the video image details, specifically by subtracting the low-frequency image A(x, y) from the original image F(x, y), namely F(x, y) − A(x, y);
step three: enhancing contrast, specifically by multiplying the high-frequency image by an enhancement factor a, i.e., a[F(x, y) − A(x, y)];
step four: adjusting image brightness, specifically by adding a factor b that adjusts the image brightness.
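The four steps above can be sketched in numpy as follows. The box low-pass filter, the clipping range, and the recombination G = a(F − A) + A + b are illustrative assumptions consistent with the steps, not the patented implementation:

```python
import numpy as np

def box_lowpass(img, k=3):
    """Simple box-filter low-pass: mean over a k x k neighborhood (edge padding)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def enhance_dim(F, a=1.5, b=10.0, k=3):
    """Steps one to four: low-pass -> high-frequency residual -> gain a -> offset b."""
    A = box_lowpass(F, k)          # step one: low-frequency contour image A(x, y)
    H = F - A                      # step two: high-frequency detail F(x, y) - A(x, y)
    G = a * H + A + b              # steps three and four: contrast gain a, brightness b
    return np.clip(G, 0, 255)      # keep the result in the 8-bit intensity range
```

On a uniform dim frame the high-frequency term vanishes and only the brightness offset b acts, which matches the intent of step four.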
It is also preferable that the fuzzy C-means algorithm processes video images in rainy and snowy weather conditions, including the steps of:
step one: the video image is composed of a plurality of pixels; a pixel is denoted I(x, y), and the light intensity sequence in the image is I(t) = I(1), I(2), …, I(k), where t = 1, 2, …, k;
step two: setting two classes, a background class and a raindrop class; through brightness comparison, the average brightness of the background class is denoted Bcenter and the average brightness of the raindrop class is denoted Rcenter;
step three: initializing a membership matrix U such that the memberships in U sum to 1;
step four: calculating the average brightness of the two classes of pixel points, the background pixel points and the raindrop pixel points, initialized as Bcenter = min(I) and Rcenter = max(I);
step five: calculating, for each pixel point I(t), the absolute difference from the two class centers, setting J_B = |I(t) − Bcenter| and J_R = |I(t) − Rcenter|, and comparing them: if J_B < J_R, the pixel point I(t) is assigned to the background class; if J_B ≥ J_R, the pixel point I(t) is assigned to the raindrop class;
step six: calculation of
If the value of J is smaller than the set threshold value, the classification is finished; if the value of J is not less than the set valueCalculating a new matrix U according to the threshold value, and starting from the third step;
step seven: finishing the classification, namely taking the arithmetic mean of the brightness within the background class and within the raindrop class respectively, and calculating the final center brightness Bcenter and Rcenter of the two classes.
It is further preferable that, in processing video images under rainy or snowy weather, the fuzzy C-means algorithm sets C as the estimated background brightness and α as the fraction of the total number k of pixels that falls in the background class, with C = αBcenter + (1 − α)Rcenter.
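The clustering and the combined brightness C can be sketched as follows. This sketch uses hard nearest-center assignment and therefore omits the full fuzzy membership matrix U; the sample intensities in the usage note are invented for illustration:

```python
import numpy as np

def classify_rain(I, alpha=0.9, tol=1e-3, max_iter=50):
    """Simplified hard-assignment version of the two-class brightness clustering.

    I: 1-D array of pixel intensities I(1)..I(k).
    Returns (Bcenter, Rcenter, C) with C = alpha*Bcenter + (1 - alpha)*Rcenter.
    """
    I = np.asarray(I, dtype=float)
    Bcenter, Rcenter = I.min(), I.max()   # step four: initial class centers
    for _ in range(max_iter):
        JB = np.abs(I - Bcenter)          # step five: distance to background center
        JR = np.abs(I - Rcenter)          # distance to raindrop center
        bg = JB < JR                      # nearest-center assignment
        newB = I[bg].mean() if bg.any() else Bcenter
        newR = I[~bg].mean() if (~bg).any() else Rcenter
        converged = abs(newB - Bcenter) + abs(newR - Rcenter) < tol  # step six
        Bcenter, Rcenter = newB, newR
        if converged:
            break
    C = alpha * Bcenter + (1 - alpha) * Rcenter   # estimated background brightness
    return Bcenter, Rcenter, C
```

For intensities [10, 12, 11, 200, 210] the dark pixels form the background class (Bcenter = 11) and the bright raindrop streaks the other class (Rcenter = 205).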
It is also preferred that the Retinex algorithm processes video images in case of hazy weather, comprising the steps of:
step one: atmospheric light intensity estimation: the brightness values of all pixel points of the video image are sorted in descending order, the positions of the brightest points in the top 0.1% are selected, their brightness values are compared in the original video image, and the maximum value is taken as the atmospheric light intensity A;
step two: calculating the transfer map: the irradiation component is calculated and estimated, and the video image is converted into a grayscale image to obtain the transfer map t(x);
step three: solving the equation: the atmospheric defogging model is solved to obtain the defogged video image.
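A rough single-channel sketch of the three dehazing steps. The transmission estimate t = 1 − ω·F/A and the safeguards ω and t0 are conventional assumptions not spelled out in the text:

```python
import numpy as np

def dehaze(F, omega=0.95, t0=0.1):
    """Sketch of the three dehazing steps on a single-channel image F in [0, 255].

    omega keeps a trace of haze so the result looks natural; t0 bounds the
    transmission away from zero. Both are assumed conventions, not from the text.
    """
    F = np.asarray(F, dtype=float)
    # Step one: atmospheric light A = maximum over the brightest 0.1% of pixels.
    flat = np.sort(F.ravel())[::-1]
    n = max(1, int(0.001 * flat.size))
    A = flat[:n].max()
    # Step two: transmission map t(x) from the grayscale image, t = 1 - omega*F/A.
    t = np.maximum(1.0 - omega * F / A, t0)
    # Step three: invert the atmospheric scattering model, J = (F - A)/t + A.
    J = (F - A) / t + A
    return np.clip(J, 0, 255), A
```

A haze-free uniform frame passes through unchanged, since F = A makes the correction term vanish.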
Preferably, in the five-layer deep neural network model constructed by the model construction module, the first layer adopts a sparse autoencoder, the second and third layers adopt generic autoencoders, the fourth layer adopts a denoising autoencoder, and the fifth layer adopts an SVM (support vector machine) classifier; the input data are the pictures or video frames converted by the video acquisition module, and the output data are the device leakage position and the liquid leakage state.
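The layer stack can be illustrated structurally as below. The weights are untrained random placeholders, and the input size (flattened 64×64 frames) and code dimensions are invented for the sketch, so this shows only the data flow, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(dim_in, dim_out):
    """One autoencoder layer reduced to its encoding map. The weights here are
    random placeholders; in the system they come from layer-wise pretraining."""
    W = rng.normal(0, 0.1, (dim_in, dim_out))
    b = np.zeros(dim_out)
    return lambda x: np.tanh(x @ W + b)

# Layers 1-4: sparse AE, two generic AEs, denoising AE. Once trained, all four
# have the same encoding structure; sparsity/noise only shape how weights learn.
layers = [encoder(4096, 1024), encoder(1024, 256),
          encoder(256, 64), encoder(64, 16)]

# Layer 5: a linear SVM decision over the 16-dim code, one score per leak state.
W_svm = rng.normal(0, 0.1, (16, 3))

def predict(frame):
    """frame: flattened 64x64 grayscale video frame (4096 values in [0, 1])."""
    h = frame
    for enc in layers:
        h = enc(h)                          # pass through the stacked encoders
    scores = h @ W_svm                      # scores for permeation / drip / injection
    return int(np.argmax(scores))           # index of the diagnosed leak state

state = predict(rng.random(4096))
```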
It is also preferable that the model learning module acquires characteristic parameters of the liquid leakage state including a color layout description feature, a color structure description feature, a texture description feature, an edge histogram description feature, and a spectral reflectance feature.
It is further preferable that, when the leakage early warning module diagnoses the leakage state as permeation, the video management and control platform displays a general risk, raises an alarm in the platform, automatically pops up the leakage image with the leakage position marked, and displays the leakage position as text; when the leakage early warning module diagnoses the leakage state as drip, the video management and control platform displays a medium risk, raises an alarm in the platform, automatically pops up the leakage image with the leakage position marked, displays the leakage position as text, and pushes a message to the responsible person in real time by short message or WeChat; when the leakage early warning module diagnoses the leakage state as injection, the video management and control platform displays a major risk, raises an alarm in the platform, automatically pops up the leakage image with the leakage position marked, displays the leakage position as text, and immediately pushes a message to the responsible person and the supervisor by short message or WeChat.
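The three-level response policy might be modeled as follows; the state names, action strings, and function are hypothetical, paraphrasing the text rather than reproducing any platform API:

```python
# Hypothetical mapping of the three diagnosed leak states to the platform's
# response. All identifiers here are illustrative, not from the patent.
RESPONSES = {
    "permeation": {"risk": "general", "notify": []},
    "drip":       {"risk": "medium",  "notify": ["responsible_person"]},
    "injection":  {"risk": "major",   "notify": ["responsible_person", "supervisor"]},
}

def respond(state):
    """Every state triggers an in-platform alarm with the marked leak image;
    drip and injection additionally push short-message/WeChat notifications."""
    plan = RESPONSES[state]
    actions = ["platform_alarm", "popup_marked_image", "show_position_text"]
    actions += [f"push_sms_wechat:{who}" for who in plan["notify"]]
    return plan["risk"], actions
```

The escalation is monotone: each higher risk level keeps all lower-level actions and adds recipients.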
In addition, preferably, the early warning method of the intelligent early warning system for monitoring leakage of the device specifically comprises the following steps:
step one: video acquisition: the video acquisition module converts the video shot by the on-site monitoring of the device into pictures or video frames;
step two: sharpening the video image: the video image enhancement module processes the video image under dim-light conditions with the adaptive enhancement algorithm based on the closing operation to obtain a clear video image; processes the video image under rainy and snowy weather with the fuzzy C-means algorithm to obtain a clear video image; and processes the video image under hazy weather with the Retinex algorithm to obtain a clear video image;
step three: labeling the video image: an image annotation tool is used to mark the obtained clear video images with bounding boxes, and the three liquid leakage states of permeation, drip and injection are labeled in the bounding boxes;
step four: dividing the video image into a training video image and a verification video image;
step five: constructing the deep learning neural network: a five-layer deep neural network model is constructed, wherein the first layer adopts a sparse autoencoder, the second and third layers adopt generic autoencoders, the fourth layer adopts a denoising autoencoder, and the fifth layer adopts an SVM (support vector machine) classifier;
step six: determining input and output data, selecting a training video image in the fourth step, inputting a picture or a video frame converted by the video acquisition module by an input layer, and outputting a result by an output layer as a device leakage position and a liquid leakage state;
step seven: training a deep learning neural network, and acquiring characteristic parameters of a liquid leakage state by using a model learning module, wherein the characteristic parameters comprise color layout description characteristics, color structure description characteristics, texture description characteristics, edge histogram description characteristics and spectral reflectivity characteristics;
step eight: adjusting the deep neural network, and adjusting a fifth layer of the deep neural network according to the characteristic parameters obtained in the step seven;
step nine: model verification, namely inputting the video image for verification in the fourth step into the deep neural network model obtained in the eighth step, and outputting a result;
step ten: model optimization, namely optimizing a deep neural network model according to the verification result of the video image in the step nine;
step eleven: leakage early warning: the monitoring video is fed into the video acquisition module, the system identification result is input into the video management and control platform, and the video management and control platform issues early warnings for the three liquid leakage states of permeation, drip and injection respectively.
The invention has the beneficial effects that: (1) the combined system of the video acquisition module, video image enhancement module, data annotation module, model construction module, model learning module, data verification module and leakage early warning module automatically and intelligently identifies device leakage, which solves the problem that leakage cannot be discovered at the first moment when relying on video monitoring alone, and avoids the misjudgment caused by requiring personnel to concentrate for long periods; (2) the video image enhancement module processes video images under night, rainy and hazy conditions with the closing-operation-based adaptive enhancement algorithm, the fuzzy C-means algorithm and the Retinex algorithm respectively, so that the monitoring system can automatically identify video monitoring conditions under the influence of different severe weather; (3) the intelligent early warning method provides a highly intelligent and automatic monitoring function, realizing safe, automatic monitoring of the leakage condition of the device; (4) the leakage early warning module can prompt or alarm on the on-site condition in time to avoid the escalation of accidents, and the video of the on-site accident is extracted as an important basis for analyzing the cause of the accident.
Detailed Description
With reference to fig. 1 and fig. 2, the intelligent early warning system for leakage of the monitoring device and the intelligent early warning method for leakage of the monitoring device of the present invention are further specifically described.
Example 1
Referring to fig. 1, the embodiment is a detailed description of an intelligent early warning system structure and an early warning method for monitoring leakage of a device.
The intelligent early warning system for monitoring device leakage specifically comprises a video acquisition module, a video image enhancement module, a data annotation module, a model construction module, a model learning module, a data verification module and a leakage early warning module. Through the combination of these modules, device leakage is identified automatically and intelligently, which solves the problem that leakage cannot be discovered at the first moment when relying on video monitoring alone and avoids the judgment errors caused by requiring monitoring operators to concentrate for long periods.
The video acquisition module realizes real-time transmission of monitoring data between itself and the device video monitoring; its main function is to convert the input on-site device monitoring video into pictures or video frames. The video image enhancement module processes video images under conditions of dim light, rainy and snowy weather, and hazy weather; specifically, it processes the video image under dim-light conditions with the adaptive enhancement algorithm based on the closing operation to obtain an enhanced video image, processes the video image under rainy and snowy weather with the fuzzy C-means algorithm to obtain an enhanced video image, and processes the video image under hazy weather with the Retinex algorithm to obtain an enhanced video image. The data annotation module uses an image annotation tool to mark the three liquid leakage states of permeation, drip and injection in the leakage pictures or video frames with bounding boxes. Sharpening the blurred video images allows the monitoring conditions under different severe weather to be identified automatically, and this automated sharpening is of great significance for analyzing the cause of leakage.
The model construction module constructs a five-layer deep neural network model, in which the first layer adopts a sparse autoencoder, the second and third layers adopt generic autoencoders, the fourth layer adopts a denoising autoencoder, and the fifth layer adopts an SVM (support vector machine) classifier; the input data are the pictures or video frames converted by the video acquisition module, and the output data are the device leakage position and the liquid leakage state. The model learning module acquires the characteristic parameters of the liquid leakage state through supervised training, and these characteristic parameters comprise color layout description features, color structure description features, texture description features, edge histogram description features and spectral reflectance features. The data verification module verifies the identification capability of the trained liquid leakage deep neural network model and optimizes the deep neural network structure.
The leakage early warning module diagnoses the leakage state by using the successfully trained liquid leakage deep neural network model, issues control commands through the video management and control platform, adopts different treatment measures according to different leakage conditions, and notifies the relevant personnel according to the leakage position. When the leakage early warning module diagnoses the leakage state as permeation, the video management and control platform displays a general risk, raises an alarm in the platform, automatically pops up the leakage image with the leakage position marked, and displays the leakage position as text; when the leakage early warning module diagnoses the leakage state as drip, the video management and control platform displays a medium risk, raises an alarm in the platform, automatically pops up the leakage image with the leakage position marked, displays the leakage position as text, and pushes a message to the responsible person in real time by short message or WeChat; when the leakage early warning module diagnoses the leakage state as injection, the video management and control platform displays a major risk, raises an alarm in the platform, automatically pops up the leakage image with the leakage position marked, displays the leakage position as text, and immediately pushes a message to the responsible person and the supervisor by short message or WeChat.
Example 2
The embodiment is further described with reference to fig. 2, which is a method for intelligent early warning of leakage of a monitoring device.
The early warning method of the intelligent early warning system for monitoring the leakage of the device comprises the following steps:
step one: video acquisition: the video acquisition module converts the video shot by the on-site monitoring of the device into pictures or video frames, and the video images are transmitted to the video image enhancement module through a data transmission line;
step two: sharpening the video image: the video image enhancement module processes the video image under dim-light conditions with the adaptive enhancement algorithm based on the closing operation to obtain a clear video image; processes the video image under rainy and snowy weather with the fuzzy C-means algorithm to obtain a clear video image; and processes the video image under hazy weather with the Retinex algorithm to obtain a clear video image;
step three: labeling the video image: an image annotation tool is used to mark the obtained clear video images with bounding boxes, and the three liquid leakage states of permeation, drip and injection are labeled in the bounding boxes;
step four: dividing the video image into a training video image and a verification video image;
step five: constructing the deep learning neural network: a five-layer deep neural network model is constructed, wherein the first layer adopts a sparse autoencoder, the second and third layers adopt generic autoencoders, the fourth layer adopts a denoising autoencoder, and the fifth layer adopts an SVM (support vector machine) classifier;
step six: determining input and output data, selecting a training video image in the fourth step, inputting a picture or a video frame converted by the video acquisition module by an input layer, and outputting a result by an output layer as a device leakage position and a liquid leakage state;
step seven: training a deep learning neural network, and acquiring characteristic parameters of a liquid leakage state by using a model learning module, wherein the characteristic parameters comprise color layout description characteristics, color structure description characteristics, texture description characteristics, edge histogram description characteristics and spectral reflectivity characteristics;
step eight: adjusting the deep neural network, selecting proper characteristic parameters according to the characteristic parameters obtained in the step seven, and adjusting the fifth layer of the deep neural network according to the proper characteristic parameters;
step nine: model verification, namely inputting the video image for verification in the fourth step into the deep neural network model obtained in the eighth step, and outputting a result;
step ten: model optimization, namely optimizing a deep neural network model according to the verification result of the video image in the step nine;
step eleven: leakage early warning: the monitoring video is fed into the video acquisition module, the system identification result is input into the video management and control platform, and the video management and control platform issues early warnings for the three liquid leakage states of permeation, drip and injection respectively.
The intelligent early warning method has the highly intelligent and automatic monitoring function, realizes the safe and automatic monitoring of the leakage condition of the device, adjusts and optimizes according to the deep neural network model, has learning capability, and improves the early warning accuracy by continuously identifying the leakage condition of the device.
Example 3
The embodiment is a further detailed description of a video image processing method in the intelligent early warning method for leakage of the monitoring device in embodiment 2.
In sharpening the video image, for the video image under dim-light conditions processed by the adaptive enhancement algorithm based on the closing operation, the enhanced video image obtained by the processing is expressed as G(x, y) = a[F(x, y) − A(x, y)] + A(x, y) + b.
The processing method specifically comprises the following steps:
step one: performing low-pass filtering processing on the video image under the condition of dim light to obtain a low-frequency image A (x, y) containing contour information;
step two: acquiring the high-frequency information containing the video image details, specifically by subtracting the low-frequency image A(x, y) from the original image F(x, y), namely F(x, y) − A(x, y);
step three: enhancing contrast, specifically by multiplying the high-frequency image by an enhancement factor a, i.e., a[F(x, y) − A(x, y)];
step four: adjusting image brightness, specifically by adding a factor b that adjusts the image brightness.
In sharpening the video image, the fuzzy C-means algorithm processes the video image under rainy and snowy weather, where C is set as the estimated background brightness, α as the fraction of the total number k of pixels falling in the background class, and C = αBcenter + (1 − α)Rcenter; the processing comprises the following steps:
step one: the video image is composed of a plurality of pixels; a pixel is denoted I(x, y), and the light intensity sequence in the image is I(t) = I(1), I(2), …, I(k), where t = 1, 2, …, k;
step two: setting two classes, a background class and a raindrop class; through brightness comparison, the average brightness of the background class is denoted Bcenter and the average brightness of the raindrop class is denoted Rcenter;
step three: initializing a membership matrix U such that the memberships in U sum to 1;
step four: calculating the average brightness of the two classes of pixel points, the background pixel points and the raindrop pixel points, initialized as Bcenter = min(I) and Rcenter = max(I);
step five: calculating, for each pixel point I(t), the absolute difference from the two class centers, setting J_B = |I(t) − Bcenter| and J_R = |I(t) − Rcenter|, and comparing them: if J_B < J_R, the pixel point I(t) is assigned to the background class; if J_B ≥ J_R, the pixel point I(t) is assigned to the raindrop class;
step six: calculation of
If the value of J is smaller than the set threshold value, the classification is finished; if the value of J is not smaller than the set threshold value, calculating a new matrix U, and starting from the step three;
step seven: finishing the classification, namely taking the arithmetic mean of the brightness within the background class and within the raindrop class respectively, and calculating the final center brightness Bcenter and Rcenter of the two classes.
In sharpening the video image, the Retinex algorithm processes the video image under hazy weather, specifically comprising the following steps:
step one: atmospheric light intensity estimation: the brightness values of all pixel points of the video image are sorted in descending order, the positions of the brightest points in the top 0.1% are selected, their brightness values are compared in the original video image, and the maximum value is taken as the atmospheric light intensity A;
step two: calculating the transfer map: the irradiation component is calculated and estimated, and the video image is converted into a grayscale image to obtain the transfer map t(x);
step three: solving the equation: the atmospheric defogging model is solved to obtain the defogged video image.
Wherein the expression of the atmospheric defogging model processed by the Retinex algorithm is F(x, y) = I(x, y)R(x, y), where F(x, y) represents the original video image; I(x, y) represents the illumination component, i.e., the transfer map t(x) of the atmospheric transfer model; and R(x, y) represents the reflection component, which contains the detail information in the video image.
The expression for solving the transfer map is t(x) = 1 − ρ(x), where t(x) is the transfer map and 1 − ρ(x) denotes the inverse albedo C(x); the albedo is estimated from the illumination component L(x, y) = I_O(x, y) ⊗ G, where L(x, y) is the illumination component to be estimated, I_O(x, y) is the grayscale image converted from the original video image I(x, y), G is a Gaussian function, and ⊗ denotes convolution.
The Gaussian function G is defined as G(x, y) = λ · exp(−(x² + y²)/c²), where c is the scale constant and λ is a normalization factor chosen so that the integral of G(x, y) over the image plane equals 1.
The defogged image J(x) has the expression J(x) = (F(x) − A)/t(x) + A, where F(x) is the hazy input image, A is the atmospheric light intensity, and t(x) is the transfer map.
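Assuming the standard inversion of the haze imaging model F = J·t + A(1 − t), a small numeric check (with made-up values) confirms that J = (F − A)/t + A recovers the scene radiance:

```python
# Illustrative check that the defogging formula inverts the haze model
# F = J*t + A*(1 - t). The numeric values are invented for the example.
A, t = 220.0, 0.4                   # atmospheric light and transmission
J_true = 80.0                       # haze-free scene radiance
F = J_true * t + A * (1 - t)        # hazy observation synthesized from the model
J = (F - A) / t + A                 # recovered defogged value
```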
it should be understood that the above description is not intended to limit the invention to the particular embodiments disclosed, and that modifications, adaptations, additions and alternatives falling within the spirit and scope of the invention are intended to be covered.