CN114429597A - Initial fire intelligent recognition system in petrochemical industry based on neural network - Google Patents
- Publication number
- CN114429597A (application number CN202111285250.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- fire
- area
- module
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Abstract
An intelligent early-stage fire identification system for the petrochemical industry, based on a neural network, comprises a system management module, an image preprocessing module, a fire identification module, an early-warning management module and a visualization module, all interconnected. The system management module configures the video-monitoring information; the image preprocessing module acquires a monitored real-time image through a camera, preprocesses it and sends it to the fire identification module; the fire identification model in that module identifies the received image and passes the result to the early-warning management module; the early-warning management module analyzes the result through an artificial model built from operators' working experience, further reducing false alarms; the visualization module handles alarming and display. The invention achieves a high flame recognition rate and accuracy, meets multi-channel high-concurrency requirements with a short overall system response time, and minimizes economic loss and the impact on plant equipment.
Description
Technical Field
The invention relates to an intelligent fire identification system, and in particular to a neural-network-based system for identifying fires at their initial stage in the petrochemical industry.
Background
Existing fire detection methods fall into two main categories: methods based on traditional image processing and methods based on neural networks. Traditional image-processing methods tend to train a fire classifier from the static visual characteristics of fire, such as color, texture and shape features. Neural-network methods process the images in video frames directly with a convolutional neural network model, saving the large amount of training-data preprocessing required by traditional learning methods; because the convolutional model is efficient, "intelligent" and "active", neural-network-based target detection holds a clear advantage over the traditional approach.
Fire detection methods based on traditional image processing mainly construct a classifier from visual characteristics of the fire image, such as color, texture and shape information. Color is an important cue for fire detection; mature color models include the RGB and YCbCr models, and researchers have studied fire color characteristics extensively. Celik proposed an adaptive video fire detection algorithm that combines video foreground information with color information to detect fire-like areas. It first performs adaptive background modeling, then extracts the RGB color information of the image and builds a statistical color model, removes noise with the morphological erosion and dilation operations, and finally detects fire with a connected-component labeling algorithm. The method runs in real time, but because it extracts only color information its accuracy is limited. Celik et al. subsequently proposed modeling the fire color distribution over the YCbCr color space with polynomials. These color models were obtained by statistical analysis of fire video sequences and fire-related pictures of different classes, starting from the YCbCr color space, and they improved flame-detection accuracy; however, because the dynamic characteristics of flame are not used, the false-alarm rate on static fire-like targets remains high. Lin proposed an intelligent fire detection algorithm based on image processing that comprehensively considers both the motion and static characteristics of fire.
It first acquires the video motion area; second, it obtains candidate areas through a full color-space model; third, it extracts shape features such as the area and perimeter of the candidate region to eliminate interference from fire-like objects; finally, it performs fire detection using shape features such as irregular polygons and circles. The method contributes notably to improving performance and reducing the false-detection rate, but it performs poorly at long distances or when flames occupy a small proportion of the image. Kosmas proposed a real-time fire detection algorithm that models fire behavior with various spatio-temporal features and models the temporal evolution of pixel intensity in candidate image blocks with a dynamic-texture analysis model. The spatio-temporal features used are color probability information, flicker characteristics, wavelet analysis, spatio-temporal energy analysis and dynamic-texture analysis; after the spatio-temporal features of fire flames are extracted, an SVM (support vector machine) classifier performs the detection, improving the accuracy and robustness of the algorithm, at the cost of high time complexity and large delay. Another method first applies vector-correlation theory and then classifies the extracted features with a neural network, achieving good classification results under complex illumination changes; however, its recognition rate on small targets is low and its performance under high concurrency is poor.
Wang D C, Cui X and Park E proposed an adaptive fire detection algorithm using randomized testing and robust features. The algorithm first constructs a YCbCr color-space model; second, it updates the motion background image with an approximate-median method and combines the two to obtain candidate video frames; then it extracts fire features, including flicker frequency, area and centroid, to form a set of feature vectors; finally, it builds a fire classification model with the random-forest machine-learning algorithm to complete fire detection in the video. The method is still subject to interference from external conditions such as camera resolution, weather conditions and environmental factors. Jin improved earlier video fire detection methods based on motion information and color models, proposing a real-time fire detection algorithm based on logistic regression and randomized testing.
Fire detection methods based on neural networks simulate the layered signal-processing mechanism of the human brain's nervous system through a deep model structure. Frizzi proposed a fire and smoke detection model based on a convolutional neural network. Unlike other methods, it has no separate feature-extraction stage: the trained model operates directly on the raw RGB video frames. During training, to reduce the processing time required to detect fire smoke, it uses an optimized structure in the CNN (convolutional neural network) model and applies sliding windows to the original or a rescaled image; these windows are classified through the convolutional layers and the fully connected layer. The drawback of this algorithm is poor performance when the image resolution is high and the flame occupies a small area. Maksymiv proposed a fire detection method that first combines Adaboost with local binary patterns to obtain regions of interest, reducing time complexity, and then applies a convolutional neural network to alleviate the false-alarm problem; it reaches 95.2% accuracy on emergency-detection problems. The Adaboost training process, however, is extremely time-consuming, which is precisely one of the key issues the algorithm needs to improve.
Wang Z and Zhang H combined a convolutional neural network with a support vector machine for fire detection; the innovation is replacing the final fully connected layer and Softmax classifier of the convolutional neural network with a support vector machine. That is, the method extracts deep features of the fire image with the convolutional neural network and then builds a classification model over the extracted features with the SVM (support vector machine) algorithm. Experiments show the method outperforms fire detection using a CNN (convolutional neural network) or SVM (support vector machine) alone, but it cannot meet the requirement of multi-channel, high-concurrency real-time processing.
Disclosure of Invention
The invention aims to solve the technical problem of providing a neural-network-based intelligent early-stage fire identification system for the petrochemical industry that offers high accuracy and fast overall system response.
The technical scheme adopted by the invention is as follows: a neural-network-based intelligent early-stage fire identification system for the petrochemical industry comprises a system management module, an image preprocessing module, a fire identification module, an early-warning management module and a visualization module connected in sequence. The system management module configures the video-monitoring information; the image preprocessing module acquires a monitored real-time image through a camera, preprocesses it and sends it to the fire identification module; the fire identification module contains a fire identification model, identifies the received image with that model and passes the result to the early-warning management module; the early-warning management module analyzes the result through an artificial model built from operators' working experience, further reducing false alarms; the visualization module handles alarming and display.
The neural-network-based intelligent early-stage fire identification system for the petrochemical industry performs image preprocessing through foreground detection, completes flame detection through neural-network modeling, and in parallel builds an artificial model from the practical experience of enterprise staff, classifying flames by severity level to realize graded early warning. The system adapts to the particularities of petrochemical scenes, with an identification accuracy above 95%. It achieves a high flame recognition rate and accuracy while meeting multi-channel, high-concurrency demands with a short overall response time, minimizing economic loss and the impact on plant equipment.
Drawings
FIG. 1 is a flow chart of the operation of the neural-network-based intelligent early-stage fire identification system for the petrochemical industry;
FIG. 2 is a schematic diagram of the convolution operation with a 3×3 input array, a 2×2 convolution kernel, 1 channel and a stride of 1;
FIG. 3 is a schematic of the pooling operation with a 4×4 input array, a 2×2 filter and a stride of 2.
Detailed Description
The intelligent fire identification system based on the neural network in the petrochemical industry is described in detail below with reference to embodiments and drawings.
As shown in fig. 1, the neural-network-based intelligent early-stage fire identification system for the petrochemical industry comprises a system management module, an image preprocessing module, a fire identification module, an early-warning management module and a visualization module connected in sequence. The system management module configures the video-monitoring information; the image preprocessing module acquires a monitored real-time image through a camera, preprocesses it and sends it to the fire identification module; the fire identification module contains a fire identification model, identifies the received image with that model and passes the result to the early-warning management module; the early-warning management module analyzes the result through an artificial model built from operators' working experience, further reducing false alarms; the visualization module handles alarming and display.
The configuration related information in the system management module of the present invention includes:
1) configuring the manufacturer, model, serial number, state and access path of each hard-disk video recorder;
2) configuring each camera's manufacturer, attached hard-disk video recorder, serial number, partition position, delay time and interval-frame-count information;
3) configuring the parameter information on which the image preprocessing operation depends, including: camera RTSP address, camera resolution (e.g. 1920×1080), flame recognition model address, buffer address, message queue address, dynamic threshold and streaming frame interval;
4) maintaining user information and user roles, and setting permissions for the operations each user role may access.
The image preprocessing module acquires the configured information and the real-time image monitored by the camera from the system management module; it preprocesses the real-time image through grayscale conversion, Gaussian blur, frame difference, erosion-dilation and foreground extraction to obtain video frame data, and sends the video frame data to the fire identification module. Specifically:
Grayscale conversion: processes each pixel independently, making the image clearer by changing the gray-level range occupied by the original image data;
Gaussian blur: reduces image noise and the level of detail;
Frame difference: obtains the contour of a moving target by a differencing operation over two adjacent frames in the video image sequence, so it copes well with multiple moving targets and a moving camera;
Erosion-dilation: dilation and erosion are two basic morphological operations, used to find the maximal and minimal regions in the image. Erosion shrinks and thins the highlighted (white) parts of the image, so the eroded image has a smaller highlighted region than the original; dilation expands the highlighted (white) parts, so the dilated image has a larger highlighted region than the original.
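The preprocessing chain above can be sketched in a few lines of NumPy. In a real deployment these steps would more likely map onto OpenCV calls (`cvtColor`, `GaussianBlur`, `absdiff`, `erode`/`dilate`); treat this as an illustrative stand-in in which a box mean filter replaces the Gaussian blur and only dilation of the foreground mask is shown:

```python
import numpy as np

def to_gray(frame_rgb):
    # Grayscale conversion: weighted sum of the R, G, B channels.
    w = np.array([0.299, 0.587, 0.114])
    return (frame_rgb @ w).astype(np.float32)

def box_blur(img, k=3):
    # Stand-in for Gaussian blur: k x k mean filter, edge-padded.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dilate(mask, k=3):
    # Binary dilation: set a pixel if any pixel in its k x k neighbourhood is set.
    pad = k // 2
    p = np.pad(mask, pad, mode="constant")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def preprocess(prev_rgb, cur_rgb, thresh=50):
    # Pipeline from the text: gray -> blur -> frame difference -> threshold -> dilation.
    g0, g1 = box_blur(to_gray(prev_rgb)), box_blur(to_gray(cur_rgb))
    mask = np.abs(g1 - g0) > thresh   # binary foreground mask
    return dilate(mask)               # grow foreground to merge nearby regions
```

The dilated foreground mask is what the subsequent connectivity analysis would operate on.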
The fire identification module is a fire identification model based on a CNN and an inter-frame difference algorithm, and operates as follows:
1) Exploiting the flickering character of flame, find the motion areas in the video picture with the inter-frame difference method: convert the RGB three-channel color image into a single-channel grayscale image; subtract the (n−1)-th frame grayscale image from the n-th frame grayscale image pixel by pixel to obtain a difference image; set the threshold to 50, and set values less than or equal to 50 in the difference image to 0 and values greater than 50 to 255, yielding a binary image; then perform connectivity analysis to join clusters of small dynamic areas into large dynamic areas.
If a large dynamic area is smaller than the area threshold, it is a region of little or no change in the video picture and is background; if it is greater than or equal to the threshold, the pixel change there is large and an object is moving in it, i.e. it is a motion region. When a fire occurs in the video picture, it must lie in a motion region larger than the threshold, so suspected flame areas are found by finding motion regions.
2) Input the suspected flame area into the fire identification model to judge whether it is a fire.
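The connectivity analysis of step 1) — joining small dynamic areas and keeping only those large enough to be candidate motion regions — can be sketched as a flood-fill component labelling; `min_area` is an illustrative threshold, not a value from the patent:

```python
import numpy as np
from collections import deque

def connected_regions(mask, min_area=4):
    """4-connected component labelling on a binary mask; returns bounding
    boxes (y0, x0, y1, x1) of components whose area meets min_area."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                q, pix = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:                       # breadth-first flood fill
                    y, x = q.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(pix) >= min_area:       # small areas are treated as background
                    ys = [p[0] for p in pix]
                    xs = [p[1] for p in pix]
                    boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```

Each returned box would be cropped from the frame and fed to the CNN for the fire/no-fire decision; OpenCV's `connectedComponentsWithStats` provides the same service in production.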
The inter-frame difference in the fire identification module is computed as:
Dn(x, y) = |fn(x, y) − fn−1(x, y)|
where fn(x, y) and fn−1(x, y) are two adjacent frame images and Dn is the difference image.
The binary image in the fire identification module is computed as:
R′n(x, y) = 255 if Dn(x, y) > T, and R′n(x, y) = 0 otherwise,
where T is the threshold (50 here), R′n(x, y) is the binary image and Dn is the difference image.
The fire identification model in the fire identification module is a convolutional neural network comprising:
Input layer: 64×64×3 images;
First layer: convolution, kernel 3×3, 32 channels, stride 1, ReLU activation, same convolution (same convolution leaves the image scale unchanged);
Second layer: pooling, filter 2×2, max pooling with stride 2;
Third layer: convolution, kernel 3×3, 32 channels, stride 1, ReLU activation, same convolution;
Fourth layer: pooling, filter 2×2, max pooling with stride 2;
Fifth layer: convolution, kernel 3×3, 32 channels, stride 1, ReLU activation, same convolution;
Sixth layer: fully connected, 512 neural units, dropout (random discard) rate 0.6;
Seventh layer: network output layer, whose output nodes are the classes fire and no fire.
Except for the seventh layer, every layer uses the ReLU activation function, with expression:
z(x)=max(0,x);
The seventh layer maps the output node values to a probability space with the Softmax activation function, giving the probabilities of fire and no fire.
The expression of the Softmax function is:
softmax(z)j = e^(zj) / Σ(k=1..K) e^(zk)
where zj is one output node value, the numerator is e raised to the power zj, the denominator is the sum of e raised to the power of every output node value, and K is the number of output nodes, i.e. the number of classes; the result is the probability of fire or no fire.
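A minimal NumPy rendering of the two activation functions just described; the max-shift inside the softmax is a standard numerical-stability trick, not part of the patent's description:

```python
import numpy as np

def relu(x):
    # z(x) = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

def softmax(z):
    # softmax(z)_j = exp(z_j) / sum_k exp(z_k); subtracting max(z) avoids overflow.
    e = np.exp(z - np.max(z))
    return e / e.sum()
```

For the two-node output layer, `softmax` turns the raw fire/no-fire scores into a probability pair summing to 1.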
The input to the convolution calculation of the convolutional neural network is a two-dimensional array with height and width both 3; the shape of the array is written 3×3 or (3, 3). The kernel array has height and width 2, written 2×2 or (2, 2); in convolutional neural network calculations it is also called the convolution kernel or filter. The convolution kernel window (also called the convolution window) has the height and width of the kernel, i.e. 2×2. The first output element is computed from the input elements covered by the window and the kernel elements: 0×0 + 1×1 + 3×2 + 4×3 = 19. The stride is the interval by which the window slides at each step. Fig. 2 shows that a convolution operation with a 3×3 input, a 2×2 convolution kernel, 1 channel and a stride of 1 produces a 2×2 output array.
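The convolution example of fig. 2 can be reproduced with a small valid cross-correlation routine; this sketches the textbook operation, not the patent's implementation:

```python
import numpy as np

def conv2d(x, k, stride=1):
    """Valid 2-D cross-correlation (the 'convolution' used in CNNs)."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the window with the kernel, then sum.
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out
```

With the 3×3 input 0..8 and the 2×2 kernel 0..3, the first output element is 0×0 + 1×1 + 3×2 + 4×3 = 19, matching the text.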
As shown in fig. 3, the input to the pooling calculation of the convolutional neural network is a two-dimensional array with height and width 4; the shape of the array is written 4×4 or (4, 4); the filter window is 2×2; max pooling (maxpooling) takes the maximum within the window; the stride is the interval of each slide.
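The pooling example of fig. 3 can be sketched the same way:

```python
import numpy as np

def max_pool(x, size=2, stride=2):
    # Max pooling: take the maximum inside each size x size window.
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out
```

A 4×4 input with a 2×2 filter and stride 2 yields a 2×2 output, as in the figure.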
The weights of each layer of the convolutional neural network are initialized from a Gaussian distribution with μ = 0 and std = 0.1.
the artificial model established by combining the working experience of the personnel comprises the following steps:
1) area change model
Because flame diffuses and spreads, the flame area changes; in most cases the fire spreads and grows larger. The area-change characteristic is represented by the growth rate of the flame-region area over consecutive frames, calculated as:
γ = (S(R)t − S(R)t0) / (t − t0)
where γ is the growth rate; S(R)t is the area of the flame region of interest at time t; S(R)t0 is the area at time t0; and t − t0 is the time interval.
according to the characteristics of the diffusion area change of the flame, in addition to the characteristics of the area increase rate, the area overlapping rate can also be expressed by the following calculation formula
Rs=SA∩Bmax{SA,SB}Rs=SA∩BmaxSA,SB
In the formula: rs represents the overlap ratio; SA and SB are the areas of the flame regions in successive previous and subsequent frames, respectively;
2) overall movement model
Flame moves and changes along the burning object or with a change of wind direction, but its overall movement differs greatly from that of other rigid objects: although the position of the flame changes, it cannot change suddenly, which shows up as the centroid of the flame area not jumping. The overall-movement characteristic is judged by calculating the centroid of the flame area:
xi = Σ(x,y)∈S x / NS
yi = Σ(x,y)∈S y / NS
where S is the detected flame region of interest, NS is the number of pixels in the flame region of interest, and (xi, yi) are the centroid coordinates.
3) flicker feature model
The stroboscopic (flicker) feature is the most common dynamic characteristic of flame, analyzed through wavelet decomposition and methods such as motion-history detection and correlation between flame images. The most common method analyzes the dynamic flicker characteristics of flame with spatial wavelet decomposition: a wavelet decomposition of the flame image yields 4 sub-bands, namely 1 low-frequency sub-band (the compressed image xLL) and 3 high-frequency sub-bands (the horizontal-coefficient image xHL, the vertical-coefficient image xLH and the diagonal-coefficient image xHH); flame is then distinguished from non-flame by calculating the spatial wavelet energy:
e = (1 / (m×n)) Σ (|xLH|² + |xHL|² + |xHH|²)
where m×n is the pixel count of the flame region of interest and e is the spatial wavelet energy. Flames are distinguished from non-flames according to the coefficient curve of the wavelet energy, since a strong energy gap exists between flames and other objects;
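A one-level 2-D Haar decomposition is enough to illustrate the energy calculation. This sketch uses the simple average/difference Haar filters and assumes even image dimensions; the sub-band naming convention may differ from other libraries such as PyWavelets:

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands.
    img must have even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_energy(img):
    # e = (1 / (m*n)) * sum(|xLH|^2 + |xHL|^2 + |xHH|^2)
    _, lh, hl, hh = haar_subbands(img)
    m, n = img.shape
    return float(np.sum(lh**2 + hl**2 + hh**2)) / (m * n)
```

A flat region produces zero high-frequency energy, while a flickering, textured flame region produces a markedly higher value — the energy gap the model exploits.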
4) intersection-over-union model
Intersection over Union (IoU), a concept used in target detection, is the overlap ratio of the generated candidate frame and the original labeled (ground-truth) frame, i.e. the ratio of their intersection to their union. The optimal situation is complete overlap, i.e. a ratio of 1. The intersection ratio is calculated as:
IoU = area(C ∩ G) / area(C ∪ G)
where IoU is the intersection-over-union ratio, area(C) is the area of the generated candidate frame and area(G) is the area of the original labeled frame.
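IoU for axis-aligned boxes is a short function; the boxes here are hypothetical (x0, y0, x1, y1) tuples:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes (x0, y0, x1, y1)."""
    # Corners of the intersection rectangle (empty if the boxes are disjoint).
    x0 = max(box_a[0], box_b[0])
    y0 = max(box_a[1], box_b[1])
    x1 = min(box_a[2], box_b[2])
    y1 = min(box_a[3], box_b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)   # |C ∩ G| / |C ∪ G|
```

Identical boxes score 1.0; disjoint boxes score 0.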
Claims (10)
1. A neural-network-based intelligent early-stage fire identification system for the petrochemical industry, characterized by comprising a system management module, an image preprocessing module, a fire identification module, an early-warning management module and a visualization module connected in sequence, wherein the system management module is used for configuring the video-monitoring information; the image preprocessing module is used for acquiring a monitored real-time image through a camera, preprocessing the real-time image and sending it to the fire identification module; the fire identification module comprises a fire identification model, identifies the received image through the fire identification model and transmits the identification result to the early-warning management module; the early-warning management module analyzes the result of the fire identification module through an artificial model built from operators' working experience, further reducing false alarms; and the visualization module is used for alarming and display.
2. The intelligent neural network-based early fire identification system in the petrochemical industry according to claim 1, wherein the configuration-related information in the system management module comprises:
1) configuring the manufacturer, model, serial number, state and access path of each hard-disk video recorder;
2) configuring each camera's manufacturer, attached hard-disk video recorder, serial number, partition position, delay time and interval-frame-count information;
3) configuring the parameter information on which the image preprocessing operation depends;
4) maintaining user information and user roles, and setting permissions for the operations each user role may access.
3. The intelligent neural network-based early fire identification system in the petrochemical industry according to claim 2, wherein the parameter information depended on during the image preprocessing operation comprises: camera RTSP address, camera resolution, flame recognition model address, buffer address, message queue address, dynamic threshold, and streaming frame interval.
4. The intelligent neural network-based early fire identification system in the petrochemical industry is characterized in that the image preprocessing module acquires the configured related information from the system management module together with the real-time image monitored by the camera; the real-time image is preprocessed by grayscale conversion, Gaussian blur, frame difference, erosion and dilation, and foreground extraction to obtain video frame data, which is sent to the fire identification module.
5. The neural network-based early fire intelligent recognition system in the petrochemical industry according to claim 4, wherein:
gray level conversion: processing individual pixel points, making the image clearer by changing the gray-level range occupied by the original image data;
gaussian blur: for reducing image noise and reducing detail level;
frame difference: obtaining the contour of a moving target by performing differential operation on two adjacent frames in a video image sequence;
erosion and dilation: used to find the maximal and minimal regions in an image; erosion shrinks and thins the highlighted region or white part of the image, so the eroded image has a smaller highlight region than the original; dilation expands the highlighted region or white part of the image, so the dilated image has a larger highlight region than the original.
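The erosion and dilation described above are minimum and maximum filters over a sliding window; a plain-numpy sketch follows (illustrative only, not the patent's implementation — in practice a library such as OpenCV's erode/dilate would be used):

```python
import numpy as np

def erode(img, k=3):
    """Erosion: each output pixel is the MINIMUM of its k x k neighborhood,
    so white (highlight) regions shrink."""
    pad = k // 2
    padded = np.pad(img, pad, mode='constant', constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    """Dilation: each output pixel is the MAXIMUM of its k x k neighborhood,
    so white (highlight) regions grow."""
    pad = k // 2
    padded = np.pad(img, pad, mode='constant', constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

# 3x3 white blob in a 5x5 black image
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 255
print(int(erode(img).sum() // 255), int(dilate(img).sum() // 255))  # 1 25
```

Eroding the 3 × 3 blob with a 3 × 3 window leaves only its center pixel; dilating grows it to fill the whole 5 × 5 image, matching the "smaller/larger highlight region" behavior the claim describes.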
6. The intelligent neural network-based early fire identification system in the petrochemical industry according to claim 1, wherein the fire identification module uses a fire identification model based on a CNN and an inter-frame difference algorithm, and performs the following steps:
1) using the flickering characteristic of flame, find the motion region in the video picture with the inter-frame difference method, as follows: convert the RGB three-channel color image into a single-channel grayscale image; subtract the (n-1)-th frame grayscale image from the n-th frame grayscale image pixel by pixel to obtain a difference image; set the threshold to 50, setting values less than or equal to 50 in the difference image to 0 and values greater than 50 to 255 to obtain a binary image; then perform connectivity analysis to merge the many small concentrated dynamic regions into a large dynamic region.
When the large dynamic region is smaller than the threshold, it is a region with little or no change in the video picture, i.e. background; when it is greater than or equal to the threshold, its pixels change substantially and an object is moving there, i.e. it is a motion region; the suspected flame region is found by finding motion regions;
2) the region where the suspected flame is found is input into the fire recognition model to judge whether it is a fire.
7. The neural network-based early fire intelligent recognition system in the petrochemical industry according to claim 6, wherein the interframe difference method formula is as follows:
Dn(x, y) = |fn(x, y) − fn−1(x, y)|
where fn(x, y) and fn−1(x, y) are two adjacent frame images and Dn is the difference image.
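The difference image Dn and the binarization of claim 6 reduce to a few array operations; a minimal numpy sketch with a toy 4 × 4 frame pair (illustrative; the 50-level threshold is the one named in claim 6):

```python
import numpy as np

def frame_difference(f_prev, f_curr, threshold=50):
    """D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|, then binarize:
    values <= threshold become 0, values > threshold become 255."""
    diff = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)

# toy 4x4 grayscale frames: one pixel brightens by 100 levels, the rest static
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy()
f1[1, 2] = 100
mask = frame_difference(f0, f1)
print(mask[1, 2], mask[0, 0])  # 255 0
```

The cast to int16 before subtraction avoids uint8 wrap-around; the connectivity analysis of the claim would then run on `mask` (e.g. with a connected-components routine).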
9. The intelligent neural network-based early fire identification system in the petrochemical industry according to claim 6, wherein the fire identification model is a convolutional neural network, comprising:
input layer: 64 × 64 × 3 images;
first layer, convolution: 3 × 3 kernel, 32 channels, stride 1, ReLU activation, same padding;
second layer, pooling: 2 × 2 filter, max pooling with stride 2;
third layer, convolution: 3 × 3 kernel, 32 channels, stride 1, ReLU activation, same padding;
fourth layer, pooling: 2 × 2 filter, max pooling with stride 2;
fifth layer, convolution: 3 × 3 kernel, 32 channels, stride 1, ReLU activation, same padding;
sixth layer, fully connected: 512 neural units, with dropout of 0.6;
seventh layer, network output: two output nodes, fire or no fire;
except for the seventh layer, all layers use the ReLU activation function, whose expression is:
z(x)=max(0,x);
mapping the output node value to a probability space by using a Softmax activation function to form the probability of fire or no fire;
the expression of the Softmax function is:
Softmax(zj) = e^zj / Σ(k=1..K) e^zk
where zj is one of the output nodes, e^zj is its exponential, the denominator is the sum of the exponentials of all output nodes, and K is the number of output nodes, i.e. the number of classes; this yields the probability of fire or no fire;
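A minimal numpy sketch of the Softmax mapping described above (the logits are made-up values for illustration; subtracting the maximum is a standard numerical-stability trick, not something the patent specifies):

```python
import numpy as np

def softmax(z):
    # e^{z_j} / sum_k e^{z_k}; shift by max(z) for numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 0.5]))  # hypothetical logits for (fire, no fire)
print(round(float(p.sum()), 6))  # 1.0
```

The two outputs sum to one and can be read directly as the probabilities of "fire" and "no fire".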
the input of the convolution calculation of the convolutional neural network is a two-dimensional array with height and width both 3; the shape of the array is denoted 3 × 3 or (3, 3); the kernel array has height and width both 2, denoted 2 × 2 or (2, 2), and is also called the convolution kernel or filter in convolutional neural network calculation; the convolution window has the shape of the kernel, i.e. 2 × 2; the first output element is 0 × 0 + 1 × 1 + 3 × 2 + 4 × 3 = 19; the stride is the interval of each convolution slide;
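The cross-correlation arithmetic above (3 × 3 input, 2 × 2 kernel, first output element 0×0 + 1×1 + 3×2 + 4×3 = 19) can be reproduced with a short numpy sketch (illustrative, not the patent's code; the input and kernel are filled with 0..8 and 0..3 as in the worked example):

```python
import numpy as np

def corr2d(X, K):
    """Plain 2-D cross-correlation (the 'convolution' used in CNNs):
    slide K over X and take the elementwise-product sum at each position."""
    h, w = K.shape
    Y = np.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

X = np.arange(9, dtype=float).reshape(3, 3)   # the 3x3 input from the claim
K = np.arange(4, dtype=float).reshape(2, 2)   # the 2x2 kernel from the claim
Y = corr2d(X, K)
print(Y[0, 0])  # 19.0
```

With stride 1 and no padding, a 3 × 3 input and 2 × 2 kernel give a 2 × 2 output, and the top-left element is exactly the sum in the text.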
the input of the pooling calculation of the convolutional neural network is a two-dimensional array with height and width both 4; the shape of the array is denoted 4 × 4 or (4, 4); the filter window is 2 × 2; max pooling extracts the maximum value within the window; the stride is the interval of each slide;
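The max-pooling calculation on a 4 × 4 input with a 2 × 2 window and stride 2 can likewise be sketched (illustrative; the input values 0..15 are made up):

```python
import numpy as np

def max_pool2d(X, size=2, stride=2):
    """Max pooling: take the maximum inside each window of the filter."""
    out_h = (X.shape[0] - size) // stride + 1
    out_w = (X.shape[1] - size) // stride + 1
    Y = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            Y[i, j] = X[i * stride:i * stride + size,
                        j * stride:j * stride + size].max()
    return Y

X = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 input as in the claim
Y = max_pool2d(X)
print(Y.shape, Y[1, 1])  # (2, 2) 15.0
```

A 4 × 4 input with window 2 × 2 and stride 2 yields a 2 × 2 output, each element the maximum of one non-overlapping window.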
the initial weight values of each layer of the convolutional neural network are drawn at random from a Gaussian distribution with μ = 0 and std = 0.1.
10. The neural network-based early fire intelligent recognition system in the petrochemical industry according to claim 1, wherein the artificial model built in conjunction with the working experience of the personnel comprises:
1) area change model
Owing to the diffusion and spreading of flame, the area of the flame region changes; in most cases the spreading trend of the fire grows larger and larger. The area-change feature is characterized by the growth rate of the flame-region area over consecutive frames; the calculation formula is:
γ = (S(R)t − S(R)t0) / (t − t0)
where γ is the growth rate; S(R)t is the area of the flame region of interest at time t; S(R)t0 is the area of the flame region of interest at time t0; and t − t0 is the time interval;
according to the diffusing area-change characteristic of the flame, in addition to the area growth rate, the area overlap rate can also be used; its calculation formula is:
Rs = S(A∩B) / max{SA, SB}
In the formula: rs represents the overlap ratio; SA and SB are the areas of the flame regions in successive previous and subsequent frames, respectively;
2) integral moving model
The centroid of the flame region is calculated to judge the overall-movement characteristic; the calculation formula is:
xi = Σ(x,y)∈S x / NS
yi = Σ(x,y)∈S y / NS
In the formula: s represents a detected flame region of interest; NS represents the number of pixel points of the flame region of interest; (x, y) are centroid coordinates;
3) flicker feature model
The calculation formula is as follows:
e = (1/(m × n)) Σ (|xLH|² + |xHL|² + |xHH|²)
where m × n is the number of pixels of the flame region of interest and e is the spatial wavelet energy. Flames are distinguished from non-flames by the coefficient curve of the wavelet energy, since a strong energy gap exists between flames and other objects;
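A sketch of the spatial-wavelet-energy formula, taking the three high-frequency sub-bands xLH, xHL, xHH as given arrays (illustrative; real code would obtain them from a 2-D wavelet decomposition, e.g. with PyWavelets, which the patent does not specify):

```python
import numpy as np

def spatial_wavelet_energy(xLH, xHL, xHH):
    # e = (1/(m*n)) * sum(|xLH|^2 + |xHL|^2 + |xHH|^2)
    m, n = xLH.shape
    return float((np.abs(xLH) ** 2 + np.abs(xHL) ** 2
                  + np.abs(xHH) ** 2).sum() / (m * n))

# toy 2x2 sub-bands of all ones: each pixel contributes 1+1+1 = 3
ones = np.ones((2, 2))
print(spatial_wavelet_energy(ones, ones, ones))  # 3.0
```

A flickering flame produces much higher sub-band energy than a static bright object, which is the "energy gap" the claim relies on.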
4) intersection-proportion model
The intersection-over-union calculation formula is:
IoU = area(C ∩ G) / area(C ∪ G)
where IoU is the intersection over union, area(C) is the generated candidate box, and area(G) is the originally marked (ground-truth) candidate box.
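The IoU of claim 10 can be computed for axis-aligned boxes as follows (an illustrative sketch; the corner-coordinate box format and the example boxes are assumptions, not from the patent):

```python
def iou(box_c, box_g):
    """IoU = area(C intersect G) / area(C union G),
    boxes given as (x1, y1, x2, y2) corner coordinates."""
    xA = max(box_c[0], box_g[0])
    yA = max(box_c[1], box_g[1])
    xB = min(box_c[2], box_g[2])
    yB = min(box_c[3], box_g[3])
    inter = max(0.0, xB - xA) * max(0.0, yB - yA)
    area_c = (box_c[2] - box_c[0]) * (box_c[3] - box_c[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    return inter / (area_c + area_g - inter)

# two 2x2 boxes overlapping in a 1x1 square: IoU = 1 / (4 + 4 - 1) = 1/7
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))  # 0.1429
```

The union is computed as area(C) + area(G) − area(C ∩ G), so disjoint boxes give 0 and identical boxes give 1.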
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111285250.0A CN114429597A (en) | 2021-11-01 | 2021-11-01 | Initial fire intelligent recognition system in petrochemical industry based on neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114429597A true CN114429597A (en) | 2022-05-03 |
Family
ID=81310604
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111285250.0A Pending CN114429597A (en) | 2021-11-01 | 2021-11-01 | Initial fire intelligent recognition system in petrochemical industry based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114429597A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115410328A (en) * | 2022-10-31 | 2022-11-29 | 北京中海兴达建设有限公司 | Fire early warning method, device and equipment for construction site and readable storage medium |
CN117935166A (en) * | 2024-01-31 | 2024-04-26 | 中煤科工集团重庆研究院有限公司 | Intelligent fire monitoring method and system for coal mine goaf |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11468660B2 (en) | Pixel-level based micro-feature extraction | |
Tudor Ionescu et al. | Unmasking the abnormal events in video | |
CN107016357B (en) | Video pedestrian detection method based on time domain convolutional neural network | |
US20180082130A1 (en) | Foreground detector for video analytics system | |
CN103871029B (en) | A kind of image enhaucament and dividing method | |
US8705861B2 (en) | Context processor for video analysis system | |
CN101493980B (en) | Rapid video flame detection method based on multi-characteristic fusion | |
CN102201146B (en) | Active infrared video based fire smoke detection method in zero-illumination environment | |
CN103942557B (en) | A kind of underground coal mine image pre-processing method | |
CN109376747A (en) | A kind of video flame detecting method based on double-current convolutional neural networks | |
CN113537099B (en) | Dynamic detection method for fire smoke in highway tunnel | |
CN113449660B (en) | Abnormal event detection method of space-time variation self-coding network based on self-attention enhancement | |
US20070291991A1 (en) | Unusual action detector and abnormal action detecting method | |
KR101414670B1 (en) | Object tracking method in thermal image using online random forest and particle filter | |
WO2011022273A2 (en) | Field-of-view change detection | |
CN109902612B (en) | Monitoring video abnormity detection method based on unsupervised learning | |
CN114429597A (en) | Initial fire intelligent recognition system in petrochemical industry based on neural network | |
CN110874592A (en) | Forest fire smoke image detection method based on total bounded variation | |
CN107491749A (en) | Global and local anomaly detection method in a kind of crowd's scene | |
CN106056139A (en) | Forest fire smoke/fog detection method based on image segmentation | |
CN105046218A (en) | Multi-feature traffic video smoke detection method based on serial parallel processing | |
CN106570490A (en) | Pedestrian real-time tracking method based on fast clustering | |
CA3196344A1 (en) | Rail feature identification system | |
CN112270381A (en) | People flow detection method based on deep learning | |
CN113569756A (en) | Abnormal behavior detection and positioning method, system, terminal equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||