CN117911932B - Fire disaster intelligent detection method and system based on video detection - Google Patents


Info

Publication number
CN117911932B
Authority
CN
China
Prior art keywords
image
representing
fire
frame
fire source
Prior art date
Legal status
Active
Application number
CN202410318330.9A
Other languages
Chinese (zh)
Other versions
CN117911932A (en)
Inventor
张欣
章俊屾
吴珂
陈扩
李婕
何浩
寇正午
Current Assignee
Zhejiang University Qizhen Future City Technology Hangzhou Co ltd
Xian Shiyou University
Original Assignee
Zhejiang University Qizhen Future City Technology Hangzhou Co ltd
Xian Shiyou University
Priority date
Filing date
Publication date
Application filed by Zhejiang University Qizhen Future City Technology Hangzhou Co ltd and Xian Shiyou University
Priority to CN202410318330.9A
Publication of CN117911932A
Application granted
Publication of CN117911932B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/30: Noise filtering
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention belongs to the technical field of fire early warning and provides an intelligent fire detection method and system based on video detection. The method comprises the following steps: acquiring first video data of a detection area; screening the first video data once with an information entropy algorithm to obtain a primary screening video segment with fire characteristics; preprocessing each frame of image in the primary screening video segment, the preprocessing comprising image denoising and edge detection; performing a secondary screening based on fire source shape characteristics on each frame of image in the preprocessed primary screening video segment to obtain the fire source position; collecting second video data of the fire source position within a preset duration and extracting the fire source position of each frame of image in the second video data; calculating dynamic motion vectors of the extracted fire source positions; determining the spreading direction of the fire source from the dynamic motion vectors; and reporting the fire source position and the fire source spreading direction. The invention improves fire detection efficiency and accuracy.

Description

Fire disaster intelligent detection method and system based on video detection
Technical Field
The invention belongs to the technical field of fire early warning, and particularly relates to an intelligent fire detection method and system based on video detection.
Background
Fire detection uses advanced technical means, including sensors, monitoring equipment, computer vision and intelligent algorithms, to detect and identify the signs of a fire, raise an alarm in time, and take measures to reduce the hazard of the fire. A fire detection system can discover the fire source or the signs of a fire at its initial stage, which helps timely measures to be taken before the fire spreads and improves the fire extinguishing effect. When the system detects a fire, it can automatically trigger the alarm system to emit sound or light signals and simultaneously notify relevant personnel or emergency services so that the fire can be dealt with in time. A fire detection system is usually combined with a monitoring system so that conditions inside a building can be observed in real time, which helps building managers better understand fire risks, take preventive measures and improve building safety. Through early discovery and timely response, a fire detection system helps reduce casualties and property loss and improves the efficiency of responding to fire accidents.
In the prior art, fire detection is generally performed by arranging multiple smoke alarms or temperature sensors within the detection range. However, the detection efficiency of this approach is limited, false alarms can be triggered by dust, steam and other factors, and the fire source cannot be accurately located when a fire occurs, which increases the operational difficulty for firefighters.
Disclosure of Invention
In order to solve the technical problems of the prior art, namely that detection efficiency is limited, false alarms may be generated under the influence of dust, steam and other factors, and the fire source position cannot be accurately located when a fire occurs, thereby increasing the operational difficulty for firefighters, the invention provides an intelligent fire detection method and system based on video detection.
First aspect:
The invention provides an intelligent fire detection method based on video detection, comprising the following steps:
S101: acquiring first video data of a detection area;
S102: screening the first video data once with an information entropy algorithm to obtain a primary screening video segment with fire characteristics;
S103: preprocessing each frame of image in the primary screening video segment, wherein the preprocessing comprises image denoising and edge detection;
S104: performing a secondary screening based on fire source shape characteristics on each frame of image in the preprocessed primary screening video segment to obtain the fire source position;
S105: collecting second video data of the fire source position within a preset duration, and extracting the fire source position of each frame of image in the second video data;
S106: calculating the dynamic motion vectors of the extracted fire source positions;
S107: determining the fire source spreading direction according to the dynamic motion vectors;
S108: reporting the fire source position and the fire source spreading direction.
Further, the step S102 specifically includes:
S1021: calculating a difference image between a kth frame image and a kth-1 frame image in the first video data, and a kth-1 frame image and a kth-2 frame image:
Wherein, Representing the current frame image, i.e. the kth frame imageThe k-1 frame image which is the previous frame image to the current frame imageA first differential image between the first and second images,The previous frame image representing the current frame image, i.e., the kth-1 frame image and the kth-2 frame imageA second differential image between the first and second images,Representing pixel point space coordinates of each frame of image;
S1022: multiplying the first differential image and the second differential image to obtain a continuous differential image
S1023: calculating the information entropy H of the continuous differential image:
Wherein, Representing the probability of occurrence of a pixel value I in said successive difference images,Representing a logarithmic operation;
S1024: extracting an image with information entropy larger than 0 to obtain the primary screening video segment
Wherein,Representing an image with an information entropy greater than 0.
Further, the step S103 specifically includes:
S1031: image denoising is carried out on each frame of image through a Gaussian filtering algorithm, and high-frequency noise in the image is eliminated:
Wherein, Representing the pixel value of the denoised image at coordinates (x, y), k representing the filter radius, i and j representing the horizontal offset distance of coordinate x and the vertical offset distance of coordinate y respectively,Represents the standard deviation of the gaussian distribution, e represents the natural constant,Representing the coordinates of each frame of the input imagePixel values at;
s1032: and carrying out edge detection on the denoised image by utilizing a Sobel operator:
Wherein, Represents the horizontal gradient of the denoised image,Representing the vertical gradient of the denoised image,Representing edge strength;
S1033: and reserving pixel points with the intensity larger than the preset edge in the denoised image, obtaining the edge of the denoised image, and finishing the pretreatment of each frame of image in the primary screening video segment.
Further, the step S104 specifically includes:
S1041: extracting the image contour of each frame of image;
S1042: calculating the area $a$, perimeter $d$ and aspect ratio $b$ of the image contour:

$a = \iint_{D} d\sigma, \qquad d = \oint_{C} ds, \qquad b = \frac{W}{H}$

wherein $D$ represents the area enclosed by the image contour, $d\sigma$ the infinitesimal area element, $C$ the image contour line, $ds$ the infinitesimal arc length, $W$ the image contour width, and $H$ the image contour height;
S1043: calculating a comprehensive judgment value of the image contour from these fire source shape features;
S1044: calculating a shape descriptor of the image contour based on the Zernike algorithm:

$Z_{nm} = \frac{n+1}{\pi} \sum_{(\rho,\theta) \in C} f(\rho,\theta)\, V_{nm}^{*}(\rho,\theta)$

wherein $Z_{nm}$ represents the Zernike moment, i.e. the shape descriptor, $V_{nm}$ a Zernike polynomial, $\rho$ and $\theta$ the radial distance and angle of the image contour in a polar coordinate system, $m$ and $n$ the angular and radial orders of the Zernike polynomial respectively, and $C$ the image contour;
S1045: determining the image contour as the fire source position when the comprehensive judgment value is within a preset judgment value range or the shape descriptor matches a preset shape descriptor.
Further, the step S106 specifically includes:
s1061: continuous differential processing based on image acquisition time is carried out on the adjacent frame images including the fire source position, and differential images between the adjacent frame images are obtained:
Wherein, Representing a differential image between adjacent frame images,AndRespectively representing images obtained by sampling at a t sampling time and a t+1 sampling time;
S1062: calculating an optical flow field including a motion vector of each pixel point using a rukas-kanadata algorithm based on a differential image between adjacent frame images
Wherein,Representing the motion vectors of the optical flow field in the horizontal and vertical directions respectively,AndRepresenting the gradient of the differential image between the adjacent frame images in the horizontal direction and the gradient in the vertical direction respectively,Representing a gradient matrix comprising horizontal gradients and vertical gradients, G representing the image gray values.
Further, the step S107 specifically includes:
s1071: performing k-means clustering on each pixel point motion vector in the optical flow field to obtain a plurality of motion areas;
S1072: and analyzing the main components of each movement area to generate the fire spreading direction.
Further, the S1071 specifically includes:
S1071A: initializing a preset number of clustering centers;
S1071B: calculating Euclidean distance between the pixel point motion vector and each clustering center, and distributing the pixel point motion vector to the adjacent clustering center according to the Euclidean distance:
Wherein the symbol is' "Means that the euclidean distance is calculated,Representing the p-th motion vectorThe cluster to which the cluster belongs is selected,The center of the q-th cluster is indicated,A q value representing when the squared euclidean distance value is selected to be minimum;
S1071C: updating the cluster center to be the mean value of the motion vectors belonging to the cluster:
Wherein, Representing the q-th cluster after the update,Representing the updated cluster center;
S1071D: the iteration number is increased by one, whether the iteration number reaches the preset iteration number is judged, if not, S1071E is carried out, and if not, S1071F is carried out;
S1071E: repeating S1071B-S1071D;
S1071F: each cluster is taken as a motion area.
Further, the step S1072 specifically includes:
S1072A: calculating a motion vector mean value contained in each motion region:
wherein n represents the number of motion vectors in the motion region, and mean represents the motion vector mean;
S1072B: and calculating covariance matrixes in each motion area by combining the motion vector mean values:
Wherein, Representing the covariance matrix;
S1072C: solving eigenvalues and eigenvectors of the covariance matrix;
S1072D: and taking the characteristic vector corresponding to the maximum characteristic value as the main component direction, namely the fire source spreading direction.
Further, after S107, the method further includes:
S107A: and visualizing the position of the fire source and the spreading direction of the fire source.
Second aspect:
The invention provides an intelligent fire detection system based on video detection, comprising a processor and a memory storing instructions executable by the processor; the processor is configured to invoke the instructions stored in the memory to perform the video-detection-based intelligent fire detection method of the first aspect.
Compared with the prior art, the invention has at least the following beneficial technical effects:
In the invention, the video data of the detection area are screened multiple times through a multi-layer detection mechanism. The low-complexity information entropy is selected in the primary screening stage to filter the video stream accurately and efficiently: while abnormal video streams are retained, the huge resource consumption of continuously processing a large amount of video data is avoided, a large amount of irrelevant video data is eliminated, and the computational burden and resource consumption of the video detection system are reduced. This in turn lowers the hardware requirements and application cost of video detection and broadens its application scenarios. More computing resources are devoted to the secondary screening stage for fine-grained detection based on fire source shape characteristics. The fire source spreading direction is determined by analyzing the dynamic motion vectors of the extracted video data, providing an important data reference for firefighters' decisions, reducing fire extinguishing difficulty, increasing fire extinguishing speed and protecting property. The fire early-warning speed is thus effectively improved while the false alarm rate is reduced.
Drawings
The above features, advantages and implementations of the present invention will be further described below in a clear and easily understandable manner through preferred embodiments with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a fire intelligent detection method based on video detection.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will explain the specific embodiments of the present invention with reference to the accompanying drawings. It is evident that the drawings in the following description are only examples of the invention, from which other drawings and other embodiments can be obtained by a person skilled in the art without inventive effort.
Example 1
In one embodiment, referring to fig. 1 of the specification, a schematic flow chart of a fire intelligent detection method based on video detection provided by the invention is shown.
The invention provides a fire disaster intelligent detection method based on video detection, which comprises the following steps:
s101: first video data of a detection area is acquired.
S102: and screening the first video data once by combining an information entropy algorithm to obtain a primary screening video segment with fire characteristics.
The information entropy algorithm is relatively simple, so the computational complexity of the screening stage is low, which helps the primary screening of video data in applications with high real-time requirements. The information entropy algorithm is also sensitive to motion or changes in the captured image. Using it for preliminary screening therefore effectively extracts the video segments with fire characteristics while keeping the computational complexity of video detection relatively low, provides targeted data for subsequent fine-grained detection, reduces the consumption of data processing resources, leaves more computing resources for the secondary screening, and improves the overall detection speed.
In one possible implementation, S102 specifically includes:
s1021: calculating a difference image between a kth frame image and a kth-1 frame image in the first video data, and between the kth-1 frame image and the kth-2 frame image:
Wherein, Representing the current frame image, i.e. the kth frame imageThe k-1 frame image which is the previous frame image to the current frame imageA first differential image between the first and second images,The previous frame image representing the current frame image, i.e., the kth-1 frame image and the kth-2 frame imageA second differential image between the first and second images,Representing the pixel point spatial coordinates of each frame of image.
S1022: multiplying the first differential image and the second differential image to obtain continuous differential images
It should be noted that the element-wise multiplication helps to retain only the pixels that are active at the same position in both frame pairs, so the continuous differential image corresponds to the positions of a moving object detected in both pairs of adjacent frames. This operation exploits the differential information between frames while considering the motion of both adjacent frame pairs, which reduces the holes and ghosting produced by differencing and improves the capture of dynamic characteristics.
S1023: calculating the information entropy H of the continuous differential image:
Wherein, Representing pixel values in successive differential imagesIs used for the detection of the probability of occurrence of (1),Representing a logarithmic operation.
S1024: extracting an image with information entropy larger than 0 to obtain a primary screening video segment
Wherein,Representing an image with an information entropy greater than 0.
It should be noted that, through element-wise multiplication, the continuous differential image highlights the outline of a moving object, so the static background is effectively filtered out in the primary screening and detection becomes more targeted. Element-wise multiplication also helps to preserve pixels active at the same position in adjacent frames, reducing holes and ghosting in the differential image and improving the capture of dynamic characteristics. The information entropy algorithm can adapt to changes in image content, has a certain adaptability to different scenes and different types of fire sources, improves the universality of the method, and preserves video segments with even slight abnormalities, ensuring the effectiveness of the method.
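To make the primary screening concrete, the following sketch (Python with NumPy and OpenCV, both assumed available; the function names are our own, not from the patent) computes the continuous differential image of three consecutive gray frames, its information entropy, and retains a frame only when the entropy is positive:

```python
import cv2
import numpy as np

def continuous_difference(f_km2, f_km1, f_k):
    """Element-wise product of two successive absolute frame differences."""
    d1 = cv2.absdiff(f_k, f_km1).astype(np.float32)    # |f_k - f_{k-1}|
    d2 = cv2.absdiff(f_km1, f_km2).astype(np.float32)  # |f_{k-1} - f_{k-2}|
    return d1 * d2

def information_entropy(image):
    """Shannon entropy of the image's pixel-value distribution."""
    hist, _ = np.histogram(image, bins=256)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop zero-probability bins
    return float(-(p * np.log2(p)).sum())

def primary_screen(frames):
    """Keep frames whose continuous differential image has entropy > 0."""
    kept = []
    for k in range(2, len(frames)):
        d = continuous_difference(frames[k - 2], frames[k - 1], frames[k])
        if information_entropy(d) > 0:    # a static scene gives exactly 0
            kept.append(frames[k])
    return kept
```

On a fully static scene the differential image is constant, the histogram collapses into a single bin and the entropy is exactly 0, so such frames are discarded cheaply before any heavier processing.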
S103: and preprocessing each frame of image in the primary screening video segment, wherein the preprocessing comprises image denoising and edge detection.
Image denoising is a technique for reducing or removing unwanted noise (random interference) in an image. In intelligent fire detection, video data may be affected by ambient light, sensor noise and other factors that introduce noise into the image; the purpose of denoising is to remove this noise, make the image clearer, improve the accuracy of subsequent processing, smooth the image and suppress high-frequency noise. Edge detection is an image processing technique for detecting the regions of significant gray-level change in an image, i.e. its edges. In fire detection, edge detection helps to extract the features by which the fire source or smoke differs strongly from the surrounding environment and to identify the positions of large gray-level change, i.e. the edges, for subsequent analysis. Each frame of image in the primary screening video segment is optimized by denoising and edge detection: denoising improves image quality and reduces interference, making subsequent analysis more stable, while edge detection captures the salient features of the image and provides more definite information for subsequent shape analysis and fire source localization, improving the accuracy and robustness of the fire detection system.
In one possible implementation, S103 specifically includes:
S1031: image denoising is carried out on each frame of image through a Gaussian filtering algorithm, and high-frequency noise in the image is eliminated:
Wherein, Representing the pixel value of the denoised image at coordinates (x, y), k representing the filter radius, i and j representing the horizontal offset distance of coordinate x and the vertical offset distance of coordinate y respectively,Represents the standard deviation of the gaussian distribution, e represents the natural constant,Representing the coordinates of each frame of the input imagePixel values at;
s1032: and carrying out edge detection on the denoised image by utilizing a Sobel operator:
Wherein, Represents the horizontal gradient of the denoised image,Representing the vertical gradient of the denoised image,Representing edge strength;
S1033: and reserving pixel points with the intensity larger than the preset edge in the denoised image, obtaining the edge of the denoised image, and finishing the pretreatment of each frame of image in the primary screening video segment.
It should be noted that the preset edge-strength threshold can be set by those skilled in the art according to actual needs, and the invention is not limited herein.
Specifically, denoising each frame of image with a Gaussian filtering algorithm eliminates the high-frequency noise in the image, that is, the fine variations that do not actually contribute to overall image quality; Gaussian filtering blurs the noise by smoothing the image, improving image quality. Edge detection is then performed on the denoised image with the Sobel operator, an operator commonly used for image edge detection that highlights edges by computing the horizontal and vertical gradients at each pixel. The system thus first denoises the image with Gaussian filtering, then detects edges with the Sobel operator, and finally retains only the important edge information by applying an edge-strength threshold. This series of operations improves image quality and makes the subsequent fire source shape analysis and secondary screening more accurate and reliable.
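A minimal version of this preprocessing chain, assuming OpenCV is used (the kernel size, sigma and edge-strength threshold below are illustrative values, not specified by the patent), might read:

```python
import cv2
import numpy as np

def preprocess_frame(frame, sigma=1.0, edge_threshold=100.0):
    """Denoise a frame with a Gaussian filter, then keep strong Sobel edges."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, ksize=(5, 5), sigmaX=sigma)
    gx = cv2.Sobel(denoised, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(denoised, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
    strength = np.sqrt(gx ** 2 + gy ** 2)                # edge strength G
    edges = (strength > edge_threshold).astype(np.uint8) * 255
    return denoised, edges
```

The thresholded binary edge map is what the secondary, shape-based screening then operates on.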
S104: and carrying out secondary screening based on the shape characteristics of the fire source on each frame of image in the pre-treated primary screening video fragment to obtain the position of the fire source.
It should be noted that a fire source often has specific shape characteristics; for example, the shape of a flame is relatively distinctive, so screening based on shape characteristics adapts better to the shapes of different types of fire sources and improves the versatility of the detection system. The shape of an object can be described discriminatively with shape features such as area, perimeter and aspect ratio. Compared with relying solely on pixel intensity or color information, shape features provide more definite geometric properties of the object, which helps reduce the false positive rate. Using shape features also reduces sensitivity to environmental factors: compared with methods that may be strongly affected by illumination, shadow and the like, screening based on shape features is more robust. By comprehensively judging the shape features and calculating shape descriptors, the fire source position can be located accurately; this considers both the overall characteristics of the shape and its detailed description, improving detection accuracy. Shape features also make the fire source detection system more adaptive to the various shapes and variations that may exist in different scenes, helping the system maintain good performance in changing environments. The secondary screening of the fire source position is realized by extracting, calculating and judging the shape features of each frame of image in the primary screening video segment; this shape-based screening helps locate the fire source accurately and improves the precision and reliability of the fire detection system.
In one possible implementation, S104 specifically includes:
S1041: extracting the image contour of each frame of image;
S1042: calculating the area $a$, perimeter $d$ and aspect ratio $b$ of the image contour:

$a = \iint_{D} d\sigma, \qquad d = \oint_{C} ds, \qquad b = \frac{W}{H}$

wherein $D$ represents the area enclosed by the image contour, $d\sigma$ the infinitesimal area element, $C$ the image contour line, $ds$ the infinitesimal arc length, $W$ the image contour width, and $H$ the image contour height;
S1043: calculating a comprehensive judgment value of the image contour from these fire source shape features;
S1044: calculating a shape descriptor of the image contour based on the Zernike algorithm:

$Z_{nm} = \frac{n+1}{\pi} \sum_{(\rho,\theta) \in C} f(\rho,\theta)\, V_{nm}^{*}(\rho,\theta)$

wherein $Z_{nm}$ represents the Zernike moment, i.e. the shape descriptor, $V_{nm}$ a Zernike polynomial, $\rho$ and $\theta$ the radial distance and angle of the image contour in a polar coordinate system, $m$ and $n$ the angular and radial orders of the Zernike polynomial respectively, and $C$ the image contour;
S1045: determining the image contour as the fire source position when the comprehensive judgment value is within a preset judgment value range or the shape descriptor matches a preset shape descriptor.
Specifically, introducing a comprehensive judgment value that jointly considers the different features improves the overall understanding of the fire source shape, reduces the false detection rate and improves detection reliability. The Zernike algorithm is adopted to calculate the shape descriptor; this algorithm is very effective at extracting the shape information of an image contour and allows the system to capture the detailed characteristics of the target shape. Setting a preset judgment value range makes the method adjustable, so the sensitivity of the system can be flexibly tuned to the actual scene and requirements, further reducing the false detection rate. Compared with the conventional art, this embodiment analyzes the fire source shape characteristics more comprehensively and accurately, improving the performance and reliability of the fire detection system. Specifically, the preset judgment value range and the preset shape descriptor come from experimental analysis of fire source samples in the prior art; fire laboratories have conventional shape analyses and preset judgment value ranges for fire sources, so the system can accurately detect fire sources and effectively exclude non-fire sources.
In one possible embodiment, the preset judgment value range and the format of the preset shape descriptor may be set by a person skilled in the art according to actual needs, and the invention is not limited herein.
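The contour-feature step could be sketched as follows with OpenCV; since the patent does not disclose the exact comprehensive judgment formula or thresholds, the composite check below (a circularity measure plus bounds on area and aspect ratio) is a hypothetical stand-in:

```python
import cv2
import numpy as np

def contour_shape_features(edge_image):
    """Area, perimeter and aspect ratio for each contour in a binary edge image."""
    contours, _ = cv2.findContours(edge_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)            # a
        perimeter = cv2.arcLength(c, True)   # d (closed contour)
        _, _, w, h = cv2.boundingRect(c)
        aspect = w / h if h else 0.0         # b = W / H
        features.append((c, area, perimeter, aspect))
    return features

def is_fire_candidate(area, perimeter, aspect):
    """Hypothetical composite judgment; the patent's actual judgment value
    and preset ranges are not disclosed, so these bounds are placeholders."""
    if perimeter == 0:
        return False
    circularity = 4.0 * np.pi * area / perimeter ** 2  # one common composite
    return area > 50 and 0.1 < circularity < 0.9 and 0.3 < aspect < 3.0
```

In practice the placeholder bounds would be replaced with the judgment value range calibrated from fire source samples, as the patent describes.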
S105: and acquiring second video data of the fire source position within a preset time period, and extracting the fire source position of each frame of image in the second video data.
It can be understood that the extraction mode of the fire source position of each frame of image in the second video data is the same as that of S104, and the acquisition of the second video data within the preset time length is helpful to ensure the real-time performance of the data, rather than just relying on the historical acquisition data, so as to improve the accuracy of fire detection.
S106: and calculating the dynamic motion vector of the extracted fire source position.
In one possible implementation, S106 specifically includes:
S1061: continuous differential processing based on image acquisition time is carried out on the adjacent frame images including the fire source position, and differential images between the adjacent frame images are obtained:
Wherein, Representing a differential image between adjacent frame images,AndRespectively representing images obtained by sampling at a t sampling time and a t+1 sampling time;
S1062: calculating an optical flow field including a motion vector of each pixel point using a rukas-kanadata algorithm based on a differential image between adjacent frame images
Wherein,Representing the motion vectors of the optical flow field in the horizontal and vertical directions respectively,AndRepresenting the gradient of the differential image between the adjacent frame images in the horizontal direction and the gradient in the vertical direction respectively,Representing a gradient matrix comprising horizontal gradients and vertical gradients, G representing the image gray values.
The adjacent frame images containing the fire source position are differenced consecutively: adjacent images are compared to compute the difference between them and thus capture the movement of the target (the fire source). The Lucas-Kanade algorithm then computes the motion vector of each pixel from the differential image, forming the optical flow field. The optical flow field represents the direction and speed of movement of each pixel in the image, so computing it yields the dynamic movement information of the extracted fire source position between adjacent frames. This realizes a quantitative analysis of the dynamic movement of the fire source position; the optical flow field helps in understanding the trajectory and speed of the fire source in the image and provides the data basis for further analysis and judgment of the fire source spreading direction.
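For illustration, a dense Lucas-Kanade step can be written directly from the normal equations above; the sketch below (NumPy and SciPy assumed available; the 5-pixel window is an assumed choice) solves the windowed 2x2 least-squares system at every pixel:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dense_lucas_kanade(f0, f1, win=5):
    """Per-pixel motion (u, v) between gray frames f0 and f1 (float arrays),
    solving the windowed Lucas-Kanade normal equations at every pixel."""
    ix = np.gradient(f0, axis=1)          # horizontal gradient G_x
    iy = np.gradient(f0, axis=0)          # vertical gradient G_y
    it = f1 - f0                          # temporal change of gray value
    # windowed sums of the structure-tensor and mixed terms
    ixx = uniform_filter(ix * ix, win)
    iyy = uniform_filter(iy * iy, win)
    ixy = uniform_filter(ix * iy, win)
    ixt = uniform_filter(ix * it, win)
    iyt = uniform_filter(iy * it, win)
    det = ixx * iyy - ixy ** 2
    det = np.where(np.abs(det) < 1e-6, np.nan, det)  # guard singular windows
    u = (-iyy * ixt + ixy * iyt) / det    # closed-form 2x2 inverse
    v = (ixy * ixt - ixx * iyt) / det
    return np.nan_to_num(u), np.nan_to_num(v)
```

Each pixel's (u, v) is the closed-form solution of the 2x2 system accumulated over its window; windows with a near-singular structure tensor are zeroed rather than solved.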
S107: and determining the fire source spreading direction according to the dynamic motion vector.
It should be noted that, in the fire extinguishing process, knowing the direction of fire source spreading can help firefighters to formulate a more effective fire extinguishing strategy, and by predicting the direction of fire source spreading, fire extinguishing points can be selected and fireproof barriers can be set in a targeted manner so as to suppress fire spreading to the greatest extent. Depending on the direction of propagation, a relatively safe area may also be set, providing a safe working area for firefighters, which helps to avoid accidental injury and risk of personnel getting trapped during fire fighting actions. In general, determining the direction of fire source spread helps to improve the efficiency and safety of fire extinguishing actions, so that firefighters can better cope with the development trend of fire, and more scientific and reasonable measures are taken to reduce disaster effects.
In one possible implementation, S107 specifically includes:
S1071: and performing k-means clustering on the motion vectors of all the pixel points in the optical flow field to obtain a plurality of motion areas.
In one possible embodiment, S1071 specifically includes:
S1071A: initializing a preset number of clustering centers;
S1071B: calculating Euclidean distance between the pixel point motion vector and each cluster center, and distributing the pixel point motion vector to the adjacent cluster center according to the Euclidean distance:
Wherein the symbol is' "Means that the euclidean distance is calculated,Representing the p-th motion vectorThe cluster to which the cluster belongs is selected,The center of the q-th cluster is indicated,A q value representing when the squared euclidean distance value is selected to be minimum;
S1071C: updating the cluster center to be the mean value of the motion vectors belonging to the cluster:
Wherein, Representing the q-th cluster after the update,Representing the updated cluster center;
S1071D: the iteration number is increased by one, whether the iteration number reaches the preset iteration number is judged, if not, S1071E is carried out, and if not, S1071F is carried out;
S1071E: repeating S1071B-S1071D;
S1071F: each cluster is taken as a motion area.
k-means clustering is a common unsupervised learning algorithm that divides the samples in a data set into k mutually disjoint clusters: samples are assigned to the nearest cluster iteratively and the cluster positions are updated by recomputing the cluster centers, the goal being to minimize the sum of squared distances between each sample and the center of its cluster. Through k-means clustering, the pixels with similar motion vectors are grouped into the same cluster, forming several motion regions. Each motion region represents one moving object or motion area in the video and can be regarded as a set of pixels with a similar motion trend. This gives a better understanding of the motion trends of the different regions in the video, facilitates the analysis of dynamic objects, and provides strong support for determining the fire spreading direction. Overall, k-means clustering groups the motion vectors so that the motion trends of the different moving objects in the video can be better understood.
In one possible implementation, the preset number of iterations can be set by a person skilled in the art according to actual needs, and the invention is not limited herein.
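A plain NumPy loop mirroring S1071A-S1071F might look like this (k, the iteration count and the random seeding are illustrative assumptions; production code could equally use an existing k-means implementation):

```python
import numpy as np

def kmeans_motion_regions(vectors, k=3, iterations=10, seed=0):
    """Group 2-D motion vectors (shape (N, 2)) into k motion regions."""
    rng = np.random.default_rng(seed)
    # S1071A: initialize k cluster centers from random distinct vectors
    centers = vectors[rng.choice(len(vectors), size=k, replace=False)].astype(float)
    for _ in range(iterations):                       # S1071D/S1071E loop
        # S1071B: assign each vector to the nearest center (squared Euclidean)
        d2 = ((vectors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # S1071C: move each center to the mean of its assigned vectors
        for q in range(k):
            members = vectors[labels == q]
            if len(members):
                centers[q] = members.mean(axis=0)
    return labels, centers                            # S1071F: clusters = regions
```

Each returned label partitions the optical flow field into one motion region, which is then fed to the principal component analysis of S1072.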
S1072: and analyzing the main components of each movement area to generate the fire spreading direction.
In one possible embodiment, S1072 specifically includes:
S1072A: calculating a motion vector mean value contained in each motion region:
Wherein n represents the number of motion vectors in the motion region, and mean represents the motion vector mean;
S1072B: combining the motion vector mean value, calculating a covariance matrix in each motion area:
Wherein, Representing a covariance matrix;
S1072C: solving eigenvalues and eigenvectors of the covariance matrix;
S1072D: and taking the characteristic vector corresponding to the maximum characteristic value as the main component direction, namely the fire source spreading direction.
Principal component analysis is a commonly used dimensionality-reduction technique. It projects the original data onto a set of new coordinate axes through a linear transformation so as to preserve the variance of the data to the greatest extent; the new axes are called principal components and are ordered by variance, so selecting the first few principal components reduces the dimensionality of the data while preserving as much of the original information as possible. Through principal component analysis, the original motion vector data can be reduced to a principal direction, lowering redundancy. Selecting the eigenvector corresponding to the largest eigenvalue of the covariance matrix is equivalent to selecting the direction of greatest data variation, so the main motion trend information is kept. Taking this eigenvector as the principal component direction, i.e. the fire spreading direction, helps in understanding the main motion trend of a motion region and provides a useful quantitative indicator of the fire spreading direction: the main information is kept and the dominant motion trend of the motion region is extracted while the data dimensionality is reduced, providing an effective mathematical means of generating the fire spreading direction. In practical applications, a motion region can also be analyzed for several principal components, extracting several high-probability spreading directions and further ensuring that the spread of the fire can be blocked.
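Condensed to code, the principal-component step of S1072A-S1072D is a few lines of NumPy (the final sign flip, which orients the axis along the region's mean motion, is our own addition, since an eigenvector's sign is arbitrary):

```python
import numpy as np

def spread_direction(vectors):
    """Dominant direction of a motion region's vectors (shape (N, 2)) via the
    eigendecomposition of their covariance matrix."""
    mean = vectors.mean(axis=0)                 # S1072A: motion vector mean
    centered = vectors - mean
    cov = centered.T @ centered / len(vectors)  # S1072B: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # S1072C: symmetric eigensolve
    direction = eigvecs[:, eigvals.argmax()]    # S1072D: largest eigenvalue
    if np.dot(direction, mean) < 0:             # orient along the mean motion
        direction = -direction
    return direction
```

Because the covariance matrix is symmetric, `np.linalg.eigh` is the appropriate solver, and its last column (largest eigenvalue) is the principal axis of the region's motion.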
In one possible implementation, after S107, the method further includes:
S107A: the position of the fire source and the spreading direction of the fire source are visualized.
Specifically, the visualization may be a fire source position marker or an arrow marking the fire source spreading direction; visualizing the fire source position and spreading direction presents the extracted key information in graphical form so that a user can understand and analyze it more intuitively.
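A simple overlay along these lines, assuming OpenCV (the colors and the 60-pixel arrow length are arbitrary illustration choices):

```python
import cv2
import numpy as np

def draw_fire_overlay(frame, contour, origin, direction, scale=60):
    """Mark the fire source contour and draw an arrow along the estimated
    spreading direction; scale is the arrow length in pixels."""
    out = frame.copy()
    cv2.drawContours(out, [contour], -1, (0, 0, 255), 2)   # fire source outline
    tip = (int(origin[0] + scale * direction[0]),
           int(origin[1] + scale * direction[1]))
    cv2.arrowedLine(out, (int(origin[0]), int(origin[1])), tip,
                    (0, 255, 255), 2, tipLength=0.3)        # spread direction
    return out
```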
S108: reporting the position and the spreading direction of the fire source.
It can be appreciated that the detection system automatically transmits the information about the fire source and its spread to relevant decision makers, emergency response personnel or other systems, promoting timely fire extinguishing and rescue actions; fire detection is thus completed automatically and intelligently, improving fire detection efficiency and accuracy.
Compared with the prior art, the invention has at least the following beneficial technical effects:
In the invention, the video data of the detection area are screened multiple times through a multi-layer detection mechanism. The low-complexity information entropy is selected in the primary screening stage to filter the video stream accurately and efficiently: while abnormal video streams are retained, the huge resource consumption of continuously processing a large amount of video data is avoided, a large amount of irrelevant video data is eliminated, and the computational burden and resource consumption of the video detection system are reduced. This in turn lowers the hardware requirements and application cost of video detection and broadens its application scenarios. More computing resources are devoted to the secondary screening stage for fine-grained detection based on fire source shape characteristics. The fire source spreading direction is determined by analyzing the dynamic motion vectors of the extracted video data, providing an important data reference for firefighters' decisions, reducing fire extinguishing difficulty, increasing fire extinguishing speed and protecting property. The fire early-warning speed is thus effectively improved while the false alarm rate is reduced.
Example 2
In one embodiment, the invention provides a fire intelligent detection system based on video detection, which comprises a processor and a memory for storing instructions executable by the processor; the processor is configured to invoke the instructions stored in the memory for performing the video detection-based fire intelligent detection method of embodiment 1.
The intelligent fire detection system based on video detection provided by the invention can realize the steps and effects of the intelligent fire detection method based on video detection of embodiment 1; to avoid repetition, they are not described again here.
The foregoing examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention.

Claims (5)

1. The intelligent fire detection method based on video detection is characterized by comprising the following steps of:
S101: acquiring first video data of a detection area;
S102: screening the first video data once with an information entropy algorithm to obtain a primary screening video segment with fire characteristics;
S103: preprocessing each frame of image in the primary screening video segment, wherein the preprocessing comprises image denoising and edge detection;
S104: performing a secondary screening based on fire source shape characteristics on each frame of image in the preprocessed primary screening video segment to obtain the fire source position;
S105: collecting second video data of the fire source position within a preset duration, and extracting the fire source position of each frame of image in the second video data;
S106: calculating the dynamic motion vectors of the extracted fire source positions;
S107: determining the fire source spreading direction according to the dynamic motion vectors;
S108: reporting the fire source position and the fire source spreading direction;
wherein, the step S104 specifically includes:
S1041: extracting the image contour of each frame of image;
S1042: calculating the area $a$, perimeter $d$ and aspect ratio $b$ of the image contour:

$a = \iint_{D} d\sigma, \qquad d = \oint_{C} ds, \qquad b = \frac{W}{H}$

wherein $D$ represents the area enclosed by the image contour, $d\sigma$ the infinitesimal area element, $C$ the image contour line, $ds$ the infinitesimal arc length, $W$ the image contour width, and $H$ the image contour height;
S1043: calculating a comprehensive judgment value of the image contour from these fire source shape features;
S1044: calculating a shape descriptor of the image contour based on the Zernike algorithm:

$Z_{nm} = \frac{n+1}{\pi} \sum_{(\rho,\theta) \in C} f(\rho,\theta)\, V_{nm}^{*}(\rho,\theta)$

wherein $Z_{nm}$ represents the Zernike moment, i.e. the shape descriptor, $V_{nm}$ a Zernike polynomial, $\rho$ and $\theta$ the radial distance and angle of the image contour in a polar coordinate system, $m$ and $n$ the angular and radial orders of the Zernike polynomial respectively, and $C$ the image contour;
S1045: determining the image contour as the fire source position when the comprehensive judgment value is within a preset judgment value range or the shape descriptor matches a preset shape descriptor;
The step S106 specifically includes:
s1061: continuous differential processing based on image acquisition time is carried out on the adjacent frame images including the fire source position, and differential images between the adjacent frame images are obtained:
Wherein, Representing a difference image between adjacent frame images,/>And/>Respectively representing images obtained by sampling at a t sampling time and a t+1 sampling time;
S1062: calculating an optical flow field including a motion vector of each pixel point using a rukas-kanadata algorithm based on a differential image between adjacent frame images
Wherein,Representing the motion vector of the optical flow field in the horizontal and vertical directions respectively,/>And/>Representing the gradient of the differential image between adjacent frame images in the horizontal direction and the gradient in the vertical direction respectively,/>Representing a gradient matrix comprising horizontal gradients and vertical gradients, G representing an image gray value;
the step S107 specifically includes:
s1071: performing k-means clustering on each pixel point motion vector in the optical flow field to obtain a plurality of motion areas;
S1072: analyzing the main components of each movement area to generate a fire source spreading direction;
the S1071 specifically includes:
S1071A: initializing a preset number of clustering centers;
S1071B: calculating Euclidean distance between the pixel point motion vector and each clustering center, and distributing the pixel point motion vector to the adjacent clustering center according to the Euclidean distance:
Wherein the symbol is' "Means calculating Euclidean distance,/>Representing the p-th motion vector/>Cluster to which belongs,/>Represents the q-th cluster center,/>A q value representing when the squared euclidean distance value is selected to be minimum;
S1071C: updating the cluster center to be the mean value of the motion vectors belonging to the cluster:
Wherein, Represents the q-th cluster after updating,/>Representing the updated cluster center;
S1071D: the iteration number is increased by one, whether the iteration number reaches the preset iteration number is judged, if not, S1071E is carried out, and if not, S1071F is carried out;
S1071E: repeating S1071B-S1071D;
S1071F: each cluster is taken as a motion area;
the step S1072 specifically includes:
S1072A: calculating a motion vector mean value contained in each motion region:
wherein n represents the number of motion vectors in the motion region, and mean represents the motion vector mean;
S1072B: and calculating covariance matrixes in each motion area by combining the motion vector mean values:
Wherein, Representing the covariance matrix;
S1072C: solving eigenvalues and eigenvectors of the covariance matrix;
S1072D: and taking the characteristic vector corresponding to the maximum characteristic value as the main component direction, namely the fire source spreading direction.
2. The intelligent fire detection method based on video detection according to claim 1, wherein the step S102 specifically comprises:
S1021: calculating a difference image between a kth frame image and a kth-1 frame image in the first video data, and a kth-1 frame image and a kth-2 frame image:
Wherein, Representing the current frame image, i.e. the kth frame image/>, andThe k-1 frame image/>, which is the previous frame image to the current frame imageFirst difference image between,/>The previous frame image representing the current frame image, namely the k-1 frame image and the k-2 frame image/>Second difference image between,/>Representing pixel point space coordinates of each frame of image;
S1022: multiplying the first differential image and the second differential image to obtain a continuous differential image
S1023: calculating the information entropy H of the continuous differential image:
Wherein, Representing the probability of occurrence of a pixel value I in said successive difference images,/>Representing a logarithmic operation;
S1024: extracting an image with information entropy larger than 0 to obtain the primary screening video segment
Wherein,Representing an image with an information entropy greater than 0.
3. The intelligent fire detection method based on video detection according to claim 1, wherein S103 specifically comprises:
S1031: image denoising is carried out on each frame of image through a Gaussian filtering algorithm, and high-frequency noise in the image is eliminated:
Wherein, Representing the pixel value of the denoised image at coordinates (x, y), k representing the filter radius, i and j representing the horizontal offset distance of coordinate x and the vertical offset distance of coordinate y, respectively,/>Represents the standard deviation of Gaussian distribution, e represents a natural constant,/>Representing each frame of image input at coordinates/>Pixel values at;
s1032: and carrying out edge detection on the denoised image by utilizing a Sobel operator:
Wherein, Representing the horizontal gradient of the denoised image,/>Representing the vertical gradient of the denoised image,/>Representing edge strength;
S1033: and reserving pixel points with the intensity larger than the preset edge in the denoised image, obtaining the edge of the denoised image, and finishing the pretreatment of each frame of image in the primary screening video segment.
4. The intelligent fire detection method based on video detection according to claim 1, further comprising, after S107:
S107A: and visualizing the position of the fire source and the spreading direction of the fire source.
5. The intelligent fire detection system based on video detection is characterized by comprising a processor and a memory for storing instructions executable by the processor; the processor is configured to invoke the instructions stored in the memory for performing the video detection-based fire intelligent detection method of any one of claims 1 to 4.
CN202410318330.9A, filed 2024-03-20 (priority 2024-03-20): Fire disaster intelligent detection method and system based on video detection. Granted as CN117911932B, Active.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410318330.9A | 2024-03-20 | 2024-03-20 | Fire disaster intelligent detection method and system based on video detection (CN117911932B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202410318330.9A | 2024-03-20 | 2024-03-20 | Fire disaster intelligent detection method and system based on video detection (CN117911932B)

Publications (2)

Publication Number Publication Date
CN117911932A CN117911932A (en) 2024-04-19
CN117911932B true CN117911932B (en) 2024-05-28

Family

ID=90686282

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202410318330.9A (CN117911932B, Active) | Fire disaster intelligent detection method and system based on video detection | 2024-03-20 | 2024-03-20

Country Status (1)

Country | Publication
CN | CN117911932B

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101840571A (en) * 2010-03-30 2010-09-22 杭州电子科技大学 Flame detection method based on video image
CN102867386A (en) * 2012-09-10 2013-01-09 南京恩博科技有限公司 Intelligent video analysis-based forest smoke and fire detection method and special system thereof
CN104853151A (en) * 2015-04-17 2015-08-19 张家港江苏科技大学产业技术研究院 Large-space fire monitoring system based on video image
CN105788142A (en) * 2016-05-11 2016-07-20 中国计量大学 Video image processing-based fire detection system and detection method
JP2020021300A (en) * 2018-08-01 2020-02-06 株式会社シー・イー・デー・システム Fire monitoring device, fire monitoring system, and program for fire monitoring device
KR102087000B1 (en) * 2019-08-13 2020-05-29 주식회사 지에스아이엘 Method And System for Monitoring Fire
JP2020126439A (en) * 2019-02-05 2020-08-20 ホーチキ株式会社 Fire detection device and fire detection method
KR102521726B1 (en) * 2022-06-30 2023-04-17 주식회사 아이미츠 Fire detection system that can predict direction of fire spread based on artificial intelligence and method for predicting direction of fire spread

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7286050B2 (en) * 2003-12-05 2007-10-23 Honeywell International, Inc. Fire location detection and estimation of fire spread through image processing based analysis of detector activation
TWI694382B (en) * 2019-01-04 2020-05-21 財團法人金屬工業研究發展中心 Smoke detection method with deep vision
KR102399295B1 (en) * 2020-01-16 2022-05-18 계명대학교 산학협력단 Real-time night fire detection apparatus and method with deep neural network and fire-tube
US20240005759A1 (en) * 2022-09-09 2024-01-04 Nanjing University Of Posts And Telecommunications Lightweight fire smoke detection method, terminal device, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于SVM的视频火焰检测算法 (SVM-based video flame detection algorithm); 熊昊, 李伟; 传感器与微系统 (Transducer and Microsystem Technologies); 2020-01-20 (01); full text *
林火图像识别理论研究进展 (Research progress on the theory of forest fire image recognition); 袁雯雯, 姜树海; 世界林业研究 (World Forestry Research); 2017-11-08 (01); full text *

Also Published As

Publication number Publication date
CN117911932A (en) 2024-04-19

Similar Documents

Publication Publication Date Title
RU2393544C2 (en) Method and device to detect flame
US9652354B2 (en) Unsupervised anomaly detection for arbitrary time series
US7859419B2 (en) Smoke detecting method and device
RU2380758C2 (en) Method and device for smoke detection
Khalil et al. Fire detection using multi color space and background modeling
KR101764845B1 (en) A video surveillance apparatus for removing overlap and tracking multiple moving objects and method thereof
KR101084719B1 (en) Intelligent smoke detection system using image processing and computational intelligence
CN107404628B (en) Image processing apparatus and method, and monitoring system
Dong et al. A novel infrared small moving target detection method based on tracking interest points under complicated background
US20060215904A1 (en) Video based fire detection system
JPWO2010084902A1 (en) Intrusion alarm video processor
JP7253573B2 (en) Matching method, device, electronic device and computer readable storage medium
WO2009136895A1 (en) System and method for video detection of smoke and flame
JP4999794B2 (en) Still region detection method and apparatus, program and recording medium
EP2000952B1 (en) Smoke detecting method and device
Thomaz et al. Anomaly detection in moving-camera video sequences using principal subspace analysis
CN103996045B (en) A kind of smog recognition methods of the various features fusion based on video
WO2009136894A1 (en) System and method for ensuring the performance of a video-based fire detection system
JP2022177147A (en) Smoke detection device and smoke identification method
CN107704818A (en) A kind of fire detection system based on video image
Frejlichowski et al. SmartMonitor: An approach to simple, intelligent and affordable visual surveillance system
CN117911932B (en) Fire disaster intelligent detection method and system based on video detection
CN107729811B (en) Night flame detection method based on scene modeling
KR20150088613A (en) Apparatus and method for detecting violence situation
JP2017102719A (en) Flame detection device and flame detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant