CN117516708B - Flame detection method and flame detector - Google Patents

Flame detection method and flame detector

Info

Publication number
CN117516708B
CN117516708B (granted publication of application CN202410021397.6A)
Authority
CN
China
Prior art keywords
data
feature vectors
feature
flame
different scales
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410021397.6A
Other languages
Chinese (zh)
Other versions
CN117516708A (en)
Inventor
Zhao Weigang (赵伟刚)
Zhang Yongzhen (张永珍)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Bokang Electronics Co., Ltd.
Original Assignee
Xi'an Bokang Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Bokang Electronics Co., Ltd.
Priority: CN202410021397.6A
Publication of application CN117516708A
Application granted
Publication of granted patent CN117516708B
Legal status: Active


Classifications

    • G01J1/4228 — Photometry using two or more electric radiation detectors, e.g. for sensitivity compensation
    • G01J3/28 — Spectrometry; investigating the spectrum
    • G01J2003/283 — Investigating the spectrum, computer-interfaced
    • G06F18/213 — Pattern recognition: feature extraction, e.g. by transforming the feature space
    • G06F18/24 — Pattern recognition: classification techniques
    • G06F18/253 — Pattern recognition: fusion techniques of extracted features
    • G06N3/0464 — Neural networks: convolutional networks [CNN, ConvNet]
    • G06N3/08 — Neural networks: learning methods
    • G08B17/12 — Fire alarms actuated by presence of radiation or particles, e.g. infrared radiation


Abstract

The application relates to a flame detection method and a flame detector. The method comprises the following steps: acquiring multi-source fire data with a signal acquisition module and converting it into corresponding digital signals; inputting the digital signals into a deep learning model to extract feature vectors at different scales, then performing fusion calculation and feature reconstruction in turn to obtain a detection result; and outputting the detection result. The signal acquisition module comprises a multispectral photoelectric conversion module and a signal collection module. By acquiring, analyzing, and processing multi-source fire data, the method effectively addresses the one-sided and missing fire information caused by a single, discretized information source and improves data reliability; combined with a deep learning model that extracts, fuses, and reconstructs the data feature vectors, it realizes early, long-distance flame detection, can detect moving fires, and meets practical fire detection requirements.

Description

Flame detection method and flame detector
Technical Field
The disclosure relates to the technical field of fire alarm detection, in particular to a flame detection method and a flame detector.
Background
Flame detectors, also known as photosensitive fire detectors, respond sensitively to the spectral characteristics, radiation intensity, and flicker frequency of a burning material's flame, and are therefore widely used for oil-fire detection in industry.
Conventional infrared and ultraviolet fire detectors respond only to radiation in a specific wavelength range of the infrared or ultraviolet band of the combustion flame, such as 2.7 μm or 4.4 μm in the infrared. In practice, however, fires are highly complex: burning material releases light radiation at many wavelengths, so single-band detectors can hardly meet the requirement of very early fire detection. Moreover, in signal processing, conventional flame detectors mainly compare the time-domain amplitude of the signal and its frequency characteristics against thresholds. As the flame size and detection distance change, the detected flame energy changes with them, and at long distances the frequency characteristics of the signal become especially faint.
Therefore, a method that detects multi-band signals while meeting the requirement of very early detection of long-distance and non-stationary fire sources is the problem to be solved at present.
Disclosure of Invention
In view of the above, the present application provides a flame detection method and a flame detector to solve the above problems.
In one aspect of the present application, a flame detection method is provided, including the following steps:
the method comprises the steps of obtaining multi-source fire data by using a signal obtaining module and converting the multi-source fire data into corresponding digital signals;
inputting the digital signals into a deep learning model to extract feature vectors with different scales, and sequentially carrying out fusion calculation and feature reconstruction to obtain detection results;
outputting the detection result;
the signal acquisition module comprises a multispectral photoelectric conversion module and a signal collection module.
As an optional embodiment of the present application, acquiring multi-source fire data with the signal acquisition module and converting the multi-source fire data into corresponding digital signals includes:
respectively acquiring visible light spectrum data and near infrared spectrum data by utilizing a multispectral photoelectric conversion module;
and converting the visible light spectrum data and the near infrared spectrum data into a visible light digital signal and a near infrared digital signal respectively.
As an optional implementation manner of the present application, inputting the digital signals into a deep learning model to extract feature vectors at different scales and sequentially performing fusion calculation and feature reconstruction to obtain a detection result includes:
respectively inputting the visible light digital signal and the near infrared digital signal into a convolutional neural network to obtain corresponding feature vectors with different scales, and determining a feature vector extremum;
performing fusion calculation on the feature vectors with different scales to obtain a fusion result;
and reconstructing the characteristic vector based on the fusion result, and determining a detection result.
As an optional embodiment of the present application, inputting the visible light digital signal and the near infrared digital signal into a convolutional neural network respectively to obtain feature vectors at different scales and determining a feature vector extremum includes:
respectively inputting the visible light digital signal and the near infrared digital signal into a convolutional neural network, and generating two corresponding sets of feature vectors using a linear rectification function;
and the two groups of feature vectors respectively pass through a deconvolution layer to obtain two groups of feature vectors with different scales, and the extremum of the feature vectors is determined by using an extremum method.
As an optional implementation manner of the present application, after the two sets of feature vectors respectively pass through the deconvolution layer to obtain two sets of feature vectors at different scales and the extremum of the feature vectors is determined by the extremum method, the method further includes:
and presetting a threshold value, and judging whether the characteristic vector extremum exceeds the threshold value.
As an optional embodiment of the present application, the spectral bands of the multi-source fire data include 400 nm-800 nm and 1000 nm-25000 nm.
In another aspect of the present application, a flame detector is provided for implementing any of the flame detection methods above, including:
the signal acquisition module is configured to acquire multi-source fire data and convert the multi-source fire data into corresponding digital signals;
a spectral feature information fusion module configured to input the digital signals into a deep learning model to extract feature vectors at different scales, and to perform fusion calculation and feature reconstruction in turn to obtain a detection result;
an interface output module configured to output the detection result;
the signal acquisition module comprises a multispectral photoelectric conversion module and a signal collection module.
As an optional embodiment of the present application, the multispectral photoelectric conversion module includes:
a visible light multispectral sensor configured to acquire visible light spectrum data;
a near infrared multispectral sensor configured to acquire near infrared spectral data.
In another aspect of the present application, there is provided an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement any one of the flame detection methods described above when executing the executable instructions.
In yet another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a flame detection method as described in any one of the above.
The invention has the technical effects that:
according to the method, the observed multi-source fire data are input into the deep learning model, different scales are introduced on the basis of the deep learning model, a multi-stage fusion structure is constructed, and the acquired multi-source fire data are fused at the multiple scales, so that the purposes of detecting and identifying the fire are achieved. Specifically, the method comprises the following steps: the method comprises the steps of obtaining multi-source fire data by using a signal obtaining module and converting the multi-source fire data into corresponding digital signals; inputting the digital signals into a deep learning model to extract feature vectors with different scales, and sequentially carrying out fusion calculation and feature reconstruction to obtain a detection result; outputting a detection result; the signal acquisition module comprises a multispectral photoelectric conversion module and a signal acquisition module. The method comprises the steps of acquiring multi-source fire data, analyzing and processing, effectively solving the problems of single-face and missing fire information caused by single information source and discretization information, improving the reliability of the data, combining a deep learning algorithm model to extract, fuse and reconstruct the data feature vector, realizing early and long-distance flame detection, detecting moving fire, and meeting the fire detection requirement.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic flow chart of a flame detection method of the present invention;
FIG. 2 shows a schematic block diagram of a flame detector of the present invention;
fig. 3 shows a schematic structure of the flame detector of the present invention.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
FIG. 1 is a schematic flow chart of a flame detection method of the present invention; FIG. 2 shows a schematic block diagram of a flame detector of the present invention; fig. 3 shows a schematic structure of the flame detector of the present invention.
Example 1
As shown in fig. 1, in one aspect, the present application proposes a flame detection method, including the following steps:
s100, acquiring multi-source fire data by using a signal acquisition module and converting the multi-source fire data into corresponding digital signals;
s200, inputting the digital signals into a deep learning model to extract feature vectors with different scales, and sequentially carrying out fusion calculation and feature reconstruction to obtain a detection result;
s300, outputting the detection result;
the signal acquisition module comprises a multispectral photoelectric conversion module and a signal collection module.
In this embodiment, the observed multi-source fire data is input into the deep learning model; different scales are introduced on top of the model and a multi-stage fusion structure is constructed, so that the acquired multi-source fire data is fused at multiple scales to detect and identify the fire. By acquiring, analyzing, and processing multi-source fire data, the method effectively addresses the one-sided and missing fire information caused by a single, discretized information source and improves data reliability; combined with the deep learning model for feature vector extraction, fusion, and reconstruction, it realizes early, long-distance flame detection, can detect moving fires, and meets practical fire detection requirements.
Specifically, in step S100 the multi-source fire data is acquired by the signal acquisition module and converted into corresponding digital signals. Note that the signal acquisition module includes a multispectral photoelectric conversion module and a signal collection module; in this embodiment, the multispectral photoelectric conversion module acquires the multi-source fire data, simultaneously collecting multiple radiation signals emitted during flame combustion.
As an optional implementation manner of the present application, acquiring the multi-source fire data and converting it into corresponding digital signals includes:
respectively acquiring visible light spectrum data and near infrared spectrum data;
and converting the visible light spectrum data and the near infrared spectrum data into a visible light digital signal and a near infrared digital signal, respectively. That is, visible light spectrum data is obtained by the visible light multispectral sensor in the multispectral photoelectric conversion module, and near infrared spectrum data by the near infrared multispectral sensor. Compared with collecting spectral features with sensors of a single type, this effectively avoids the one-sided and missing fire information caused by a single information source: multiple spectral bands are obtained, extending from visible light into the infrared, which provides more accurate data and a sounder decision basis for subsequent processing.
As an optional embodiment of the present application, the spectral bands of the multi-source fire data include 400 nm-800 nm and 1000 nm-25000 nm. The method senses these two spectral bands to acquire data, with a spectral resolution of 200 nm and an imaging resolution of no less than 300 × 200; many bands are acquired, and each spectrum is two-dimensional data. The signal collection module then converts the acquired data into real-time digital signals, ready to be input into the deep learning model for feature extraction.
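As a concrete illustration of the data layout just described, the following sketch (not part of the patent; the class name, band list, and near-infrared band count are assumptions) holds one digitized two-dimensional frame per band for the two sensed spectral ranges:

```python
import numpy as np

# Hypothetical band layout: 400-800 nm at 200 nm spectral resolution gives
# band centers 400, 600, 800 nm; the near-infrared band count is invented.
VISIBLE_BANDS_NM = list(range(400, 801, 200))
NEAR_IR_RANGE_NM = (1000, 25000)

class MultiSourceFireFrame:
    """One two-dimensional image per spectral band; the text requires an
    imaging resolution of no less than 300 x 200."""
    def __init__(self, visible, near_ir):
        for cube in (visible, near_ir):
            h, w = cube.shape[-2:]
            if w < 300 or h < 200:
                raise ValueError("imaging resolution below 300 x 200")
        self.visible = visible   # shape (n_visible_bands, H, W)
        self.near_ir = near_ir   # shape (n_ir_bands, H, W)

frame = MultiSourceFireFrame(
    visible=np.zeros((len(VISIBLE_BANDS_NM), 200, 300)),
    near_ir=np.zeros((4, 200, 300)),
)
print(frame.visible.shape)
```

Each band image is then digitized independently before being passed downstream, matching the "each spectrum is two-dimensional data" statement above.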
Further, through step S200, the digital signals are input into a deep learning model to extract feature vectors with different scales, and fusion calculation and feature reconstruction are sequentially performed to obtain a detection result.
As an optional implementation manner of the present application, inputting the digital signals into a deep learning model to extract feature vectors at different scales and sequentially performing fusion calculation and feature reconstruction to obtain a detection result includes:
respectively inputting the visible light digital signal and the near infrared digital signal into a convolutional neural network to obtain corresponding feature vectors with different scales, and determining a feature vector extremum;
performing fusion calculation on the feature vectors with different scales to obtain a fusion result;
and reconstructing the characteristic vector based on the fusion result, and determining a detection result.
Here, it should be noted that when fusing data from multispectral sensors with different resolutions, deep learning algorithms easily produce an imbalance between spectral information and spatial information, distorting some spectral feature data; the large number of bands in a multispectral sensor aggravates this problem. The flame detection method therefore uses an adaptive multi-scale convolutional neural network: for the complex and variable spectral characteristics across many spectral bands, particularly in fire detection scenarios, multiple scales are introduced on top of the convolutional neural network and a multi-level fusion structure is constructed, fusing the different sensor data at multiple scales and effectively reducing data distortion. Specifically, adaptive-scale spectral features are extracted from the visible light digital signal and the near infrared digital signal by a convolutional neural network, yielding feature vectors at different scales, i.e., at different resolutions. The convolutional features of the CNN, together with the feature mappings the network learns, are then used for feature fusion, which effectively compresses the original data and accelerates processing. Finally, the CNN reconstructs the required multispectral feature vector of the fire scene from the fused feature vectors to obtain the detection result.
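The scale-by-scale fusion step can be sketched as follows. This is a minimal illustration, not the patent's network: the concatenate-then-mix fusion operator and the random stand-in weights (in place of trained ones) are assumptions.

```python
import numpy as np

def fuse_multiscale(vis_feats, nir_feats, rng=np.random.default_rng(0)):
    """Fuse visible and near-infrared feature maps scale by scale.

    vis_feats / nir_feats: lists of (C, H, W) arrays, one per scale, with
    matching shapes at each scale. Fusion here is channel concatenation
    followed by a 1x1 channel mixing with random stand-in weights - an
    assumption, since the patent does not specify the fusion operator.
    """
    fused = []
    for v, n in zip(vis_feats, nir_feats):
        assert v.shape == n.shape, "scales must be aligned before fusion"
        x = np.concatenate([v, n], axis=0)                  # (2C, H, W)
        w = rng.standard_normal((v.shape[0], x.shape[0]))   # 1x1 mixing weights
        fused.append(np.einsum('oc,chw->ohw', w, x))        # back to C channels
    return fused

vis = [np.ones((8, 16, 16)), np.ones((8, 8, 8))]   # two scales per branch
nir = [np.ones((8, 16, 16)), np.ones((8, 8, 8))]
out = fuse_multiscale(vis, nir)
print([f.shape for f in out])
```

The per-scale mixing compresses the concatenated channels back to the original width, mirroring the text's point that fusion compresses the original data.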
The feature extraction process will be described in detail below.
Feature extraction in the method uses the Scale-Invariant Feature Transform (SIFT), a computer vision algorithm for detecting and describing local features, here applied to the multi-source two-dimensional spectral data. It searches for extreme points across spatial scales and extracts their position, scale, and rotation invariants.
The specific feature extraction process is as follows:
(1) Construct a scale space, mainly to obtain multi-scale characteristics of the multi-source fire data;
(2) Find extreme points using the difference of Gaussians, thereby obtaining key points;
(3) Screen the key points, removing those with low contrast and unstable edge-response points;
(4) Use the gradient-direction distribution of each key point's neighborhood data values as its direction parameter;
(5) Finally, generate feature vectors at different scales.
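Steps (1) and (2) above can be sketched in a few lines of numpy. This is an illustrative reimplementation, not the patent's code; the kernel construction and the scale list are assumptions.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel for a separable blur (radius = 3 sigma)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via row and column 1-D convolutions."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56), eps=1e-6):
    """Steps (1)-(2): scale-space pyramid, difference of Gaussians
    (formula (1), discretized), and spatial extrema as candidate key points."""
    pyramid = np.stack([blur(img, s) for s in sigmas])
    dog = pyramid[1:] - pyramid[:-1]
    keypoints = []
    for l in range(dog.shape[0]):
        layer = dog[l]
        for y in range(1, layer.shape[0] - 1):
            for x in range(1, layer.shape[1] - 1):
                v = layer[y, x]
                patch = layer[y-1:y+2, x-1:x+2]
                if abs(v) > eps and (v == patch.max() or v == patch.min()):
                    keypoints.append((l, y, x))
    return keypoints

img = np.zeros((32, 32))
img[16, 16] = 1.0                 # a single bright point in the spectral image
print((0, 16, 16) in dog_keypoints(img))
```

A real SIFT pipeline also compares each point against the adjacent scale layers; the per-layer spatial check here only illustrates how DoG extrema become key points.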
First, fire-data pyramids are built at different scales and the difference of Gaussians is computed:

$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y) \tag{1}$$

where $*$ denotes the convolution operation, $G(x, y, \sigma)$ is a two-dimensional normal distribution, and $D(x, y, \sigma)$ is generated by convolving the input spectral data $I(x, y)$ with the difference of Gaussians of the two-dimensional spectral data $(x, y)$ at different scales. The extreme points of the difference of Gaussians are the key points of the spectral data. The Taylor expansion of $D$ around a sample point $\mathbf{x} = (x, y, \sigma)^{T}$ is:

$$D(\mathbf{x}) = D + \frac{\partial D^{T}}{\partial \mathbf{x}}\mathbf{x} + \frac{1}{2}\mathbf{x}^{T}\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\mathbf{x} \tag{2}$$

Taking the derivative of this expression and setting it to zero gives the accurate extremum position:

$$\hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}\frac{\partial D}{\partial \mathbf{x}} \tag{3}$$

To remove low-contrast points, the function value at the extremum is calculated by keeping the first two terms:

$$D(\hat{\mathbf{x}}) = D + \frac{1}{2}\frac{\partial D^{T}}{\partial \mathbf{x}}\hat{\mathbf{x}} \tag{4}$$

The key points are then screened further: formulas (2), (3), and (4) remove key points with low contrast. If $|D(\hat{\mathbf{x}})|$ is not below the preset contrast threshold, the feature point is retained; otherwise it is discarded. Formulas (5) and (6) remove unstable edge-response points among the key points: a key point is retained when it satisfies formula (6), otherwise it is discarded.

A poorly defined extremum of the difference-of-Gaussian operator has a larger principal curvature across the edge and a smaller principal curvature in the direction perpendicular to the edge. The principal curvatures are determined by the $2 \times 2$ Hessian matrix $H$:

$$H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix} \tag{5}$$

where the derivatives are estimated from differences of adjacent sample points. The eigenvalues of $H$ are proportional to the principal curvatures of $D$. Let $\alpha$ be the larger eigenvalue and $\beta$ the smaller one; then

$$\operatorname{Tr}(H) = D_{xx} + D_{yy} = \alpha + \beta, \qquad \operatorname{Det}(H) = D_{xx}D_{yy} - D_{xy}^{2} = \alpha\beta.$$

Let $\alpha = r\beta$; then

$$\frac{\operatorname{Tr}(H)^{2}}{\operatorname{Det}(H)} = \frac{(\alpha + \beta)^{2}}{\alpha\beta} = \frac{(r + 1)^{2}}{r}.$$

The value of $(r + 1)^{2}/r$ is smallest when the two eigenvalues are equal and increases with $r$. Therefore, to check whether the ratio of principal curvatures is below a threshold $r$, it suffices to verify:

$$\frac{\operatorname{Tr}(H)^{2}}{\operatorname{Det}(H)} < \frac{(r + 1)^{2}}{r} \tag{6}$$

Finally, a direction parameter is calculated from the gradient-direction distribution of the key-point neighborhood data, as in formula (7), and the feature vector is formed:

$$\theta(x, y) = \arctan\frac{L(x, y + 1) - L(x, y - 1)}{L(x + 1, y) - L(x - 1, y)} \tag{7}$$
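The edge-response screening of formulas (5) and (6) translates directly into code. The following is an illustrative sketch; the synthetic blob/edge test data and the threshold value r = 10 are assumptions.

```python
import numpy as np

def passes_edge_test(dog_layer, y, x, r=10.0):
    """Formulas (5)-(6): reject key points whose principal-curvature ratio
    indicates an edge. Derivatives are estimated from adjacent differences."""
    d = dog_layer
    dxx = d[y, x + 1] - 2 * d[y, x] + d[y, x - 1]
    dyy = d[y + 1, x] - 2 * d[y, x] + d[y - 1, x]
    dxy = (d[y + 1, x + 1] - d[y + 1, x - 1]
           - d[y - 1, x + 1] + d[y - 1, x - 1]) / 4.0
    tr = dxx + dyy                 # alpha + beta
    det = dxx * dyy - dxy ** 2     # alpha * beta
    if det <= 0:                   # curvatures of opposite sign: unstable point
        return False
    return tr ** 2 / det < (r + 1) ** 2 / r   # formula (6)

# A blob-like extremum passes the test; a straight edge fails it.
yy, xx = np.mgrid[0:21, 0:21]
blob = -np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 8.0)   # curved both ways
edge = -np.exp(-((xx - 10) ** 2) / 8.0)                     # curved across x only
print(passes_edge_test(blob, 10, 10), passes_edge_test(edge, 10, 10))
```

For the edge, one principal curvature is zero, so the determinant of H vanishes and the point is rejected, exactly the case formula (6) is designed to filter out.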
As an optional embodiment of the present application, inputting the visible light digital signal and the near infrared digital signal into a convolutional neural network respectively to obtain feature vectors at different scales and determining a feature vector extremum includes:
respectively inputting the visible light digital signal and the near infrared digital signal into the convolutional neural network, and generating two corresponding sets of feature vectors using a linear rectification function;
and the two groups of feature vectors respectively pass through a deconvolution layer to obtain two groups of feature vectors with different scales, and the extremum of the feature vectors is determined by using an extremum method.
Specifically, during adaptive-scale spectral feature extraction, a convolutional neural network is applied separately to the visible light multispectral sensor and the near infrared multispectral sensor of the multispectral detector. After the visible light spectrum data acquired by the visible light multispectral sensor is converted into a digital signal, the first layer of the CNN applies a linear rectification function to generate feature vectors, which then pass through a deconvolution layer to obtain feature vectors at different resolutions, i.e., at different scales. The near infrared spectrum data acquired by the near infrared multispectral sensor is processed in the same way, and the details are not repeated.
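One branch of this processing can be sketched as follows. This is illustrative only: a real deconvolution (transposed-convolution) layer has trained weights, and nearest-neighbour upsampling stands in here just to show the change of scale.

```python
import numpy as np

def relu(x):
    """Linear rectification function used by the first CNN layer."""
    return np.maximum(x, 0.0)

def deconv2x(feat):
    """Minimal stand-in for a deconvolution layer: doubles the spatial
    resolution by nearest-neighbour repetition (assumed, untrained)."""
    return feat.repeat(2, axis=-2).repeat(2, axis=-1)

def multi_scale_features(signal):
    """One sensor branch: ReLU feature map, then deconvolution layers
    yielding feature vectors at different scales/resolutions."""
    f0 = relu(signal)        # first-layer feature vectors
    f1 = deconv2x(f0)        # first deconvolution: 2x resolution
    f2 = deconv2x(f1)        # second deconvolution: 4x resolution
    return [f0, f1, f2]

visible = np.random.default_rng(1).standard_normal((4, 8, 8))
scales = multi_scale_features(visible)
print([s.shape for s in scales])
```

The near-infrared branch would run through the same function, giving the two sets of multi-scale feature vectors named in the claims.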
Further, as an optional implementation manner of the present application, after the two sets of feature vectors pass through the deconvolution layer to obtain two sets of feature vectors at different scales and the feature vector extremum is determined by the extremum method, the method further includes:
and presetting a threshold value, and judging whether the characteristic vector extremum exceeds the threshold value.
That is, the extremum method determines the extremum of the feature vectors at the corresponding scale, and this extremum is compared with the preset threshold. If it is within the allowable range, the process repeats, continuing to produce feature vectors at different resolutions until the threshold is exceeded. Feature fusion is then performed, and the required multispectral feature vector of the fire scene is reconstructed from the fused feature vectors. Note in particular that after two deconvolution layers the scale of the feature vectors already meets the requirement, so the final layer of the convolutional neural network is a convolutional layer that outputs the result.
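The screening loop just described can be sketched as below. The `feature_fn` interface and the toy feature source are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def detect(feature_fn, threshold, max_rounds=10):
    """Repeat feature extraction at new resolutions until the feature-vector
    extremum exceeds the preset threshold, then hand off to fusion and
    reconstruction. feature_fn(round_idx) -> feature vector (assumed API)."""
    for round_idx in range(max_rounds):
        features = feature_fn(round_idx)
        extremum = float(np.abs(features).max())   # extremum method
        if extremum > threshold:                   # preset threshold check
            return round_idx, extremum             # proceed to fusion
    return None                                    # no detection

# Toy feature source whose extremum grows each round (hypothetical data):
result = detect(lambda i: np.full(8, 0.2 * (i + 1)), threshold=0.5)
print(result)
```

With the toy source the extremum first exceeds 0.5 on the third round, after which fusion and reconstruction would run.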
And outputting the detection result through the step S300.
After the feature data is processed by the convolutional neural network, a detection result is obtained; the detection result, i.e., the alarm information, is output through the interface output module to complete the fire alarm.
In summary, the application realizes flame detection and alarm by acquiring multi-source fire data, i.e., sensing the physical characteristics and information of a target in space along the spectral dimension, performing adaptive multispectral feature fusion with a convolutional neural network, and establishing a corresponding CNN early-warning model to make decision judgments when a fire occurs. The whole process is more convenient, largely avoiding exhaustive full-scale processing during feature extraction, improving the accuracy of fire identification, and effectively handling the complex and variable spectral characteristics of fire detection scenarios.
It will be appreciated by those skilled in the art that all or part of the above embodiment methods may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flow of each of the control method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage medium may also comprise a combination of the above kinds of memory.
Example 2
As shown in fig. 2 and fig. 3, in another aspect of the present application, a flame detector is provided for implementing the flame detection method described in any of the foregoing. Since the working principle of the device in the embodiments of the present disclosure is the same as or similar to that of the flame detection method above, the repeated parts are not described again. The device of the disclosed embodiment comprises:
a signal acquisition module 100 configured to acquire multi-source fire data and convert the multi-source fire data into corresponding digital signals;
the spectral feature information fusion module 200 is configured to input the digital signals into a deep learning model to extract feature vectors with different scales, and sequentially perform fusion calculation and feature reconstruction to obtain a detection result;
an interface output module 300 configured to output the detection result;
the signal acquisition module 100 includes a multispectral photoelectric conversion module 110 and a signal acquisition module 120.
The flame detector in this embodiment includes a signal acquisition module 100, a spectral feature information fusion module 200, and an interface output module 300. The signal acquisition module 100 acquires multi-source fire data and converts it into digital signals. The spectral feature information fusion module 200, communicatively connected to the signal acquisition module 100, receives the digital signals, inputs them into the deep learning model to extract feature vectors with different scales, and sequentially performs fusion calculation and feature reconstruction to obtain a detection result, which is passed to the interface output module 300, communicatively connected to the spectral feature information fusion module 200, to raise a fire alarm. The flame detector is further provided with a protective cover 400, and the signal acquisition module 100, the spectral feature information fusion module 200, and the interface output module 300 are all located inside the protective cover 400.
It should be noted that the signal acquisition module 100 includes a multispectral photoelectric conversion module 110 and a signal acquisition module 120, where the multispectral photoelectric conversion module 110 is configured to acquire the multi-source fire data, and the signal acquisition module 120 is communicatively connected to the multispectral photoelectric conversion module 110 and configured to convert the multi-source fire data into corresponding digital signals. It should further be noted that the spectral feature information fusion module 200 is built around a high-performance MCU and runs a multispectral fusion processing algorithm based on convolutional neural network deep learning. The resulting detection result, i.e., the alarm information, is output through the interface output module 300, which supports output modes such as relay, 4-20 mA, RS-485, CAN bus, and TCP/IP.
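To make the data flow through the fusion module concrete, the following is a minimal, purely illustrative sketch (not the patented implementation) of the stages described above: the two spectral channels pass through a linear rectification (ReLU) step, are rescaled to a common size in place of a deconvolution layer, are fused by a weighted sum standing in for the adaptive fusion calculation, and the fused extremum is compared against an alarm threshold. All function names, weights, and thresholds here are hypothetical.

```python
def relu(xs):
    """Linear rectification function applied element-wise (hypothetical stand-in)."""
    return [max(0.0, x) for x in xs]

def upsample(xs, factor):
    """Nearest-neighbour upsampling, standing in for a deconvolution layer."""
    return [x for x in xs for _ in range(factor)]

def fuse(visible, near_ir, w_vis=0.5, w_nir=0.5):
    """Adaptive fusion reduced to a fixed weighted sum for illustration."""
    return [w_vis * v + w_nir * n for v, n in zip(visible, near_ir)]

def detect(visible_signal, near_ir_signal, threshold):
    """Return True if the fused feature extremum exceeds the alarm threshold."""
    vis = upsample(relu(visible_signal), 2)
    nir = upsample(relu(near_ir_signal), 2)
    fused = fuse(vis, nir)
    return max(fused) > threshold

# Example: a strong near-infrared response pushes the fused extremum past the limit.
print(detect([0.1, -0.2, 0.3], [0.9, 1.4, 0.8], threshold=0.6))  # prints: True
```

In a real detector the fusion weights would be learned by the convolutional neural network rather than fixed, and the threshold would correspond to the preset value recited in claim 2.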
Optionally, the multispectral photoelectric conversion module 110 includes: a visible light multispectral sensor 111 configured to acquire visible light spectrum data; and a near infrared multispectral sensor 112 configured to acquire near infrared spectral data.
Example 3
In another aspect of the present application, there is provided an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement any one of the flame detection methods described above when executing the executable instructions.
The control system of the disclosed embodiments includes a processor and a memory for storing processor-executable instructions, wherein the processor is configured to implement any of the flame detection methods described above when executing the executable instructions.
It should be noted that there may be one or more processors. The control system of the disclosed embodiments may further include an input device and an output device. The processor, memory, input device, and output device may be connected by a bus or by other means, which is not specifically limited herein.
The memory, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and various modules, such as the programs or modules corresponding to the flame detection method of the embodiments of the present disclosure. The processor executes the various functional applications and data processing of the control system by running the software programs or modules stored in the memory.
The input device may be used to receive input numbers or signals, where a signal may be a key signal related to user settings and function control of the device/terminal/server. The output device may include a display device such as a display screen.
Example 4
In yet another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the flame detection method of any one of the above.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (7)

1. A method of flame detection comprising the steps of:
the method comprises the steps of obtaining multi-source fire data by using a signal obtaining module and converting the multi-source fire data into corresponding digital signals;
the method for acquiring the multi-source fire data by using the signal acquisition module and converting the multi-source fire data into corresponding digital signals comprises the following steps:
respectively acquiring visible light spectrum data and near infrared spectrum data by utilizing a multispectral photoelectric conversion module;
converting the visible light spectrum data and the near infrared spectrum data into a visible light digital signal and a near infrared digital signal respectively;
inputting the digital signals into a deep learning model to extract feature vectors with different scales, and sequentially carrying out fusion calculation and feature reconstruction to obtain detection results;
the method for extracting the feature vectors of different scales by inputting the digital signals into a deep learning model, and sequentially carrying out fusion calculation and feature reconstruction to obtain detection results comprises the following steps:
respectively inputting the visible light digital signal and the near infrared digital signal into a convolutional neural network to obtain corresponding feature vectors with different scales, and determining a feature vector extremum;
performing fusion calculation on the feature vectors with different scales to obtain a fusion result;
reconstructing the characteristic vector based on the fusion result, and determining a detection result;
the method for obtaining the feature vector of different scales by respectively inputting the visible light digital signal and the near infrared digital signal into a convolutional neural network and determining the extremum of the feature vector comprises the following steps:
respectively inputting the visible light data signal and the near infrared light digital signal into a convolutional neural network, and generating two corresponding sets of feature vectors by utilizing a linear rectification function;
the two groups of feature vectors respectively pass through a deconvolution layer to obtain two groups of feature vectors with different scales, and an extremum method is utilized to determine a feature vector extremum;
the feature extraction process comprises the following steps: constructing a scale space; finding out extreme points by adopting Gaussian difference, and obtaining key points; screening the key points to remove the key points with lower contrast and unstable edge response points; utilizing gradient direction distribution characteristics of the neighborhood data values of the key points as direction parameters of each key point; finally, generating feature vectors with different scales;
outputting the detection result;
the signal acquisition module comprises a multispectral photoelectric conversion module and a signal acquisition module.
2. The flame detection method of claim 1, wherein after the two sets of feature vectors respectively pass through deconvolution layers to obtain two sets of feature vectors with different scales, and the extremum of the feature vectors is determined by using an extremum method, the flame detection method further comprises:
and presetting a threshold value, and judging whether the characteristic vector extremum exceeds the threshold value.
3. The flame detection method of any of claims 1-2, wherein the spectral bands of the multi-source fire data comprise 400 nm-800 nm and 1000 nm-25000 nm.
4. A flame detector for implementing the flame detection method of any one of claims 1 to 3, comprising:
the signal acquisition module is configured to acquire multi-source fire data and convert the multi-source fire data into corresponding digital signals;
the spectrum characteristic information fusion module is configured to input the digital signals into a deep learning model to extract characteristic vectors with preset resolution, and sequentially perform fusion calculation and characteristic reconstruction to obtain detection results;
an interface output module configured to output the detection result;
the signal acquisition module comprises a multispectral photoelectric conversion module and a signal acquisition module.
5. The flame detector of claim 4, wherein the multispectral photoelectric conversion module comprises:
a visible light multispectral sensor configured to acquire visible light spectrum data;
a near infrared multispectral sensor configured to acquire near infrared spectral data.
6. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the flame detection method of any one of claims 1 to 3 when executing the executable instructions.
7. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the flame detection method of any of claims 1 to 3.
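The scale-space feature extraction recited in claim 1 (constructing a scale space, finding extreme points with a difference of Gaussians, and keeping high-contrast key points) can be sketched in simplified form. The following is an illustrative 1-D reduction, not the patented implementation: real detectors of this family (e.g. SIFT-style detectors) operate on 2-D images and additionally screen edge response points, which is omitted here. All names and parameter values are hypothetical.

```python
import math

def gaussian_blur(signal, sigma):
    """Blur a 1-D signal with a truncated, normalized Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp at borders
            acc += k * signal[idx]
        out.append(acc)
    return out

def dog_keypoints(signal, sigmas=(1.0, 1.6, 2.56), contrast=0.01):
    """Build a scale space, take differences of Gaussians, and keep interior
    points that are local extrema with magnitude above a contrast threshold."""
    blurred = [gaussian_blur(signal, s) for s in sigmas]
    dogs = [[b2 - b1 for b1, b2 in zip(blurred[i], blurred[i + 1])]
            for i in range(len(blurred) - 1)]
    keypoints = []
    for d in dogs:
        for i in range(1, len(d) - 1):
            if abs(d[i]) > contrast and (
                    (d[i] > d[i - 1] and d[i] > d[i + 1]) or
                    (d[i] < d[i - 1] and d[i] < d[i + 1])):
                keypoints.append(i)
    return sorted(set(keypoints))

# A sharp bump in an otherwise flat signal yields a keypoint near its centre.
signal = [0.0] * 10 + [1.0] + [0.0] * 10
print(dog_keypoints(signal))
```

A flat signal produces no keypoints (all differences fall below the contrast threshold), which mirrors the claim's screening step that removes low-contrast key points.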
CN202410021397.6A 2024-01-08 2024-01-08 Flame detection method and flame detector Active CN117516708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410021397.6A CN117516708B (en) 2024-01-08 2024-01-08 Flame detection method and flame detector

Publications (2)

Publication Number Publication Date
CN117516708A CN117516708A (en) 2024-02-06
CN117516708B true CN117516708B (en) 2024-04-09

Family

ID=89744299


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966372A (en) * 2015-06-09 2015-10-07 四川汇源光通信有限公司 Multi-data fusion forest fire intelligent recognition system and method
CN108319964A (en) * 2018-02-07 2018-07-24 嘉兴学院 A kind of fire image recognition methods based on composite character and manifold learning
CN110334685A (en) * 2019-07-12 2019-10-15 创新奇智(北京)科技有限公司 Flame detecting method, fire defector model training method, storage medium and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7786877B2 (en) * 2008-06-20 2010-08-31 Billy Hou Multi-wavelength video image fire detecting system
JP7558243B2 (en) * 2019-03-15 2024-09-30 レチンエイアイ メディカル アーゲー Feature Point Detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D convolution dual-band smoke and fire recognition method based on attention mechanism (基于注意力机制的3D卷积双波段烟火识别方法); Song Junmeng et al.; Fire Science and Technology (《消防科学与技术》); 2023-02-28; sections 1-2 *

Similar Documents

Publication Publication Date Title
CN108710910B (en) Target identification method and system based on convolutional neural network
Liu et al. Spatial-spectral kernel sparse representation for hyperspectral image classification
US10032077B1 (en) Vehicle track identification in synthetic aperture radar images
CN113628261B (en) Infrared and visible light image registration method in electric power inspection scene
CN116343301B (en) Personnel information intelligent verification system based on face recognition
CN117557775B (en) Substation power equipment detection method and system based on infrared and visible light fusion
CN114937206A (en) Hyperspectral image target detection method based on transfer learning and semantic segmentation
CN113705361A (en) Method and device for detecting model in living body and electronic equipment
Gu et al. Hyperspectral target detection via exploiting spatial-spectral joint sparsity
CN117690161B (en) Pedestrian detection method, device and medium based on image fusion
Li et al. An all-sky camera image classification method using cloud cover features
CN112784777B (en) Unsupervised hyperspectral image change detection method based on countermeasure learning
CN118115947A (en) Cross-mode pedestrian re-identification method based on random color conversion and multi-scale feature fusion
CN117516708B (en) Flame detection method and flame detector
Maheen et al. Machine learning algorithm for fire detection using color correlogram
Wu et al. Spectral spatio-temporal fire model for video fire detection
CN109461176A (en) The spectrum method for registering of high spectrum image
CN113344987A (en) Infrared and visible light image registration method and system for power equipment under complex background
CN109101977A (en) A kind of method and device of the data processing based on unmanned plane
CN117115675A (en) Cross-time-phase light-weight spatial spectrum feature fusion hyperspectral change detection method, system, equipment and medium
CN116977747A (en) Small sample hyperspectral classification method based on multipath multi-scale feature twin network
JP4711131B2 (en) Pixel group parameter calculation method and pixel group parameter calculation apparatus
CN114170145B (en) Heterogeneous remote sensing image change detection method based on multi-scale self-coding
Hasanlou et al. Sensitivity analysis on performance of different unsupervised threshold selection methods in hyperspectral change detection
Olivatti et al. Analysis of artificial intelligence techniques applied to thermographic inspection for automatic detection of electrical problems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant