CN110895701A - Forest fire online identification method and device based on CN and FHOG - Google Patents

Forest fire online identification method and device based on CN and FHOG

Info

Publication number
CN110895701A
CN110895701A (application CN201910504074.1A)
Authority
CN
China
Prior art keywords
image
fhog
sample
flame
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910504074.1A
Other languages
Chinese (zh)
Other versions
CN110895701B (en)
Inventor
赵运基
魏胜强
刘晓光
孔军伟
周梦林
范存良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology
Priority to CN201910504074.1A
Publication of CN110895701A
Application granted
Publication of CN110895701B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 17/00 Fire alarms; Alarms responsive to explosion
    • G08B 17/12 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B 17/125 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A 40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A 40/28 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture specially adapted for farming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a forest fire online identification method and device based on CN (color names) and FHOG (fused histogram of oriented gradients), comprising the following steps: S1, constructing a flame sample image set; S2, projecting the sample images into the CN color space and determining the principal component of the projected color space by principal component analysis; S3, constructing the FHOG features of the flame, performing mean pooling with a pyramid pooling method, and constructing an FHOG feature set; S4, collecting images, performing principal component color space projection, and determining suspected fire regions in the projection images by threshold processing; S5, calculating the FHOG features of the suspected fire regions and carrying out pyramid pooling; S6, calculating the similarity between the pyramid-pooled FHOG features of the suspected fire regions and the sample features in the FHOG feature set, and finally determining whether a fire has occurred according to a threshold; and S7, if a fire has occurred, giving alarm information. The invention also provides an online fire identification device based on the CN + FHOG features. The invention has high precision and can effectively carry out online fire identification.

Description

Forest fire online identification method and device based on CN and FHOG
Technical Field
The invention relates to the technical field of image processing, in particular to a forest fire online identification method and device based on CN and FHOG.
Background
Forest fires are one of the factors that most seriously affect the ecological environment. Their damage to forests and the environment is destructive, and once a forest fire breaks out it is very difficult to extinguish, so early warning of forest fires is extremely important. With the development of science and technology, early warning of forest fires has improved greatly. There are many forest fire detection methods, and many detection algorithms are based on image recognition; among these, a number of algorithms detect and identify fire on the basis of a color space. Color-based fire identification algorithms cannot escape the inherent defect of color spaces, namely that color is easily affected by illumination, so fire detection algorithms based on a color space end up with a high false alarm rate. Texture features of an image, by contrast, are only slightly affected by changes in illumination but are easily affected by deformation. In view of the complementary characteristics of texture features and color features, the invention provides a forest fire detection method combining coarse detection in a multi-color space with fine detection using FHOG features; the detection algorithm is embedded in a processor so that the processor can detect fire online within the field of view of the image acquisition equipment, and the detection result is finally transmitted to a server over a network.
Disclosure of Invention
In order to overcome the defect that fire detection algorithms based on a color-space description have a high false alarm rate because the color space is affected by illumination changes during flame detection, the invention aims to provide a fire detection method that first performs coarse detection with the color space and then performs fine detection with FHOG features. By constructing a color space from multiple samples, the robustness of the flame color description is enhanced. Because the candidate regions detected by CN vary in scale, their FHOG features vary in dimension and cannot be compared directly; a mean pooling of the FHOG features is therefore applied, so that the processed FHOG features of a fire candidate region have the same dimensionality as the FHOG features in the sample set, and whether a fire exists in the candidate region can be judged directly from the Euclidean distance.
The invention also aims to provide a forest fire online identification device based on CN and FHOG. A multi-color-space description set of flame samples is constructed from multiple samples to determine the principal component color of the flame color description; at the same time, a pyramid mean pooling method is used to determine the FHOG feature set of the sample flames. The acquired video image is projected into the principal component color space, suspected fire regions are determined by binarization, the FHOG features of each suspected region are computed and pyramid mean pooling is applied, the mean-pooled FHOG features of the candidate region are compared with the features of the sample set, and it is finally determined whether a fire exists in the candidate region; if so, the result is transmitted to a server by 5G wireless transmission and an alarm is raised.
In order to achieve one of the above purposes, the invention provides the following technical scheme:
a forest fire online identification method and device based on CN and FHOG comprises the following steps:
s1, selecting forest fire images under different illumination conditions, and constructing a set of forest fire flame samples;
s2, converting the RGB images of the flame sample set into 11-dimensional multi-color space by applying a color space conversion matrix provided by a multi-color space CN algorithm, and extracting color principal component projection vectors described by the flame sample set in the 11-dimensional color space by applying a principal component analysis method;
s3, calculating FHOG characteristics of each sample in the flame sample set, performing pooling treatment on the calculated characteristics by a 2 x 2 mean value pooling method, and finally constructing a mean value pooled FHOG characteristic set corresponding to the flame sample set;
s4, projecting the RGB image in the collected original image frame to a principal component color space, constructing a projection image, carrying out corrosion and expansion treatment on the principal component projection image, and determining a connected region in the projection image by applying a connected region treatment method to the processed result image, wherein the determined connected region is a suspected region of a fire;
and S5, extracting the RGB image of the suspected area from the original RGB image according to the area parameters determined by the suspected area, and calculating the FHOG characteristic of the area. Pooling the FHOG characteristics obtained by calculation by a 2 x 2 mean value to finally obtain 2 x 31 dimensional characteristics of the suspected area;
s6, carrying out similarity calculation on the mean pooling characteristic of the candidate region and the mean pooling FHOG characteristic of the original sample set, and finally determining whether a fire disaster exists in the candidate region;
s7, when a fire disaster exists, alarm information is given;
the method for extracting the principal component color description in the multivariate color space comprises the following steps:
S11, selecting a single sample image from the sample set (ensuring that it is an RGB color image), projecting the RGB three-channel original sample image through the multi-color space projection matrix proposed by the CN algorithm, and finally obtaining a multi-channel image of 11 channels;
S12, reshaping the three-dimensional multi-channel image matrix into an (m × n) × 11 two-dimensional matrix, centering the matrix, computing the covariance matrix of the centered matrix, performing singular value decomposition on the covariance matrix, and sorting the corresponding eigenvectors by the magnitude of their eigenvalues; the eigenvector corresponding to the largest eigenvalue is the principal component projection vector applied by the invention;
S13, constructing the principal component description vectors of the different flame samples by the same process;
in order to achieve the second purpose, the invention provides the following technical scheme:
the sample set construction module is used for selecting images of flame regions from a fire image library, cropping ten flame regions under different conditions (different illumination, night, day, etc.), and taking the cropped images as the elements of the sample set;
the principal component projection vector calculation module is used for projecting each sample in the selected flame set from RGB into the 10-dimensional multi-color space using the multi-color space projection matrix provided by the CN algorithm, centering the projection result matrix, computing the covariance matrix of the centered matrix, averaging the covariance matrices obtained from all samples in the sample set, taking this average as the final 10 × 10 covariance matrix, applying SVD to obtain its eigenvalues and eigenvectors, and taking the eigenvector corresponding to the largest eigenvalue as the principal component projection vector;
the FHOG feature set calculation module is used for calculating the FHOG features of each frame of the selected flame images, applying pyramid mean pooling to the computed FHOG features so that the FHOG features corresponding to the sample set are converted into vectors of fixed size, and finally constructing a 155-dimensional FHOG feature entry for each frame of flame image;
the acquisition module is used for performing principal component vector projection on the acquired image to obtain a projection image, applying erosion, dilation and threshold processing to the principal component projection image to obtain a binary image, projecting the binary image along the x-axis, segmenting it according to the projection result, projecting each segmented result image along the y-axis, and finally determining the positions of the non-zero regions in the projection result; the original image region corresponding to a non-zero region is a fire candidate region, whose FHOG features are calculated and pyramid-mean-pooled, finally giving the 155-dimensional pyramid-mean-pooled FHOG features of the candidate target region;
the identification module is used for calculating the Euclidean distance between the mean pooling FHOG characteristic of the candidate region and the characteristic in the mean pooling FHOG characteristic set of the samples in the sample set, and comparing the minimum value with a set identification threshold value so as to accurately identify the final result;
and the alarm module is used for transmitting an alarm signal through the network if flame features are present in the final identification result; the alarm signal data comprises the code of the hardware, so that the location of the fire can be determined.
Compared with the prior art, the forest fire online identification method and device based on CN and FHOG have the advantages that:
1. the method combining CN and FHOG characteristics is applied to realize the recognition of forest fires, fully realizes the effective complementation of the CN and the FHOG in the process of describing flames, and improves the effectiveness of fire detection;
2. on the basis of the FHOG characteristics, pyramid mean value pooling is carried out on the FHOG characteristics, and the robustness of the characteristics to scale change is improved;
3. the mode of coarse detection to fine detection is adopted, so that the calculation burden is effectively reduced, and the hardware cost is reduced;
4. the method provided by the invention can be embedded in ordinary ARM and related hardware devices and networked; it places low demands on hardware computing capacity, can be deployed over a large area, and improves the precision of forest fire detection.
Drawings
FIG. 1 is a flow chart of a method for on-line forest fire identification based on CN and FHOG according to an embodiment of the present invention;
FIG. 2 is a sample set image example;
FIG. 3 is an example of a projection result image from RGB three channels to CN color space;
FIG. 4 is a schematic diagram of computing FHOG characteristics of a cell;
FIG. 5 is a schematic diagram of a pyramid mean pooling calculation process
FIG. 6 is a schematic view of flame candidate region location
Fig. 7 is a schematic structural diagram of a forest fire online identification device based on CN and FHOG according to a second embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Example one
The embodiment of the invention provides a forest fire online identification method based on CN and FHOG. Since fire identification algorithms based on color information are susceptible to illumination changes, the invention proposes to perform coarse detection with the CN algorithm and then fine detection with the FHOG features. First, a color principal component description vector is constructed from the fire samples; in the detection process the image to be detected is projected onto this principal component vector to obtain a projection image, and candidate fire regions are determined in the projection result image through a connected-domain function. In the fine detection based on FHOG features, the size of a candidate target region generally differs from the sample size, so the FHOG feature dimensions are inconsistent; the invention therefore proposes pyramid mean pooling of the FHOG features, which normalizes the FHOG features of images of different scales to the same dimension and makes the features comparable. Alarm information is finally given according to the comparison result.
Specifically, please refer to fig. 1, which includes the following steps:
and S1, selecting fire images under different environmental conditions from the fire sample data set, extracting fire area images from the fire images, and making the extracted fire area images into a flame image sample set. 10 frames of flame images under different environmental conditions were selected as a sample set of flames. Fig. 2 gives a partial sample example.
And S2, performing multichannel color space projection of the flame sample image, and determining a principal component description vector based on principal component analysis.
The RGB images in the flame sample set are converted into multi-channel sample images through the 32768 × 10 color space conversion matrix provided by the CN algorithm. The conversion matrix turns each original sample into an 11-channel image: 10 channels produced by the conversion matrix plus a 1-channel grayscale image. Fig. 3 gives an example of converting an original RGB three-channel sample into the 11 channel images. The original M × N × 3 sample image is transformed by the projection matrix into an M × N × 10 image, i.e. the original 3-channel image becomes a 10-channel image. The principal component vector that best describes the flame sample is sought in the 10 channel images other than the grayscale image. Each channel image of the M × N × 10 flame sample projection result is converted into a vector, so that the result becomes an (M × N) × 10 two-dimensional matrix; this matrix is centered, the covariance matrix of the centered matrix is computed, the covariance matrices obtained from all samples in the sample set are averaged, and the average is taken as the final 10 × 10 covariance matrix. Singular value decomposition is applied to the covariance matrix and the eigenvectors are sorted by the magnitude of their eigenvalues; the eigenvector corresponding to the largest eigenvalue is the principal component projection vector, of dimension 10 × 1. Projecting the 10-channel projection result onto this principal component vector, i.e. (M × N × 10) × (10 × 1), finally yields an M × N projection result image.
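The following is a minimal NumPy sketch of this step, not the patent's own code. It assumes the 32768 × 10 CN lookup table is available as `cn_table` and is indexed by RGB quantized to 32 levels per channel (the index order is an assumption), and that `samples` is a list of cropped M × N × 3 flame images.

```python
# Sketch: estimate the principal component projection vector of the flame
# sample set in the 10-channel CN color space (assumptions noted above).
import numpy as np

def rgb_to_cn(img_rgb, cn_table):
    """Map an M x N x 3 uint8 RGB image into the 10-channel CN space."""
    r = img_rgb[..., 0].astype(np.int32) // 8          # quantize each channel to 32 levels
    g = img_rgb[..., 1].astype(np.int32) // 8
    b = img_rgb[..., 2].astype(np.int32) // 8
    return cn_table[r + 32 * g + 32 * 32 * b]           # M x N x 10

def principal_vector(samples, cn_table):
    """Average the per-sample covariance matrices and return the top eigenvector."""
    covs = []
    for img in samples:                                  # each sample: a cropped flame image
        x = rgb_to_cn(img, cn_table).reshape(-1, 10)     # (M*N) x 10 projection result
        x = x - x.mean(axis=0, keepdims=True)            # centre the matrix
        covs.append(x.T @ x / x.shape[0])                # 10 x 10 covariance
    cov = np.mean(covs, axis=0)                          # mean covariance over all samples
    _, _, vt = np.linalg.svd(cov)                        # SVD of the symmetric matrix
    return vt[0]                                         # eigenvector of the largest eigenvalue, 10-D
```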
S3, constructing an FHOG feature set of the sample, and carrying out pyramid mean pooling on the FHOG features of the sample.
The FHOG features corresponding to the images in the flame sample set are computed. In the FHOG feature calculation, the cell size is chosen as a 4 × 4 pixel area, with a 9-dimensional direction-insensitive feature, an 18-dimensional direction-sensitive feature and a 4-dimensional summed feature.
The method of calculating the FHOG characteristic of each cell is conventional in the art and is only briefly described here.
Referring to fig. 4, the gradient magnitude and direction of each pixel point in each cell are calculated (if a pixel lies on the boundary of a partitioned region, linear interpolation is used). The gradient magnitude and direction are denoted r(x, y) and θ(x, y) respectively. The gradient direction of each pixel is discretized both into one of P1 direction-sensitive values over B1 (0–360 degrees), with P1 = 18 in practice, and into one of P2 direction-insensitive values over B2 (0–180 degrees), with P2 = 9 in practice; the specific calculation is shown in Formula 1 and Formula 2. With P1 = 18, the range 0–360 degrees is divided into 18 sections, i.e. the gradient direction of each pixel is normalized to one of the 18 discrete bins 0–17, and for a given pixel the position corresponding to its B1 value among 0–17 is set to 1; likewise, the position corresponding to its B2 value is set to 1. With P2 = 9, the gradient direction is normalized to the 9 discrete bins 0–8.
B1(x, y) = round( P1 · θ(x, y) / (2π) ) mod P1,  with P1 = 18        (Formula 1)
B2(x, y) = round( P2 · θ(x, y) / π ) mod P2,  with P2 = 9            (Formula 2)
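A short sketch of Formulas 1 and 2 as reconstructed above, assuming simple central-difference gradients and omitting the linear interpolation at region boundaries mentioned in the text:

```python
# Per-pixel gradient magnitude r(x, y), orientation theta(x, y), and its
# discretisation into P1 = 18 direction-sensitive and P2 = 9 direction-insensitive bins.
import numpy as np

def gradient_bins(gray, p1=18, p2=9):
    gy, gx = np.gradient(gray.astype(np.float64))             # gradients along y and x
    r = np.hypot(gx, gy)                                       # magnitude r(x, y)
    theta = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)            # orientation in [0, 2*pi)
    b1 = np.round(p1 * theta / (2.0 * np.pi)).astype(int) % p1     # Formula 1 (0-360 deg)
    b2 = np.round(p2 * theta / np.pi).astype(int) % p2              # Formula 2 (0-180 deg)
    return r, b1, b2
```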
A feature map is defined at the pixel level as a sparse histogram of the gradient magnitude of each pixel. The direction-sensitive and direction-insensitive sparse histograms at pixel (x, y) are computed as in Formulas 3 and 4. FB1 is a 1 × 18 vector in which the position corresponding to B1(x, y) takes the value r(x, y) and is the only nonzero entry; the other 17 positions are all zero. The direction-insensitive feature FB2 of pixel (x, y) is obtained in the same way.
FB1_b(x, y) = r(x, y) if b = B1(x, y), and 0 otherwise, for b = 0, …, 17        (Formula 3)
FB2_b(x, y) = r(x, y) if b = B2(x, y), and 0 otherwise, for b = 0, …, 8          (Formula 4)
After the direction-sensitive feature FB1 and the direction-insensitive feature FB2 of each pixel have been obtained, features are computed over the 9 regions into which the original image is divided by 3 × 3 cells. These 9 regions can be regarded as dividing the original image into 9 cells, and the feature of each cell is defined as the mean of FB1 and FB2 over all pixels in the cell. To enhance the invariance of the gradient to illumination (bias) changes, the features in each cell are normalized; the normalization factors are denoted N_{δ,γ}(i, j), with i, j ∈ {1, 2, 3} and δ, γ ∈ {1, −1} for the 9 cells of the 3 × 3 grid. The normalization factors are computed as in Formula 5; each factor contains the energy of four cells. For the features of a cell, 4 normalization factors are generated, and applying these 4 factors yields 4 normalized copies of the features, so the FHOG feature of each cell consists of the 18-dimensional direction-sensitive feature, the 9-dimensional direction-insensitive feature and the 4 normalization terms, 31 dimensions in total. T_α(v) denotes the vector obtained by truncating the vector v at α (all entries of v larger than α are set to α); the truncated, normalized features are given by Formula 6. The normalized and cyclically shifted direction-sensitive and direction-insensitive features are concatenated, finally giving the FHOG feature of the cell.
N_{δ,γ}(i, j) = ( ||C(i, j)||² + ||C(i+δ, j)||² + ||C(i, j+γ)||² + ||C(i+δ, j+γ)||² )^(1/2),  δ, γ ∈ {1, −1}        (Formula 5)

H(i, j) = concatenation over (δ, γ) ∈ {1, −1}² of T_α( C(i, j) / N_{δ,γ}(i, j) ),  where C(i, j) denotes the feature of cell (i, j)        (Formula 6)
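Below is a simplified sketch of the 31-dimensional per-cell feature described above (18 sensitive + 9 insensitive + 4 summed terms). It follows the text rather than Felzenszwalb's reference implementation: soft binning and boundary interpolation are omitted, the four normalized copies are averaged, and the truncation threshold `alpha` is an assumed value. `gradient_bins` is the function from the previous sketch.

```python
# Simplified per-cell FHOG features; relies on gradient_bins() defined above.
import numpy as np

def fhog_cells(gray, cell=4, p1=18, p2=9, alpha=0.2):
    r, b1, _ = gradient_bins(gray, p1, p2)
    h, w = gray.shape[0] // cell, gray.shape[1] // cell
    sens = np.zeros((h, w, p1))                          # per-cell direction-sensitive histograms
    for i in range(h):
        for j in range(w):
            rs = r[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            bs = b1[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for b in range(p1):
                sens[i, j, b] = rs[bs == b].sum()
    insens = sens[..., :p2] + sens[..., p2:]             # fold opposite directions (9 bins)
    energy = (insens ** 2).sum(axis=2)                   # cell energy, used by Formula 5

    feat = np.zeros((h, w, p1 + p2 + 4))                 # 31-D feature per cell
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Formula 5: four normalisation factors, each over a 2 x 2 block of cells
            norms = [np.sqrt(energy[i + di:i + di + 2, j + dj:j + dj + 2].sum() + 1e-10)
                     for di in (-1, 0) for dj in (-1, 0)]
            v = np.concatenate([sens[i, j], insens[i, j]])
            # Formula 6: truncate each normalised copy at alpha
            copies = [np.minimum(v / n, alpha) for n in norms]
            feat[i, j, :p1 + p2] = np.mean(copies, axis=0)        # 18 + 9 dimensions
            feat[i, j, p1 + p2:] = [c.sum() for c in copies]      # 4 summed (texture) terms
    return feat                                           # approx. round(M/4) x round(N/4) x 31
```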
For an original fire image sample of size M × N × 3, the corresponding FHOG features have dimension round(M/4) × round(N/4) × 31. Because the original flame samples have different image sizes, the resulting FHOG features have different dimensions, and the similarity between FHOG features cannot be computed with a uniform standard during detection. The invention therefore applies the pyramid-pooling idea to pyramid mean-pool the FHOG features of the original flame samples. Fig. 5 shows the pyramid mean pooling calculation process. First the FHOG feature of a flame sample image is computed; the round(M/4) × round(N/4) × 31 feature is regarded as a two-dimensional feature map with 31 channels, i.e. 31 two-dimensional matrices. The mean of each matrix is taken, giving a 1 × 31 vector, the global-mean FHOG feature vector. In the same way, a 2 × 2 block-mean vector is computed for each of the 31 two-dimensional matrices, giving a 4 × 31-dimensional FHOG feature. The global mean pooling vector and the 2 × 2 mean pooling vector are concatenated, producing the mean-pooled feature of the sample, of dimension 1 × 155. The original flame sample set contains 10 frames, so the mean pooling feature matrix finally constructed for the sample images is 10 × 155.
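A minimal sketch of the pyramid mean pooling just described: an arbitrary-size round(M/4) × round(N/4) × 31 FHOG map is reduced to a fixed 1 × 155 vector, the 1 × 31 global mean concatenated with the four 2 × 2 block means.

```python
import numpy as np

def pyramid_mean_pool(fhog):                       # fhog: H x W x 31 FHOG feature map
    g = fhog.mean(axis=(0, 1))                     # global mean pooling -> 31 values
    h2, w2 = fhog.shape[0] // 2, fhog.shape[1] // 2
    blocks = [fhog[:h2, :w2], fhog[:h2, w2:],      # 2 x 2 spatial blocks of the map
              fhog[h2:, :w2], fhog[h2:, w2:]]
    local = np.concatenate([b.mean(axis=(0, 1)) for b in blocks])   # 4 x 31 -> 124 values
    return np.concatenate([g, local])              # 1 x 155 pooled feature
```

Applied to the 10 sample images, this yields the 10 × 155 sample feature matrix used in S7.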
And S4, image acquisition and projection onto the principal component color space. The collected image is converted into an RGB three-channel image of size m_i × n_i × 3; the original RGB three-channel image is projected into the 10-channel color space using the 32768 × 10 conversion matrix provided by the CN algorithm, giving a projection result image of size m_i × n_i × 10; the multi-channel projection result is then projected onto the principal component color space vector, i.e. onto the 10 × 1 vector, finally giving an m_i × n_i projection result image.
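A brief sketch of this projection, reusing `rgb_to_cn` and `principal_vector` from the earlier sketch; the variable names are illustrative only.

```python
import numpy as np

def project_frame(frame_rgb, cn_table, proj_vec):
    """Project an m_i x n_i x 3 acquired frame onto the 10 x 1 principal CN vector."""
    return rgb_to_cn(frame_rgb, cn_table) @ proj_vec      # m_i x n_i projection image
```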
And S5, candidate fire image regions are determined on the basis of the CN projection. After the final m_i × n_i projection result image is obtained, erosion and dilation are applied. The eroded and dilated result image is then binarized, the rule being that nonzero pixel values become 1 and all other values become 0, which finally yields a binary image. The binary image is projected onto the x-axis, the non-zero intervals of the projection are determined, and the image is segmented along those intervals. Each segmented image is projected along the y-axis and its non-zero intervals are determined in the same way; the fire regions are finally determined by this non-zero projection method. Fig. 6 shows an example of the candidate target regions determined after the projection result image has been eroded and dilated.
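A sketch of this step, assuming OpenCV for the morphological operations; the kernel size and the binarization rule for the projection values are assumptions, since the text only states that nonzero pixels map to 1.

```python
# Candidate fire regions from the m_i x n_i projection image via erosion, dilation,
# binarisation and x/y axis projection of the non-zero pixels.
import numpy as np
import cv2

def _runs(mask):
    """Start/end indices of consecutive True runs in a 1-D boolean mask."""
    edges = np.diff(np.concatenate(([0], mask.astype(np.int8), [0])))
    idx = np.flatnonzero(edges)
    return list(zip(idx[0::2], idx[1::2]))

def candidate_regions(proj_img, kernel_size=3):
    k = np.ones((kernel_size, kernel_size), np.uint8)
    img = cv2.dilate(cv2.erode(proj_img.astype(np.float32), k), k)   # erosion then dilation
    binary = img > 0                    # nonzero (here: positive) responses -> 1; threshold assumed
    boxes = []
    for x0, x1 in _runs(binary.any(axis=0)):                  # project onto the x-axis, split columns
        for y0, y1 in _runs(binary[:, x0:x1].any(axis=1)):    # project each strip onto the y-axis
            boxes.append((x0, y0, x1, y1))                    # (left, top, right, bottom) in pixels
    return boxes
```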
S6, FHOG feature extraction and pyramid mean pooling of the candidate target region. Once a candidate target region has been determined, the corresponding region image is extracted from the original image and the FHOG features of the extracted result image are calculated. The calculated FHOG features are converted into a 1 × 155-dimensional vector by the pyramid mean pooling method; this vector is the FHOG feature of the candidate target region.
And S7, the candidate region features are compared with the sample set features, and whether a fire exists in the candidate region is finally judged through a threshold. The Euclidean distances between the 1 × 155-dimensional pyramid-mean-pooled feature of the candidate region and the features in the 10 × 155 sample feature matrix are calculated, and the minimum distance is taken as the similarity between the candidate region feature and the sample features. Verified on a large number of experimental images, a threshold T = 0.8 is selected as the criterion for deciding whether a fire exists in the candidate region. If the minimum Euclidean distance is smaller than the threshold, flame exists in the region, i.e. there is a fire; otherwise there is no fire.
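A final sketch of the decision step: the minimum Euclidean distance between the candidate's 1 × 155 pooled feature and the rows of the 10 × 155 sample feature matrix is compared with the threshold T = 0.8 chosen above.

```python
import numpy as np

def is_fire(candidate_feat, sample_feats, threshold=0.8):
    """candidate_feat: (155,), sample_feats: (10, 155); True if a fire is declared."""
    dists = np.linalg.norm(sample_feats - candidate_feat, axis=1)   # distance to each sample
    return bool(dists.min() < threshold)                            # fire if any sample is close
```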
And S8, if a fire exists, a signal is sent to the server through the network; the signal contains the hardware information of the image acquisition equipment and information about the fire.
Example two
Referring to fig. 7, an on-line forest fire recognition device based on CN and FHOG is a virtual device according to a first embodiment, and includes:
the sample set construction module 10 selects images of flame regions from a fire image library, crops ten flame regions under different conditions (different illumination, night, day, etc.), and takes the cropped images as the elements of the sample set;
the principal component projection vector calculation module 20 projects each sample in the selected flame set from RGB into the 10-dimensional multi-color space using the multi-color space projection matrix provided by the CN algorithm, centers the projection result matrix, computes the covariance matrix of the centered matrix, averages the covariance matrices obtained from all samples in the sample set, takes this average as the final 10 × 10 covariance matrix, and applies SVD to obtain the eigenvalues and eigenvectors of the covariance matrix; the eigenvector corresponding to the largest eigenvalue is the principal component projection vector.
The FHOG feature set calculation module 30 calculates FHOG features of each frame of image for the flame images selected in the feature set, applies a pyramid mean pooling mode to the calculated FHOG features, converts the FHOG feature set corresponding to the sample set into an FHOG vector with a fixed size, and finally constructs an FHOG feature set with 155 dimensions corresponding to each frame of flame image.
The acquisition module 40 is configured to perform principal component vector projection on the acquired image to obtain a projection image, apply erosion, dilation and threshold processing to the principal component projection image to obtain a binary image, project the binary image along the x-axis, segment it according to the projection result, project each segmented result image along the y-axis, and finally determine the positions of the non-zero regions in the projection result image. The original image region corresponding to a non-zero region is extracted; this region is a fire candidate region. The FHOG features of the candidate region image are calculated and pyramid mean pooling is applied to them. The 155-dimensional pyramid-mean-pooled FHOG feature of the candidate target region is finally obtained.
And the identification module 50 is configured to perform euclidean distance calculation on the mean-pooling FHOG features of the candidate region and the features in the mean-pooling FHOG feature set of the samples in the sample set, and compare the minimum value with a set identification threshold, thereby accurately identifying a final result.
And the alarm module 60 transmits an alarm signal through the network if flame features are present in the final recognition result; the alarm signal data comprises the code of the hardware, so that the location of the fire can be determined.
The pyramid mean pooling method comprises the following steps:
the first calculating unit is used for calculating FHOG characteristics of the original image, dividing the FHOG characteristic image of the original image to obtain 2 x 2 cells, and calculating the mean value of the FHOG characteristics in the area determined by each cell in the 2 x 2 cells and recording the mean value as local FHOG characteristics; any one channel of each FHOG feature image contains 4 mean FHOG features. Finally, a 4 x 31 dimensional mean pooling FHOG feature is obtained.
The second computing unit is used for taking the 2 x 2 cells as a whole to obtain FHOG characteristics of the whole and recording the FHOG characteristics as FHOGALL characteristics;
and the series unit is used for connecting the 4 local mean FHOG features and the 1 FHOG-all feature corresponding to each flame sample image or segmentation result image in series to obtain the final pyramid mean pooling feature.
The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby, and any insubstantial changes and substitutions made by those skilled in the art based on the present invention are within the protection scope of the present invention.

Claims (6)

1. A forest fire online identification method and device based on CN and FHOG are characterized by comprising the following steps:
s1, selecting forest fire images under different illumination conditions, and constructing a set of forest fire flame samples;
s2, converting the RGB images of the flame sample set into 11-dimensional multi-color space by applying a color space conversion matrix provided by a multi-color space CN algorithm, and extracting color principal component projection vectors described by the flame sample set in the 11-dimensional color space by applying a principal component analysis method;
s3, calculating FHOG characteristics of each sample in the flame sample set, performing pooling treatment on the calculated characteristics by a 2 x 2 mean value pooling method, and finally constructing a mean value pooled FHOG characteristic set corresponding to the flame sample set;
s4, projecting the RGB image in the collected original image frame to a principal component color space, constructing a projection image, carrying out corrosion and expansion treatment on the principal component projection image, and determining a connected region in the projection image by applying a connected region treatment method to the processed result image, wherein the determined connected region is a suspected region of a fire;
s5, extracting the RGB image of the suspected area from the original RGB image according to the area parameters determined by the suspected area, calculating FHOG characteristics of the area, and pooling the calculated FHOG characteristics by 2 x 2 mean value to finally obtain 2 x 31 dimensional characteristics of the suspected area;
s6, carrying out similarity calculation on the mean pooling characteristic of the candidate region and the mean pooling FHOG characteristic of the original sample set, and finally determining whether a fire disaster exists in the candidate region;
and S7, giving alarm information when a fire disaster exists.
2. The method as claimed in claim 1, wherein the extraction of the flame color principal component projection vector comprises:
s21, selecting a single sample image in the sample set, ensuring that the sample image is an RGB color image, projecting the RGB three-channel original sample image through a multi-color space projection matrix proposed by a CN algorithm, and finally obtaining a multi-channel image of 10 channels;
s22, performing matrix conversion on a multi-channel image three-dimensional matrix, converting the multi-channel image three-dimensional matrix into a (mxn) × 10 two-dimensional matrix, centralizing the matrix, solving a covariance matrix of the centralized matrix, calculating a covariance matrix of each sample in a sample set, then solving a mean matrix of covariance matrices of all samples in the sample set, taking the mean matrix as a final covariance matrix, performing singular value decomposition on the covariance matrix, and sequencing corresponding eigenvectors according to the size of eigenvalues; the feature vector corresponding to the maximum feature value is the principal component projection vector applied by the invention;
and S23, constructing pivot description vectors of different flame samples according to the same process.
3. A method for on-line identification of forest fires based on CN and FHOG according to claim 1, characterised in that said pyramid mean-pooled FHOG comprises:
for the images in the flame sample set, solving FHOG characteristics corresponding to the images; in the calculation process of the FHOG characteristic, the size of a cell is selected to be a 4 multiplied by 4 pixel area, a 9-dimensional direction insensitive characteristic, an 18-dimensional direction sensitive characteristic and a 4-dimensional summation characteristic;
aiming at an original fire image sample M × N × 3, the FHOG characteristics corresponding to the image sample can be obtained, and the characteristic dimension is round(M/4) × round(N/4) × 31; because the image sample dimensions of the original flame samples are different, the obtained FHOG characteristic dimensions are different, and the similarity between the FHOG characteristics cannot be calculated with a uniform standard in the detection process; therefore, the invention applies the pyramid pooling idea to pyramid mean-pool the FHOG characteristics of the original flame samples; firstly, the FHOG characteristics of a flame sample image are obtained, and the round(M/4) × round(N/4) × 31-dimensional FHOG characteristics are regarded as two-dimensional matrix characteristics of 31 channels, so the 31 channel characteristics correspond to 31 two-dimensional matrices; the mean value of the two-dimensional matrix of each channel is obtained, finally giving a 1 × 31-dimensional vector, which is the global-mean FHOG characteristic vector; in the same way, a 2 × 2-dimensional mean vector is computed for each of the 31 two-dimensional matrices, finally constructing a 4 × 31-dimensional FHOG feature; the global mean pooling vector and the 2 × 2 mean pooling vector are connected in series, finally constructing the mean-pooled feature of the sample, which is 1 × 155; the original flame sample image set has 10 frames, so the mean pooling feature matrix of the finally constructed sample images is 10 × 155.
4. The CN and FHOG-based forest fire online identification method according to claim 1, wherein the method of applying color projection principal component vector projection to determine candidate target areas comprises the following steps:
the collected image is subjected to multi-color space conversion by applying a color space conversion matrix provided by a CN algorithm, and the conversion result is projected on a color principal component vector to obtain a principal component vector description image;
processing the principal component description image by erosion and dilation, and binarizing the processing result;
the binary image is projected onto the X-axis and the flame region is segmented according to the projection result; the segmented result is projected along the Y-axis; the position and scale parameters in the original image corresponding to the final non-zero rectangular region are determined, and from these parameters the position of the suspected fire point in the original image, namely the candidate target region, is determined.
5. The CN and FHOG-based forest fire online identification method according to claim 1, wherein the fine inspection of the candidate target area comprises the following steps:
calculating the FHOG features in the original image region determined by the candidate target region, performing pyramid mean pooling on the obtained FHOG features, finally obtaining the 1 × 31 global-mean-pooled FHOG feature and the 2 × 2 × 31-dimensional block-mean feature, and connecting them in series to obtain the 1 × 155-dimensional feature, namely the feature that finally needs to be compared;
and solving Euclidean distances from the series features of the candidate target region and the features in the sample set feature matrix, comparing the minimum Euclidean distance with a set threshold value, and finally determining whether the candidate region has fire points.
6. A forest fire online identification device based on CN and FHOG is characterized in that it includes:
the sample set construction module is used for selecting images of flame regions from a fire image library, cropping ten flame regions under different conditions (different illumination, night, day, etc.), and taking the cropped images as the elements of the sample set;
the principal component projection vector calculation module is used for projecting an original RGB image to a 10-dimensional multi-color space by applying a multi-color space projection matrix provided by an RGB color space provided by a CN algorithm to each sample in the selected flame set, centralizing a projection result matrix, solving a covariance matrix of the centralized matrix, solving an average value of covariance matrices obtained by all samples in the sample set, taking the average value of the covariance matrices as a final covariance matrix, finally obtaining a 10 x 10-dimensional covariance matrix, applying SVD (singular value decomposition) to obtain an eigenvalue and an eigenvector corresponding to the covariance matrix, and obtaining the eigenvector corresponding to the maximum eigenvalue as the principal component projection vector;
the FHOG characteristic set calculation module is used for calculating FHOG characteristics of each frame of image for the flame images selected from the characteristic set, applying a pyramid mean pooling mode to the FHOG characteristics obtained by calculation, converting the FHOG characteristic set corresponding to the sample set into an FHOG vector with a fixed size, and finally constructing an FHOG characteristic set with 155 dimensions corresponding to each frame of flame image;
the acquisition module is used for performing principal component vector projection on the acquired image to obtain a projection image, applying erosion, dilation and threshold processing to the principal component projection image to obtain a binary image, projecting the binary image along the x-axis, segmenting the binary image according to the projection result, projecting each segmented result image along the y-axis, finally determining the positions of the non-zero regions in the projection result image, and extracting the original image region corresponding to each non-zero region, that region being a fire candidate region; the FHOG features of the candidate region image are calculated and pyramid mean pooling is performed on them, finally obtaining the 155-dimensional pyramid-mean-pooled FHOG features of the candidate target region;
the identification module is used for calculating the Euclidean distance between the mean pooling FHOG characteristic of the candidate region and the characteristic in the mean pooling FHOG characteristic set of the samples in the sample set, and comparing the minimum value with a set identification threshold value so as to accurately identify the final result;
the alarm module is used for transmitting an alarm signal through a network if flame characteristics exist in the final identification result, and the alarm signal data comprises the code of the hardware so as to determine the position of the fire;
the pyramid mean pooling method comprises the following steps:
the first calculating unit is used for calculating FHOG characteristics of the original image, dividing the FHOG characteristic image of the original image to obtain 2 x 2 cells, and calculating the mean value of the FHOG characteristics in the area determined by each cell in the 2 x 2 cells and recording the mean value as local FHOG characteristics; each FHOG characteristic image comprises FHOG characteristics with 4 mean values in any channel; finally obtaining the FHOG characteristic of mean value pooling of 4 x 31 dimensions;
the second computing unit is used for taking the 2 x 2 cells as a whole to obtain FHOG characteristics of the whole and recording the FHOG characteristics as FHOGALL characteristics;
and the series unit is used for connecting the 4 local mean FHOG features and the 1 FHOG-all feature corresponding to each flame sample image or segmentation result image in series to obtain the final pyramid mean value pooling feature.
CN201910504074.1A 2019-06-12 2019-06-12 Forest fire online identification method and device based on CN and FHOG Active CN110895701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910504074.1A CN110895701B (en) 2019-06-12 2019-06-12 Forest fire online identification method and device based on CN and FHOG

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910504074.1A CN110895701B (en) 2019-06-12 2019-06-12 Forest fire online identification method and device based on CN and FHOG

Publications (2)

Publication Number Publication Date
CN110895701A true CN110895701A (en) 2020-03-20
CN110895701B CN110895701B (en) 2023-03-24

Family

ID=69785502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910504074.1A Active CN110895701B (en) 2019-06-12 2019-06-12 Forest fire online identification method and device based on CN and FHOG

Country Status (1)

Country Link
CN (1) CN110895701B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836346A (en) * 2021-01-07 2021-05-25 河南理工大学 Motor fault diagnosis method based on CN and PCA, electronic equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101237089B1 (en) * 2011-10-12 2013-02-26 Keimyung University Industry Academic Cooperation Foundation Forest smoke detection method using random forest classifier method
CN106991689A (en) * 2017-04-05 2017-07-28 西安电子科技大学 Method for tracking target and GPU based on FHOG and color characteristic accelerate
CN109165577A (en) * 2018-08-07 2019-01-08 东北大学 A kind of early stage forest fire detection method based on video image
CN109635814A (en) * 2018-12-21 2019-04-16 河南理工大学 Forest fire automatic testing method and device based on deep neural network
US20190162507A1 (en) * 2017-11-24 2019-05-30 Huntercraft Limited Automatic target point tracing method for electro-optical sighting system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101237089B1 (en) * 2011-10-12 2013-02-26 Keimyung University Industry Academic Cooperation Foundation Forest smoke detection method using random forest classifier method
CN106991689A (en) * 2017-04-05 2017-07-28 西安电子科技大学 Method for tracking target and GPU based on FHOG and color characteristic accelerate
US20190162507A1 (en) * 2017-11-24 2019-05-30 Huntercraft Limited Automatic target point tracing method for electro-optical sighting system
CN109165577A (en) * 2018-08-07 2019-01-08 东北大学 A kind of early stage forest fire detection method based on video image
CN109635814A (en) * 2018-12-21 2019-04-16 河南理工大学 Forest fire automatic testing method and device based on deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FENG HAN et al.: "Multi-feature fusion and scale-adaptive kernel correlation filter tracking algorithm", Computer & Digital Engineering *
XU AIJUN et al.: "Forest fire recognition algorithm based on visible-light video", Journal of Beijing Forestry University *
MA ZONGFANG et al.: "Image-based flame detection based on a color model and sparse representation", Acta Photonica Sinica *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836346A (en) * 2021-01-07 2021-05-25 河南理工大学 Motor fault diagnosis method based on CN and PCA, electronic equipment and medium
CN112836346B (en) * 2021-01-07 2024-04-23 河南理工大学 Motor fault diagnosis method based on CN and PCA, electronic equipment and medium

Also Published As

Publication number Publication date
CN110895701B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
US20210264217A1 (en) Systems and methods for automatic estimation of object characteristics from digital images
CN109255317B (en) Aerial image difference detection method based on double networks
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN107133569B (en) Monitoring video multi-granularity labeling method based on generalized multi-label learning
Liu et al. A contrario comparison of local descriptors for change detection in very high spatial resolution satellite images of urban areas
Yamada et al. Learning features from georeferenced seafloor imagery with location guided autoencoders
CN107025445B (en) Multisource remote sensing image combination selection method based on class information entropy
CN111259704A (en) Training method of dotted lane line endpoint detection model
CN115131580B (en) Space target small sample identification method based on attention mechanism
CN111310690B (en) Forest fire recognition method and device based on CN and three-channel capsule network
WO2023273337A1 (en) Representative feature-based method for detecting dense targets in remote sensing image
CN117671508B (en) SAR image-based high-steep side slope landslide detection method and system
CN110895701B (en) Forest fire online identification method and device based on CN and FHOG
CN109068349B (en) Indoor intrusion detection method based on small sample iterative migration
CN111291712B (en) Forest fire recognition method and device based on interpolation CN and capsule network
Demars et al. Multispectral detection and tracking of multiple moving targets in cluttered urban environments
CN115631211A (en) Hyperspectral image small target detection method based on unsupervised segmentation
Promsuk et al. Numerical Reader System for Digital Measurement Instruments Embedded Industrial Internet of Things.
CN114724089A (en) Smart city monitoring method based on Internet
CN113989571A (en) Point cloud data classification method and device, electronic equipment and storage medium
CN115496931B (en) Industrial robot health monitoring method and system
Kaddah et al. Automatic pavement crack classification on two-dimensional VIAPIX images
CN116452791B (en) Multi-camera point defect area positioning method, system, device and storage medium
CN117788463B (en) Ship draft detection method based on video AI and multi-mode data fusion
Yavariabdi et al. Unsupervised satellite change detection using particle swarm optimisation in spherical coordinates

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant