CN113299034B - Flame identification early warning method suitable for multiple scenes - Google Patents

Flame identification early warning method suitable for multiple scenes

Info

Publication number
CN113299034B
Authority
CN
China
Prior art keywords
scene
flame
current
image
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110346931.7A
Other languages
Chinese (zh)
Other versions
CN113299034A (en)
Inventor
丁武
林琳
李林
陈学志
于洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Huadun Safety Technology Co ltd
Original Assignee
Liaoning Huadun Safety Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Huadun Safety Technology Co ltd filed Critical Liaoning Huadun Safety Technology Co ltd
Priority to CN202110346931.7A priority Critical patent/CN113299034B/en
Publication of CN113299034A publication Critical patent/CN113299034A/en
Application granted granted Critical
Publication of CN113299034B publication Critical patent/CN113299034B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00 Fire alarms; Alarms responsive to explosion
    • G08B17/12 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125 Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions, by using a video camera to detect fire or smoke
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field, by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The application provides a flame identification early warning method suitable for multiple scenes, comprising the following steps: S1, acquiring multi-scene flame images and inputting them into a deep learning recognition model for training; S2, acquiring a current scene image in real time and judging the current illumination scene corresponding to it; S3, inputting the current scene image and the current illumination scene into the trained deep learning recognition model, which calls different recognition submodels based on the current illumination scene to recognize the current scene image and outputs a flame recognition result; and S4, determining whether to output early warning information based on the flame recognition result. The scheme trains the deep learning recognition model on images captured under different illumination scenes, so the trained model generalizes better and can adapt to flame recognition in more scenes. In addition, different scene images are recognized according to the illumination scene: infrared images are input to recognize flames in strong light scenes, and ordinary images are used in weak light scenes, further improving adaptability to different scenes.

Description

Flame recognition early warning method suitable for multiple scenes
Technical Field
The application relates to the technical field of image recognition, in particular to a multi-scene-adaptive flame recognition early warning method.
Background
With the progress of society, China's industrial and economic levels keep rising. Large urban buildings and facilities are structurally complex and nearly fully automated, the demands of safe production and living grow ever more urgent, and higher requirements are placed on the reliability and stability of fire-safety, fire-prevention and smoke-prevention systems. Fire seriously endangers people's physical and mental health, lives and property; it is a frequent problem that cannot be ignored, and its great destructiveness makes fire prevention, monitoring and flame identification highly significant. Traditional fire detectors, such as temperature-sensing, smoke-sensing and light-sensing types, can only judge certain specific information characteristics, are affected by environmental and spatial conditions, and recognize poorly. As computer vision and image processing technology has matured, video-image fire monitoring systems have emerged; compared with traditional fire monitoring, they offer high recognition speed, a low false alarm rate and convenient operation.
Image-based flame recognition divides into traditional recognition methods and deep-learning-based methods. Traditional methods involve complex steps and poor adaptive capacity; with color-based judgment and generic processing it is difficult to distinguish flames from flame-like objects, and accuracy is low.
In summary, the prior art has many defects in flame identification; reliable flame recognition results are difficult to obtain, so timely early warning of fire cannot be realized.
Disclosure of Invention
In order to solve the technical problems in the background art, the application provides a flame identification early warning method suitable for multiple scenes.
A first aspect of the application provides a multi-scene-adaptive flame recognition early warning method, comprising the following steps:
s1, acquiring multi-scene flame images, and inputting the multi-scene flame images into a deep learning recognition model for training;
s2, acquiring a current scene image in real time, and judging a current illumination scene corresponding to the current scene image;
s3, inputting the current scene image into a trained deep learning identification model based on the current illumination scene, identifying the current scene image by the deep learning identification model, and outputting a flame identification result;
and S4, determining whether to output early warning information or not based on the flame identification result.
Optionally, the lighting scene comprises a strong light scene and a weak light scene.
Optionally, determining the current illumination scene corresponding to the current scene image includes:
extracting the static area in the current scene image, binarizing it, segmenting the binarized static-area image according to direction, identifying the light intensity in each segment, and computing a weighted sum of the segment light intensities; if the sum is greater than a first threshold, the current scene image is judged to be a strong light scene, otherwise a weak light scene; the weights are preset based on attributes of the current scene.
Optionally, the current scene image includes a normal image and an infrared image;
then, inputting the current scene image into a trained deep learning recognition model based on the current illumination scene, including:
if the current illumination scene is a strong light scene, inputting the infrared image in the current scene image into a trained deep learning identification model; and if the current illumination scene is a weak light scene, inputting a common image in the current scene image into the trained deep learning identification model.
Optionally, the deep learning identification model performs flame identification on the current scene image, including:
and identifying flames of all connected regions in the current scene image based on the fluctuation characteristic, the motion characteristic and the flicker characteristic.
Optionally, identifying flames of the connected regions in the current scene image based on the fluctuation characteristic, the motion characteristic and the flicker characteristic specifically includes:
1) The fluctuation characteristic of each connected region is calculated by the following formula (the formula is an equation image in the original and is not reproduced here):
where fluc is the fluctuation characteristic value; x_i is the i-th vertical step value of the connected region, i.e. the height value of each step after the contour histogram of the connected region is converted; the mean of the i vertical step heights also enters the formula; and a, b and c are preset power values.
If the fluctuation characteristic value fluc is greater than a second threshold, the connected region is judged to satisfy the fluctuation characteristic condition.
2) Calculating the motion characteristic of each connected region by adopting the following method:
calculating the transverse motion amplitude of each connected region across multiple frames of the scene image; if the transverse motion amplitude is greater than a third threshold, the connected region is judged to have motion characteristics and to satisfy the motion characteristic condition.
3) Calculating the flicker characteristics of each connected region by adopting the following method:
(the formula is an equation image in the original and is not reproduced here)
where flas is the flicker characteristic value, m is the accumulated state-change count of the connected region, and t(i) - t(i-1) is the time interval of each state change.
If the flicker characteristic value flas is greater than a fourth threshold, the connected region is judged to satisfy the flicker characteristic condition.
4) Calculating a flame identification value using the following formula:
fire_true = αF1 - βM + γF2
where fire_true is the flame identification value; F1 is the fluctuation characteristic normalized value, positively correlated with the degree to which the fluctuation characteristic value fluc exceeds the second threshold; M is the motion characteristic normalized value, positively correlated with the degree to which the lateral motion amplitude exceeds the third threshold; F2 is the flicker characteristic normalized value, positively correlated with the degree to which the flicker characteristic value flas exceeds the fourth threshold; and α, β and γ are the preset weights of F1, M and F2 respectively. However, if F1 > θ1 and F2 > θ2, β is adjusted to β' with β' < β (the adjustment formula is an equation image in the original and is not reproduced here), where θ1 and θ2 are preset thresholds.
If the flame identification value fire_true is greater than a fifth threshold, the connected region is determined to be a flame.
Optionally, in step S1, before the multi-scene flame images are input into the deep learning recognition model for training, the method further includes:
and manually marking the flames in the flame images of the multiple scenes.
A second aspect of the application provides a multi-scene-adaptive flame recognition early warning system, comprising an acquisition module, a deep learning recognition model and an early warning module:
the acquisition module is used for acquiring multi-scene flame images and inputting them into the deep learning recognition model for training; it is also used for acquiring a current scene image in real time, judging the current illumination scene corresponding to the current scene image, and inputting the current scene image into the trained deep learning recognition model based on the current illumination scene;
the deep learning identification model is used for training based on the multi-scene flame images, identifying the current scene images and outputting a flame identification result;
and the early warning module is used for determining whether to output early warning information or not based on the flame identification result.
A third aspect of the present application also provides an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform the steps of the method as previously described.
The fourth aspect of the present application also provides a computer storage medium disposed in a server, where the storage medium stores computer instructions, and the computer instructions, when called, are used to execute the steps of the method as described above.
The beneficial effects of this application are as follows:
The scheme trains the deep learning recognition model on images captured under different illumination scenes, so the trained model generalizes better and can adapt to flame recognition in more scenes. In addition, different scene images are recognized according to the illumination scene: infrared images are input to recognize flames in strong light scenes, and ordinary images are used in weak light scenes, further improving adaptability to different scenes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 is a schematic flowchart of a multi-scene-adaptive flame identification early warning method disclosed in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a multi-scene-adaptive flame recognition early warning system disclosed in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that if the terms "upper", "lower", "inner", "outer", etc. are used to indicate an orientation or positional relationship based on that shown in the drawings or that the application product is usually placed in use, the description is merely for convenience and simplicity, and it is not intended to indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore should not be construed as limiting the present application.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are only used to distinguish one description from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a multi-scene adaptive flame identification early warning method according to an embodiment of the present disclosure. As shown in fig. 1, a multi-scene-adaptive flame identification early warning method according to an embodiment of the present application includes:
s1, acquiring multi-scene flame images, and inputting the multi-scene flame images into a deep learning recognition model for training;
s2, acquiring a current scene image in real time, and judging a current illumination scene corresponding to the current scene image;
s3, inputting the current scene image into a trained deep learning identification model based on the current illumination scene, identifying the current scene image by the deep learning identification model, and outputting a flame identification result;
and S4, determining whether to output early warning information or not based on the flame identification result.
In the embodiment of the application, images under different illumination scenes are collected to train the deep learning recognition model; a model trained this way generalizes better, ensuring high recognition accuracy when flames are recognized in various scenes. The deep learning recognition model used by the application may be based on various native, improved or fused neural network algorithms.
Optionally, the lighting scene comprises a strong light scene and a weak light scene.
Strong and weak light scenes can be distinguished simply as outdoor versus indoor scenes, or more precisely on the basis of measured illumination intensity, which improves the accuracy of illumination-scene identification.
Optionally, determining the current illumination scene corresponding to the current scene image includes:
extracting the static area in the current scene image, binarizing it, segmenting the binarized static-area image according to direction, identifying the light intensity in each segment, and computing a weighted sum of the segment light intensities; if the sum is greater than a first threshold, the current scene image is judged to be a strong light scene, otherwise a weak light scene; the weights are preset based on attributes of the current scene.
In the embodiment of the application, dynamic regions of a scene image contain frequent activity of people and objects, which causes large fluctuations in reflected light intensity and easily interferes with light-intensity identification for the whole scene; the application therefore segments and identifies only the static area. The static area can be extracted by frame-differencing across multiple frames, a conventional technique in image recognition that is not repeated here. Attributes of the current scene (such as warehouse, shopping mall or office) can be entered in advance, since scenes with different attributes are lit in different areas. For example, in warehouse and office scenes light mainly shines from the top downward, so a larger weight is set for the ground area, a smaller weight for the walls, and the smallest weight for the top to avoid interference from the light sources themselves. In a shopping mall, besides top-down lighting, there are usually additional ambience lights such as advertising screens, and most of these strong light sources are on the walls, so the largest weight can be set for the ground and smaller weights for the top and wall areas. For outdoor scenes, the ground can be given the largest weight; wall areas are easily shadowed, so walls are given the smallest weight to reduce shadow interference, and the top a medium weight. The specific weight ranges can be set by those skilled in the art according to the actual situation, and this application does not limit them.
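As a concrete illustration of this judging step, the following Python sketch (using OpenCV) binarizes the static area, splits it into directional segments, and thresholds the weighted intensity sum. The top/wall/ground split, the Otsu binarization and the example weights are assumptions made for illustration; the patent specifies only that the static area is segmented according to direction and weighted by scene attributes.

```python
import cv2
import numpy as np

def classify_illumination(static_region: np.ndarray,
                          weights: dict,
                          first_threshold: float) -> str:
    """Judge the illumination scene from the static area of a frame.

    A minimal sketch: the directional segmentation is approximated here
    by three horizontal bands (top / wall / ground), an assumption.
    """
    gray = cv2.cvtColor(static_region, cv2.COLOR_BGR2GRAY)
    # Binarize the static area; Otsu is a stand-in for the unspecified method.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    h = binary.shape[0]
    segments = {
        "top":    binary[: h // 3],
        "wall":   binary[h // 3: 2 * h // 3],
        "ground": binary[2 * h // 3:],
    }
    # Light intensity of a segment: fraction of bright pixels after binarization.
    weighted_sum = sum(weights[name] * (seg.mean() / 255.0)
                       for name, seg in segments.items())
    return "strong" if weighted_sum > first_threshold else "weak"

# Example: warehouse-style weights (light shines top-down, ground weighted most).
warehouse_weights = {"ground": 0.6, "wall": 0.3, "top": 0.1}
```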
Optionally, the current scene image includes a normal image and an infrared image;
then, inputting the current scene image into a trained deep learning recognition model based on the current illumination scene, including:
if the current illumination scene is a strong light scene, inputting the infrared image in the current scene image into a trained deep learning identification model; and if the current illumination scene is a weak light scene, inputting the common image in the current scene image into a trained deep learning identification model.
When the current illumination scene is a strong light scene, the flame's own light is weaker than the strong light in the scene (for example sunlight or high-intensity lamps); under sunlight the flame appears smaller and its contour blurred, so recognition based on an ordinary image can hardly find the flame, or carries a high probability of misjudgment. Therefore, in a strong light scene the infrared image is input into the deep learning recognition model; the flame is obvious and prominent in the infrared image, making recognition more accurate and easier.
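The routing itself is then a one-line dispatch; a sketch under the assumption that the scene label comes from a classifier such as classify_illumination above:

```python
def select_model_input(illumination_scene: str, ordinary_image, infrared_image):
    """Feed the infrared image to the recognition model in a strong light
    scene, and the ordinary (visible-light) image in a weak light scene."""
    return infrared_image if illumination_scene == "strong" else ordinary_image
```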
Optionally, the deep learning identification model performs flame identification on the current scene image, including:
and identifying flames of all connected regions in the current scene image based on the fluctuation characteristic, the motion characteristic and the flicker characteristic.
Optionally, identifying flames of the connected regions in the current scene image based on the fluctuation characteristic, the motion characteristic and the flicker characteristic specifically includes:
1) The fluctuation characteristic of each connected region is calculated by the following formula (the formula is an equation image in the original and is not reproduced here):
where fluc is the fluctuation characteristic value; x_i is the i-th vertical step value of the connected region, i.e. the step height of each step after the contour histogram of the connected region is converted; the mean of the i vertical step heights also enters the formula; and a, b and c are preset power values.
If the fluctuation characteristic value fluc is greater than a second threshold, the connected region is judged to satisfy the fluctuation characteristic condition.
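The fluctuation formula itself survives only as an equation image, so the sketch below substitutes an illustrative dispersion statistic built from the named quantities (the step heights x_i, their mean, and the preset powers a, b and c); it is not the patent's exact formula.

```python
import numpy as np

def fluctuation_value(steps: np.ndarray, a: float, b: float, c: float) -> float:
    """Illustrative fluc over the vertical step heights of a connected
    region's contour histogram: large when the steps deviate strongly
    from their mean, as a flickering flame contour does (assumed form)."""
    mean = steps.mean()
    return float((np.abs(steps - mean) ** a).sum() ** b / (mean ** c + 1e-9))

# Satisfies the fluctuation condition if fluc exceeds the second threshold.
meets_fluctuation = fluctuation_value(np.array([3.0, 7.0, 2.0, 9.0]), 2, 0.5, 1) > 1.5
```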
2) Calculating the motion characteristic of each connected region by adopting the following method:
calculating the transverse motion amplitude of each connected region across multiple frames of the scene image; if the transverse motion amplitude is greater than a third threshold, the connected region is judged to have motion characteristics and to satisfy the motion characteristic condition.
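A sketch of the motion test, assuming the region is tracked by its centroid x-coordinate across frames; the patent does not fix the exact amplitude measure:

```python
def lateral_motion_amplitude(centroid_xs: list) -> float:
    """Lateral motion amplitude of one connected region: the spread of its
    centroid x-coordinate over several frames (an assumed measure)."""
    return max(centroid_xs) - min(centroid_xs)

# Satisfies the motion condition if the amplitude exceeds the third threshold.
meets_motion = lateral_motion_amplitude([120.0, 124.5, 118.2, 126.1]) > 5.0
```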
3) Calculating the flicker characteristics of each connected region by adopting the following method:
(the formula is an equation image in the original and is not reproduced here)
where flas is the flicker characteristic value, m is the accumulated state-change count of the connected region, and t(i) - t(i-1) is the time interval of each state change.
If the flicker characteristic value flas is greater than a fourth threshold, the connected region is judged to satisfy the flicker characteristic condition.
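The flicker formula is likewise only an image; one plausible reading, used in the sketch below, is the accumulated state-change count m divided by the summed intervals t(i) - t(i-1):

```python
def flicker_value(change_times: list) -> float:
    """Illustrative flas: state changes per unit time for one connected
    region, where change_times are the timestamps of successive state
    changes (an assumed reading of the patent's equation image)."""
    m = len(change_times)  # accumulated state-change count
    total_interval = sum(t1 - t0 for t0, t1 in zip(change_times, change_times[1:]))
    return m / total_interval if total_interval > 0 else 0.0

# Satisfies the flicker condition if flas exceeds the fourth threshold.
meets_flicker = flicker_value([0.0, 0.1, 0.25, 0.3, 0.5]) > 8.0
```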
In the embodiment of the application, the flame is identified by its fluctuation, motion and flicker characteristics, which are extracted for each connected region using the formulas above.
4) Calculating a flame identification value using the following formula:
fire_true = αF1 - βM + γF2
where fire_true is the flame identification value; F1 is the fluctuation characteristic normalized value, positively correlated with the degree to which the fluctuation characteristic value fluc exceeds the second threshold; M is the motion characteristic normalized value, positively correlated with the degree to which the lateral motion amplitude exceeds the third threshold; F2 is the flicker characteristic normalized value, positively correlated with the degree to which the flicker characteristic value flas exceeds the fourth threshold; and α, β and γ are the preset weights of F1, M and F2 respectively. However, if F1 > θ1 and F2 > θ2, β is adjusted to β' with β' < β (the adjustment formula is an equation image in the original and is not reproduced here), where θ1 and θ2 are preset thresholds.
If the flame identification value fire_true is greater than a fifth threshold, the connected region is determined to be a flame.
When both F1 > θ1 and F2 > θ2 hold, the fluctuation and flicker properties of the connected region are obvious and highly flame-like, so the weight of the motion characteristic normalized value M is reduced and the weights of the fluctuation and flicker terms are increased proportionally. This strengthens tolerance to lateral motion; in particular, large flames show obvious lateral motion, and this adjustment reduces the possibility of misjudgment caused by lateral motion when fluctuation and flicker are obvious.
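A sketch of the combination rule follows. The reduction of β to β' and the proportional increase of the fluctuation and flicker weights follow the description above; since the exact adjustment formula is an equation image, the proportional redistribution here is an assumption consistent with that description.

```python
def flame_identification_value(F1: float, M: float, F2: float,
                               alpha: float, beta: float, gamma: float,
                               theta1: float, theta2: float,
                               beta_prime: float) -> float:
    """fire_true = alpha*F1 - beta*M + gamma*F2, with the motion weight
    shrunk to beta_prime (< beta) when both F1 and F2 are clearly
    flame-like; the freed weight goes proportionally to alpha and gamma."""
    if F1 > theta1 and F2 > theta2:
        freed = beta - beta_prime
        total = alpha + gamma
        alpha, gamma = alpha + freed * alpha / total, gamma + freed * gamma / total
        beta = beta_prime
    return alpha * F1 - beta * M + gamma * F2

# The connected region is judged a flame if this exceeds the fifth threshold.
is_flame = flame_identification_value(0.9, 0.7, 0.8,
                                      0.5, 0.3, 0.2,
                                      0.6, 0.6, 0.1) > 0.5
```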
Optionally, in step S1, before the multi-scene flame images are input into the deep learning recognition model for training, the method further includes:
and manually marking the flames in the flame images of the multiple scenes.
In the embodiment of the present application, an important part of the preparation before training the deep learning recognition model is labeling, i.e. manually circling the flame part of each scene image in the training set; the manually labeled training set is then input into the deep learning recognition model for training. Correspondingly, the images in the training set also match the illumination scene: for strong light scenes the training images are replaced by infrared images, while weak light scenes still use ordinary images.
Optionally, the method further comprises an evaluation step of calculating the accuracy, the recall rate and the false alarm rate respectively by using the following formulas:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Recall = TP / (TP + FN)
False alarm rate = FP / (FP + TN)
(the original formulas are equation images; the standard definitions above are reconstructed from the FN/FP/TN/TP definitions below)
where FN denotes that a flame is present but the recognition result says absent; FP denotes that no flame is present but the recognition result says present; TN denotes that no flame is present and the recognition result also says absent; TP denotes that a flame is present and the recognition result also says present.
For example, when the recognition result is that a flame exists, if the early warning information is confirmed, it is determined that a flame truly exists; if the early warning information is rejected, it is determined that no flame exists. When the recognition result is that no flame exists, whether fire-extinguishing behavior occurs in the current scene image is continuously detected over a subsequent preset time period: if it occurs, a flame is judged to have truly existed; if not, no flame truly existed. With this judging method the FN, FP, TN and TP values can be determined simply and reliably, and when the accuracy, recall rate or false alarm rate fails to meet its corresponding threshold, the deep learning recognition model needs to be further optimized and updated.
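Under the standard metric definitions reconstructed above (an assumption, since the original formulas are equation images), the evaluation step reduces to:

```python
def evaluate(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Evaluation metrics accumulated from confirmed / rejected warnings
    and from the fire-extinguishing-behavior check described here."""
    return {
        "accuracy":         (tp + tn) / (tp + tn + fp + fn),
        "recall":           tp / (tp + fn),
        "false_alarm_rate": fp / (fp + tn),
    }

# If any metric fails its threshold, the recognition model is retrained.
metrics = evaluate(tp=90, fp=5, tn=100, fn=10)
```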
Fire-extinguishing behavior can be judged by detecting whether the current scene image contains a fire extinguisher in a non-fixed state. If a fire extinguisher appears in the scene image, potential fire-extinguishing behavior is indicated; it is then further judged whether the extinguisher remains static over a period longer than a time threshold. If so, the extinguisher was originally placed in the scene; otherwise it is in use, and fire-extinguishing behavior has occurred.
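A sketch of that judgment, assuming the extinguisher detector and tracker already produce time-stamped centroid observations (the data layout and movement tolerance are illustrative):

```python
def extinguishing_behavior(track: list, time_threshold: float,
                           move_tol: float = 5.0) -> bool:
    """track: time-ordered (timestamp, (cx, cy)) observations of one
    detected fire extinguisher. Static for longer than time_threshold
    means it was originally placed there; movement means it is in use."""
    if len(track) < 2:
        return False  # no extinguisher observed over time: no behavior
    (t0, (x0, y0)) = track[0]
    (t1, (x1, y1)) = track[-1]
    stationary = abs(x1 - x0) <= move_tol and abs(y1 - y0) <= move_tol
    if stationary and (t1 - t0) > time_threshold:
        return False  # fixed installation, not a fire-extinguishing action
    return True       # the extinguisher has moved: judged to be in use
```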
Example two
Referring to fig. 2, fig. 2 is a schematic structural diagram of a multi-scene adaptive flame identification early warning system according to an embodiment of the present disclosure. As shown in fig. 2, a multi-scene-adaptive flame identification early warning system according to an embodiment of the present application corresponds to the method in the first embodiment; the system comprises an acquisition module, a deep learning identification model and an early warning module:
the acquisition module is used for acquiring flame images of multiple scenes and inputting the flame images into the deep learning recognition model for training; the system is also used for acquiring a current scene image in real time, judging a current illumination scene corresponding to the current scene image, and inputting the current scene image into a trained deep learning identification model based on the current illumination scene;
the deep learning identification model is used for training based on the flame images of the multiple scenes, identifying the current scene image and outputting a flame identification result;
and the early warning module is used for determining whether to output early warning information or not based on the flame identification result.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 3, an electronic device according to an embodiment of the present application includes:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform the steps of the method according to embodiment one.
Example four
The embodiment of the present application further provides a computer storage medium, where the computer storage medium is disposed in a server, and the storage medium stores computer instructions, and the computer instructions are used to execute the steps of the method according to the first embodiment when being called.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (5)

1. A flame identification early warning method suitable for multiple scenes, characterized by comprising the following steps:
s1, acquiring multi-scene flame images, and inputting the multi-scene flame images into a deep learning recognition model for training;
s2, acquiring a current scene image in real time, and judging a current illumination scene corresponding to the current scene image;
s3, inputting the current scene image into a trained deep learning identification model based on the current illumination scene, identifying the current scene image by the deep learning identification model, and outputting a flame identification result;
s4, determining whether to output early warning information or not based on the flame identification result;
the illumination scene comprises a strong light scene and a weak light scene;
the judging the current illumination scene corresponding to the current scene image comprises:
extracting the static area in the current scene image, binarizing it, segmenting the binarized static-area image according to direction, identifying the light intensity in each segment, and computing a weighted sum of the segment light intensities; if the sum is greater than a first threshold, judging that the current scene image is a strong light scene, otherwise a weak light scene; wherein the weights are preset based on attributes of the current scene;
the current scene image comprises a common image and an infrared image;
then, inputting the current scene image into a trained deep learning recognition model based on the current illumination scene, including:
if the current illumination scene is a strong light scene, inputting the infrared image in the current scene image into a trained deep learning identification model; if the current illumination scene is a weak light scene, inputting a common image in the current scene image into a trained deep learning identification model;
the deep learning identification model carries out flame identification on the current scene image, and the method comprises the following steps:
performing flame identification on each connected region in the current scene image based on the fluctuation characteristic, the motion characteristic and the flicker characteristic;
the flame identification of each connected region in the current scene image based on the fluctuation characteristic, the motion characteristic and the flicker characteristic specifically comprises the following steps:
1) The fluctuation characteristic of each connected region is calculated by the following formula (the formula is an equation image in the original and is not reproduced here):
where fluc is the fluctuation characteristic value; x_i is the i-th vertical step value of the connected region, i.e. the step height of each step after the contour histogram of the connected region is converted; the mean of the i vertical step heights also enters the formula; and a, b and c are preset power values;
if the fluctuation characteristic value fluc is greater than a second threshold, judging that the connected region satisfies the fluctuation characteristic condition;
2) Calculating the motion characteristic of each connected region by adopting the following method:
calculating the transverse motion amplitude of each connected region through a plurality of frames of the scene images, and if the transverse motion amplitude is larger than a third threshold value, judging that the connected regions have motion characteristics and meet the motion characteristic conditions;
3) Calculating the flicker characteristics of each connected region by adopting the following method:
(the formula is an equation image in the original and is not reproduced here)
where flas is the flicker characteristic value, m is the accumulated state-change count of the connected region, and t(i) - t(i-1) is the time interval of each state change;
if the flicker characteristic value flas is greater than a fourth threshold, judging that the connected region satisfies the flicker characteristic condition;
4) Calculating a flame identification value using the following formula:
fire_true=αF1-βM+γF2
where fire_true is the flame identification value; F1 is the fluctuation characteristic normalized value, positively correlated with the degree to which the fluctuation characteristic value fluc exceeds the second threshold; M is the motion characteristic normalized value, positively correlated with the degree to which the lateral motion amplitude exceeds the third threshold; F2 is the flicker characteristic normalized value, positively correlated with the degree to which the flicker characteristic value flas exceeds the fourth threshold; and α, β and γ are the preset weights of F1, M and F2 respectively; however, if F1 > θ1 and F2 > θ2, β is adjusted to β' with β' < β (the adjustment formulas are equation images in the original and are not reproduced here), where θ1 and θ2 are preset thresholds;
and if the flame identification value fire _ true is greater than a fifth threshold value, determining that the connected region is a flame.
2. The method of claim 1, wherein in step S1, before the multi-scene flame images are input into the deep learning recognition model for training, the method further comprises:
and manually marking the flames in the flame images of the multiple scenes.
3. A multi-scenario-adaptive flame recognition early warning system for implementing the method of any one of claims 1-2, the system comprising an acquisition module, a deep learning recognition model and an early warning module:
the acquisition module is used for acquiring multi-scene flame images and inputting them into the deep learning recognition model for training; it is also used for acquiring a current scene image in real time, judging the current illumination scene corresponding to the current scene image, and inputting the current scene image into the trained deep learning identification model based on the current illumination scene;
the deep learning identification model is used for training based on the flame images of the multiple scenes, identifying the current scene image and outputting a flame identification result;
and the early warning module is used for determining whether to output early warning information or not based on the flame identification result.
4. An electronic device, the device comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform the steps of the method according to any of claims 1-2.
5. A computer storage medium provided at a server, the storage medium storing computer instructions for performing the steps of the method according to any one of claims 1-2 when the computer instructions are invoked.
CN202110346931.7A 2021-03-31 2021-03-31 Flame identification early warning method suitable for multiple scenes Active CN113299034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110346931.7A CN113299034B (en) 2021-03-31 2021-03-31 Flame identification early warning method suitable for multiple scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110346931.7A CN113299034B (en) 2021-03-31 2021-03-31 Flame identification early warning method suitable for multiple scenes

Publications (2)

Publication Number Publication Date
CN113299034A CN113299034A (en) 2021-08-24
CN113299034B true CN113299034B (en) 2023-01-31

Family

ID=77319257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110346931.7A Active CN113299034B (en) 2021-03-31 2021-03-31 Flame identification early warning method suitable for multiple scenes

Country Status (1)

Country Link
CN (1) CN113299034B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841920A (en) * 2022-03-29 2022-08-02 清华大学 Flame identification method and device based on image processing and electronic equipment
CN116432090B (en) * 2023-06-13 2023-10-20 荣耀终端有限公司 Scene recognition method, system and terminal equipment

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2419148C2 (en) * 2006-07-28 2011-05-20 Телеспацио С.П.А. Automatic detection of fires on earth surface and atmospheric agents, such as clouds, cloudy mantle, fog and similar, by means of satellite system
TWI420423B (en) * 2011-01-27 2013-12-21 Chang Jung Christian University Machine vision flame identification system and method
CN202720745U (en) * 2012-05-23 2013-02-06 无锡蓝天电子有限公司 Image type smoke fire detector
CN102708651B (en) * 2012-05-23 2017-03-08 无锡蓝天电子股份有限公司 A kind of image type smoke fire disaster detection and system
CN102867386B (en) * 2012-09-10 2015-01-07 南京恩博科技有限公司 Intelligent video analysis-based forest smoke and fire detection method and special system thereof
CN103116746B (en) * 2013-03-08 2016-08-03 中国科学技术大学 A kind of video flame detection method based on multiple features fusion technology
CA2891435A1 (en) * 2014-06-11 2015-12-11 Kevin Lee Friesth Quintuple-effect generation multi-cycle hybrid renewable energy system with integrated energy provisioning, storage facilities and amalgamated control system
CN106250845A (en) * 2016-07-28 2016-12-21 北京智芯原动科技有限公司 Flame detecting method based on convolutional neural networks and device
CN106845443B (en) * 2017-02-15 2019-12-06 福建船政交通职业学院 Video flame detection method based on multi-feature fusion
CN106997461B (en) * 2017-03-28 2019-09-17 浙江大华技术股份有限公司 A kind of firework detecting method and device
CN107729811B (en) * 2017-09-13 2020-07-07 浙江大学 Night flame detection method based on scene modeling

Also Published As

Publication number Publication date
CN113299034A (en) 2021-08-24

Similar Documents

Publication Publication Date Title
Gong et al. A real-time fire detection method from video with multifeature fusion
CN111091072A (en) YOLOv 3-based flame and dense smoke detection method
JP6867153B2 (en) Abnormality monitoring system
CN110689054B (en) Worker violation monitoring method
CN107437318B (en) Visible light intelligent recognition algorithm
CN113299034B (en) Flame identification early warning method suitable for multiple scenes
CN101751744B (en) Detection and early warning method of smoke
CN111626188B (en) Indoor uncontrollable open fire monitoring method and system
CN108389359B (en) Deep learning-based urban fire alarm method
CN104598895A (en) Method and device for flame detection based on video image analysis
CN106682635A (en) Smoke detecting method based on random forest characteristic selection
CN103400111A (en) Method for detecting fire accident on expressway or in tunnel based on video detection technology
CN109741565B (en) Coal mine fire disaster recognition system and method
Chowdhury et al. Computer vision and smoke sensor based fire detection system
CN114202646A (en) Infrared image smoking detection method and system based on deep learning
CN108664875A (en) Underground belt-conveying monitoring method based on image recognition
KR101196678B1 (en) Real-time fire detection device and method
CN110276228B (en) Multi-feature fusion video fire disaster identification method
CN112699801A (en) Fire identification method and system based on video image
CN107704818A (en) A kind of fire detection system based on video image
CN107729811B (en) Night flame detection method based on scene modeling
CN112613483A (en) Outdoor fire early warning method based on semantic segmentation and recognition
CN109074714B (en) Detection apparatus, method and storage medium for detecting event
KR102432675B1 (en) Monitoring and safety alarm system for abnormal signs and drowning accidents of pool users using CCTV images
CN112507925A (en) Fire detection method based on slow characteristic analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant