CN115311604B - Fire fighting method based on Internet of things - Google Patents

Fire fighting method based on Internet of things

Info

Publication number
CN115311604B
CN115311604B · CN202211195077.XA
Authority
CN
China
Prior art keywords
cluster
acquiring
clusters
following
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211195077.XA
Other languages
Chinese (zh)
Other versions
CN115311604A (en)
Inventor
魏景
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Haizhou Security Technology Co ltd
Original Assignee
Jiangsu Haizhou Security Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Haizhou Security Technology Co ltd filed Critical Jiangsu Haizhou Security Technology Co ltd
Priority to CN202211195077.XA priority Critical patent/CN115311604B/en
Publication of CN115311604A publication Critical patent/CN115311604A/en
Application granted granted Critical
Publication of CN115311604B publication Critical patent/CN115311604B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y - INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00 - IoT characterised by the purpose of the information processing
    • G16Y40/10 - Detection; Monitoring
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/28 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture specially adapted for farming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing, in particular to a fire fighting method based on the Internet of things. The method comprises the following steps: acquiring a multi-frame gray image in a warehouse area; acquiring a difference image between adjacent frame gray images, and clustering difference pixel points in each difference image to obtain a plurality of clusters, wherein the difference pixel points are pixel points with nonzero gray values in the difference image; acquiring an average gray value of each cluster in the initial differential image and the adjacent multiple differential images thereof and a distance between every two clusters to construct a following function, and acquiring following clusters and followed clusters in all clusters based on the following function; acquiring a cluster center of each cluster, acquiring position changes of a following cluster and a followed cluster based on each cluster center, and further acquiring a suspicious region; converting the suspicious region into a frequency domain space to obtain energy changes of a plurality of continuous suspicious regions, determining a smoke region according to the energy changes, and performing fire alarm; the result of identifying the smoke area is more reliable and accurate.

Description

Fire fighting method based on Internet of things
Technical Field
The invention relates to the technical field of data processing, in particular to a fire fighting method based on the Internet of things.
Background
The fire-fighting Internet of Things forms a monitoring network from a large number of sensor nodes and collects information such as temperature and toxic gas through various sensors, realizing real-time monitoring of the environment and helping users find problems in time. It builds a highly sensitive fire-fighting infrastructure that enables real-time, dynamic, interactive and integrated acquisition, transmission and processing of fire-fighting information, helping people use resources reasonably and avoid disasters and casualties.
When goods spontaneously combust or are ignited by a cigarette butt in a closed warehouse environment, the smoldering period is long because the indoor air does not circulate, so once a large amount of air rushes in through the warehouse door the fire spreads rapidly and is difficult to fight. Moreover, since warehouses generally have high ceilings and open spaces, fire detection is relatively ineffective: temperature and smoke sensors are installed high up and are not sensitive to weak smoke, which creates a serious fire-fighting hazard. Video monitoring can therefore capture smoldering or small flames more quickly than the feedback of temperature and smoke sensors; however, the camera installed in a warehouse covers a large area, which makes it difficult to identify weak smoke and small flames in the video images.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a fire fighting method based on the internet of things, which comprises the following steps:
acquiring a plurality of frames of video images in a warehouse area, wherein the warehouse area comprises a smoke generating point, and converting each frame of video image into a corresponding gray image;
acquiring a differential image between adjacent frame gray level images, recording a differential image of a pixel point with a nonzero gray level value for the first time as an initial differential image, and acquiring a plurality of adjacent differential images behind the initial differential image; clustering differential pixel points in each differential image to obtain a plurality of clusters, wherein the differential pixel points are pixel points with nonzero gray values in the differential image;
acquiring an average gray value of each cluster in the initial differential image and the adjacent multiple differential images thereof, constructing a following function based on the average gray value and the distance between every two clusters, and acquiring following clusters and followed clusters in all clusters based on the following function;
acquiring a cluster center of each cluster in the initial differential image and the adjacent differential images, acquiring position changes of the following cluster and the followed cluster based on each cluster center, and acquiring a suspicious region in each differential image based on the position changes;
and converting the suspicious region into a frequency domain space, acquiring energy changes of a plurality of continuous suspicious regions based on the frequency domain space, determining a smoke region according to the energy changes, and performing fire alarm based on the smoke region.
Preferably, the step of constructing a following function based on the average gray value and the distance between every two clusters includes:
obtaining the difference value of the average gray values between every two clusters; taking the coordinate of the cluster center of each cluster as the coordinate position of the cluster, and calculating the distance between two clusters according to the coordinate position between every two clusters;
and taking the negative number of the difference value and the negative number of the distance as the power of an exponential function with a natural constant as a base respectively, and obtaining a following function based on the two exponential functions.
Preferably, the step of obtaining a following cluster and a followed cluster in all clusters based on the following function includes:
taking any cluster in the current frame differential image as a target cluster, and calculating a following function between any cluster in the next frame differential image and the target cluster, wherein when the following function is maximum, the cluster in the next frame differential image is a corresponding cluster of the target cluster;
the target cluster is a followed cluster, and a corresponding cluster of the target cluster is a following cluster.
Preferably, the step of obtaining the position change of the following cluster and the followed cluster based on the center of each cluster includes:
respectively acquiring the vertical coordinates corresponding to the followed cluster and the following cluster, and calculating the vertical coordinate difference value between the followed cluster and the following cluster; the difference value of the vertical coordinates is position change.
Preferably, the step of acquiring the suspicious region in each differential image based on the position change includes:
selecting any followed cluster and a corresponding following cluster as a group of following groups, and when the position change corresponding to the following groups in the initial differential image and the adjacent multi-frame differential image is greater than 0, the clusters in the following groups are suspicious clusters;
when the position change corresponding to the following group in the initial differential image and the multi-frame differential images adjacent to the initial differential image is less than 0, recording the differential image with the position change less than 0 for the first time as a marked image;
acquiring a cluster which is closest to the cluster center in the following group in the marked image, and recording the cluster as a marked cluster, and acquiring a first direction vector of the cluster in the following group and the marked cluster;
acquiring a second direction vector between a cluster in the following group in the marked image and a corresponding cluster in a differential image of a next frame of the marked image;
acquiring a degree of similarity based on the first direction vector and the second direction vector; when the similarity degree is larger than a preset threshold value, the cluster in the following group is a suspicious cluster;
and acquiring the position change tracks of all the suspicious clusters, and framing the tracks with the maximum circumscribed rectangle to obtain a suspicious region.
Preferably, the step of obtaining energy changes of a plurality of consecutive suspicious regions based on the frequency domain space includes:
acquiring a frequency domain area of each suspicious region in a frequency domain space, calculating the average brightness of all frequency domain points in the frequency domain area, and acquiring the energy change of the suspicious region based on the average brightness.
The invention has the following beneficial effects: in the embodiment of the invention, the motion situation of objects in the warehouse area is obtained by analyzing multiple frames of differential images; a plurality of clusters are obtained by clustering the differential pixel points with nonzero gray values in the differential images, and the following clusters and followed clusters are obtained by analysis based on the average gray value and the distance change of each cluster, so that the motion of the objects is analyzed more accurately. Further, suspicious regions are found based on the position changes between the following clusters and the followed clusters, which screens the suspicious regions more convincingly according to the motion characteristics of smoke; the suspicious regions are then analyzed for energy in the corresponding frequency-domain space to obtain the final smoke region, making the result of smoke-region identification more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a fire fighting method based on the internet of things according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve the intended objects and their effects, the fire fighting method based on the Internet of Things according to the present invention, its specific implementation, structure, features and effects are described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The specific scheme of the fire fighting method based on the internet of things is specifically described below with reference to the attached drawings.
Referring to fig. 1, a flowchart of a method for fire fighting based on the internet of things according to an embodiment of the present invention is shown, where the method includes the following steps:
step S100, obtaining a plurality of frames of video images in a warehouse area, wherein the warehouse area comprises a smoke generating point, and converting each frame of video image into a corresponding gray image.
Specifically, a smoke generating point is placed at a random position in the warehouse to simulate the occurrence of a small flame or a smoldering situation, and continuous multi-frame video images are collected by the monitoring system. Each frame of video image is then converted into a gray image, which speeds up data processing when the smoke alarm system identifies smoke and reduces the redundancy and interference of unnecessary color information.
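As an illustration of this step, the following is a minimal sketch (not part of the patent text) of grabbing consecutive frames from a monitoring stream and converting them to gray images with OpenCV; the stream address and frame count are hypothetical.

```python
import cv2

def collect_gray_frames(source="rtsp://warehouse-camera/stream", max_frames=60):
    """Grab consecutive frames from the monitoring source and convert each one to grayscale."""
    cap = cv2.VideoCapture(source)
    gray_frames = []
    while len(gray_frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break  # stream ended or read failure
        # Drop the colour channels to speed up later processing
        gray_frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return gray_frames
```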
Step S200, obtaining a difference image between adjacent frame gray level images, recording the difference image of a pixel point with a non-zero gray level value appearing for the first time as an initial difference image, and obtaining a plurality of adjacent difference images after the initial difference image; and clustering the differential pixel points in each differential image to obtain a plurality of clusters, wherein the differential pixel points are pixel points with nonzero gray value in the differential image.
When goods are smoldering or burning with a small flame, monitoring dead angles clearly hinder early warning if sparks and flames must be captured directly; however, the smoke caused by the fire scatters and rises, so it can pass beyond the monitoring dead angles and be captured by video monitoring. Traditional smoke detection methods analyze various visual characteristics of smoke to derive its motion information, locate the suspicious region in each frame, and then analyze features of the smoke such as color and texture to obtain a detection result. However, smoke has no fixed form and moves irregularly and chaotically; in a small fire in particular the smoke is very thin, and capturing it from visual features alone easily reduces accuracy because of scene interference.
In the embodiment of the invention, all gray images are processed by frame differencing, i.e., the difference between every two adjacent frames of gray images is used for a preliminary screening of suspicious regions. The difference of the corresponding pixel points in the two frames of gray images is computed as

$$g_i = \left| g_i^{\,t+1} - g_i^{\,t} \right|$$

where $g_i$ denotes the gray value of the $i$-th pixel point in the differential image, $g_i^{\,t}$ denotes the gray value of the $i$-th pixel point in the $t$-th frame gray image, and $g_i^{\,t+1}$ denotes the gray value of the $i$-th pixel point in the $(t+1)$-th frame gray image.

By analogy, the gray difference values of the corresponding pixel points between every two adjacent frames of gray images are obtained, yielding the differential images. When people, carts and the like move in the warehouse, the differential image contains a lot of information; for weak smoke, the differential image contains very little information; for the static part of the image, the gray value of the pixel points in the differential image is 0. In the embodiment of the invention, pixel points with nonzero gray values in a differential image are recorded as differential pixel points; when the differential images are computed in sequence for the continuous multi-frame gray images, the differential image in which differential pixel points appear for the first time is recorded as the initial differential image.
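A minimal sketch of the differencing just described, assuming the gray frames come from the capture step above; the function names and the optional noise floor are illustrative.

```python
import numpy as np

def difference_images(gray_frames):
    """Absolute gray-value difference between every pair of adjacent frames."""
    return [np.abs(gray_frames[t + 1].astype(np.int16) -
                   gray_frames[t].astype(np.int16)).astype(np.uint8)
            for t in range(len(gray_frames) - 1)]

def find_initial_diff(diffs, noise_floor=0):
    """Index of the first differential image containing differential pixels
    (pixels with nonzero gray value; a small noise floor may be used in practice)."""
    for idx, d in enumerate(diffs):
        if np.count_nonzero(d > noise_floor):
            return idx
    return None
```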
Furthermore, the differential images obtained in sequence are used as nodes; when a differential image contains differential pixel points, it indicates that there is dynamic change in the gray images, so the motion track needs to be tracked. The differential pixel points in the initial differential image and its 10 adjacent differential images are clustered to obtain a plurality of clusters, each of which corresponds to an object in a local motion state, for example a moving person, a moving cart or smoke in the gray image. Because the differential pixel points in each differential image are clustered, each cluster represents a different moving object; the clustering algorithm is an existing, well-known algorithm and is not described again here.
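The patent does not name a particular clustering algorithm; the sketch below uses DBSCAN on the coordinates of the differential pixels as one plausible choice, with illustrative parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_diff_pixels(diff_img, eps=3.0, min_samples=10):
    """Cluster the differential pixels (nonzero pixels) of one differential image.
    Returns a list of (N, 2) arrays of (row, col) coordinates, one per cluster."""
    coords = np.column_stack(np.nonzero(diff_img))
    if coords.shape[0] == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    return [coords[labels == k] for k in sorted(set(labels)) if k != -1]  # -1 marks noise
```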
Step S300, obtaining an average gray value of each cluster in the initial differential image and the adjacent differential images, constructing a following function based on the average gray value and the distance between every two clusters, and obtaining following clusters and followed clusters in all clusters based on the following function.
Step S200 yields a plurality of clusters in the initial differential image and its adjacent differential images; the coordinate of the center point of each cluster is taken as the coordinate position of that cluster. The motion change of the same object, and hence of the same cluster, between adjacent frames is very small, so the distance moved by the coordinate position of the same cluster between adjacent frames is also very small; that is, when a cluster of the current frame moves to the next frame, the cluster in the next frame whose coordinate position is closest to it is likely the same cluster. Because the background pixel points are removed by the differencing, the differential pixel points represent the pixel characteristics of the object, and they hardly change during the motion of the same object; smoke, however, does not simply translate but diffuses and expands, so the feature following is carried out by also considering the similarity of the average gray values.
The average gray value of each cluster in the initial differential image and its adjacent differential images is obtained, and the following function is constructed from the average gray values as:

$$F(j,k) = e^{-\left|\bar{g}_j^{\,t+1} - \bar{g}_k^{\,t}\right|} \cdot e^{-\left\| c_j^{\,t+1} - c_k^{\,t} \right\|}$$

where $F(j,k)$ denotes the following function; $\bar{g}_j^{\,t+1}$ denotes the average gray value of the $j$-th cluster in the $(t+1)$-th frame differential image; $\bar{g}_k^{\,t}$ denotes the average gray value of the $k$-th cluster in the $t$-th frame differential image; $e$ denotes the natural constant; $c_j^{\,t+1}$ denotes the coordinate position (cluster center) of the $j$-th cluster in the $(t+1)$-th frame differential image; $c_k^{\,t}$ denotes the coordinate position of the $k$-th cluster in the $t$-th frame differential image; $\left\| \cdot \right\|$ denotes the Euclidean distance.

In the embodiment of the invention, two adjacent frame differential images are divided into a following image and a followed image: if the current frame differential image is the followed image, the next frame differential image is the following image, i.e., the next frame differential image follows the current frame differential image, and the current frame differential image is followed by the next frame differential image. Correspondingly, the same cluster in the two adjacent frame differential images is divided into a following cluster and a followed cluster.

The term $e^{-\left|\bar{g}_j^{\,t+1} - \bar{g}_k^{\,t}\right|}$ normalizes the absolute difference between the average gray values of the $j$-th cluster in the $(t+1)$-th frame image and the $k$-th cluster in the $t$-th frame image. Although the same cluster changes between consecutive frame differential images, the difference between the average gray values of its two clusters in the previous and next frames is the smallest; that is, the smaller the difference in average gray value between the $k$-th cluster in the $t$-th frame image and the $j$-th cluster in the $(t+1)$-th frame image, the more likely a following relationship exists.

For the $(t+1)$-th frame image and the $t$-th frame image, the $t$-th frame image is the followed image and the $(t+1)$-th frame image is the following image. The term $\left\| c_j^{\,t+1} - c_k^{\,t} \right\|$ is the Euclidean distance between the coordinate positions of any two clusters in the two adjacent frame differential images. Based on the characteristic that the motion change of the same object or cluster between consecutive frames is minimal, with the $k$-th cluster in the $t$-th frame image taken as the followed cluster, if the Euclidean distance between the $j$-th cluster in the $(t+1)$-th frame image and the $k$-th cluster in the $t$-th frame image is far smaller than the distances to the other clusters, then the $j$-th cluster in the $(t+1)$-th frame image is the following cluster of the $k$-th cluster in the $t$-th frame image.
In two consecutive frames of differential images, the differencing result of the same object may differ, so the motion track of the object must be followed through the changing differencing results, that is, for each cluster the corresponding cluster position in the next frame must be found. Even if the shape and size of a cluster change over consecutive differencing results, its average gray value difference is the smallest and its coordinate position is relatively the nearest; therefore, the following cluster and the followed cluster in every two adjacent frame differential images are found according to the average gray value and the coordinate position. The larger the value of the following function, the more likely the two clusters are in a following relationship.
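A sketch of the cluster matching implied by the following function, assuming the per-frame cluster coordinate lists produced by the clustering step; reading the combination of the two exponential terms as a product is an interpretation of the text, and all names are illustrative.

```python
import numpy as np

def cluster_features(diff_img, cluster_coords):
    """Average gray value and cluster-center coordinates of one cluster."""
    gray = diff_img[cluster_coords[:, 0], cluster_coords[:, 1]].mean()
    center = cluster_coords.mean(axis=0)
    return gray, center

def follow_function(gray_a, center_a, gray_b, center_b):
    """F = exp(-|mean-gray difference|) * exp(-Euclidean distance of cluster centers)."""
    return np.exp(-abs(gray_a - gray_b)) * np.exp(-np.linalg.norm(center_a - center_b))

def match_following_clusters(diff_t, clusters_t, diff_t1, clusters_t1):
    """For each followed cluster in frame t, pick the cluster in frame t+1
    that maximises the following function (its following cluster)."""
    feats_t = [cluster_features(diff_t, c) for c in clusters_t]
    feats_t1 = [cluster_features(diff_t1, c) for c in clusters_t1]
    pairs = {}
    for k, (g_k, c_k) in enumerate(feats_t):
        scores = [follow_function(g_j, c_j, g_k, c_k) for g_j, c_j in feats_t1]
        pairs[k] = int(np.argmax(scores)) if scores else None
    return pairs  # followed-cluster index -> following-cluster index
```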
Step S400, acquiring a cluster center of each cluster in the initial differential image and the adjacent differential images, acquiring position changes of a following cluster and a followed cluster based on the cluster center, and acquiring a suspicious region in each differential image based on the position changes.
Considering that the storage warehouse is an almost closed environment with minimal dynamic change most of the time, smoke always diffuses straight upwards when no other object is moving; when there are moving objects around the smoke, their motion lowers the surrounding air pressure, the original upward drift of the smoke changes, and for a short time the smoke drifts toward the adjacent moving object. Suspicious regions that may be smoke are therefore determined from the changes of the cluster coordinate positions in the consecutive frame differential images.

Any followed cluster and its corresponding following cluster are selected as a following group. When the position changes corresponding to a following group in the initial differential image and its adjacent multi-frame differential images are all greater than 0, the clusters in the following group are suspicious clusters. When a position change corresponding to the following group in the initial differential image and its adjacent multi-frame differential images is less than 0, the differential image in which the position change is less than 0 for the first time is recorded as the marked image. In the marked image, the cluster whose cluster center is closest to the cluster of the following group is recorded as the marked cluster, and the first direction vector between the cluster of the following group and the marked cluster is obtained. The second direction vector between the cluster of the following group in the marked image and its corresponding cluster in the next frame differential image after the marked image is obtained. A degree of similarity is obtained from the first direction vector and the second direction vector; when the degree of similarity is greater than a preset threshold, the cluster in the following group is a suspicious cluster. The position change tracks of all suspicious clusters are obtained and framed with the maximum circumscribed rectangle to obtain the suspicious regions.
Specifically, a plurality of groups of following clusters and followed clusters in the adjacent frame differential images were obtained in step S300. A following cluster and its followed cluster are recorded as one following group, and the analysis is carried out on any following group. The position change of the $n$-th following group between any two adjacent frame differential images is

$$\Delta y_n = y_n^{\,t+1} - y_n^{\,t}$$

where $y_n^{\,t}$ and $y_n^{\,t+1}$ denote the vertical coordinates of the cluster centers of the followed cluster and the following cluster of the $n$-th following group. If $\Delta y_n > 0$ holds in the initial differential image and all of its adjacent differential images, the cluster center rises monotonically along the vertical axis and the clusters in the following group are regarded as suspicious clusters, i.e., they conform to the characteristic that smoke keeps diffusing upwards. The coordinate position change track of the suspicious cluster over the 10 frames of differential images is obtained and framed with its maximum circumscribed rectangle, and the framed area is a suspicious region.

Further, when the vertical-coordinate difference of the two clusters of the same following group is not greater than 0 in some frame of the 10 differential images, i.e., $\Delta y_n \le 0$, the upward motion track of the cluster has changed, and a secondary suspicious-region determination is needed to judge whether the smoke may be drifting toward the nearest moving object. Let $t_0$ denote the frame number at which the vertical-coordinate difference between the two clusters of the same following group is not greater than 0 for the first time in the 10 frames of differential images; it is the time node at which the motion track of the object changes. In the $t_0$-th frame differential image, the cluster whose cluster center has the smallest Euclidean distance to the cluster of the $n$-th following group is recorded as the marked cluster. In the embodiment of the invention, it is assumed by default that when the cluster where the smoke lies deviates from its upward direction, it is influenced by the moving object closest to it.

When a nearby object or cluster moves, the surrounding air pressure decreases, the motion state of the smoke changes, and its motion direction tends toward the nearby moving object; that is, during the $t_0$-th frame an object moves around the smoke, and in the $(t_0+1)$-th frame the motion direction of the smoke tends toward that object of the previous frame. Let $\vec{v}_1$ denote the first direction vector, the line connecting the cluster center of the $n$-th following group in the $t_0$-th frame differential image with the cluster center of the marked cluster, and let $\theta_1$ be its angle measured from the positive horizontal direction; let $\vec{v}_2$ denote the second direction vector, the line connecting the cluster centers of the $n$-th following group between the $t_0$-th frame and the adjacent $(t_0+1)$-th frame, and let $\theta_2$ be its angle. The ratio $\theta_2/\theta_1$ of the two vector angles measures how close the two directions are: the closer the ratio is to 1, the closer the two vector angles. In the embodiment of the invention a closeness threshold of 0.2 is set; when

$$\left| \frac{\theta_2}{\theta_1} - 1 \right| \le 0.2$$

the clusters in the $n$-th following group are suspicious clusters.
The suspicious clusters found in this case are then framed to obtain suspicious regions in the same way: the coordinate position change track of the suspicious cluster in the 10 frames of differential images is obtained and framed with the maximum circumscribed rectangle, and the framed region is taken as the suspicious region.
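A one-function sketch of this framing operation, assuming the track is a list of cluster-pixel coordinate arrays accumulated over the 10 differential images; cv2.boundingRect returns the circumscribed (bounding) rectangle of the merged points.

```python
import numpy as np
import cv2

def frame_suspicious_region(track_coords):
    """track_coords: list of (N, 2) arrays of (row, col) cluster pixels over the frame sequence.
    Returns (x, y, w, h): the circumscribed rectangle around the whole track."""
    pts = np.vstack(track_coords)[:, ::-1].astype(np.int32)  # (row, col) -> (x, y)
    return cv2.boundingRect(pts)
```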
According to the drifting characteristics of smoke, two conditions are analyzed. In the first, there is no interference from moving objects and the smoke diffuses straight upwards, i.e., the cluster center keeps moving upwards along the vertical axis. In the second, because of interference from a moving object, the smoke drifts toward the nearest surrounding moving object, and the displacement direction of the cluster center between the preceding and following frames is close to the direction toward the cluster at the smallest distance, i.e., close to the direction of the line connecting the centers of the nearest adjacent clusters. By analyzing these two conditions, the suspicious region corresponding to smoke in the warehouse environment is judged more comprehensively.
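The two conditions above can be expressed as a short check. The sketch below assumes the cluster-center trajectory of one following group and the centers of its nearest neighbouring clusters are already available from the matching step; the angle-ratio threshold of 0.2 follows the description, while the names and the degenerate-angle guard are illustrative.

```python
import numpy as np

def is_suspicious(track, nearest_centers, ratio_tol=0.2):
    """track: cluster centers (row, col) of one following group over the frame sequence.
    nearest_centers[t]: center of the cluster nearest to the group in frame t.
    Image rows grow downwards, so a rising cluster has a decreasing row coordinate."""
    dy = [track[t][0] - track[t + 1][0] for t in range(len(track) - 1)]  # > 0 means the center moved up
    if all(d > 0 for d in dy):
        return True  # condition 1: continuous upward diffusion
    t0 = next(t for t, d in enumerate(dy) if d <= 0)  # first frame where the upward motion stops
    v1 = np.asarray(nearest_centers[t0]) - np.asarray(track[t0])  # toward the nearest moving cluster
    v2 = np.asarray(track[t0 + 1]) - np.asarray(track[t0])        # actual displacement of the group
    a1 = np.arctan2(v1[0], v1[1])  # angles relative to the horizontal image axis
    a2 = np.arctan2(v2[0], v2[1])
    if abs(a1) < 1e-6:
        return False  # degenerate angle, skip the ratio test
    return abs(a2 / a1 - 1.0) <= ratio_tol  # condition 2: drift toward the nearest moving object
```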
And S500, converting the suspicious region into a frequency domain space, acquiring energy changes of a plurality of continuous suspicious regions based on the frequency domain space, determining a smoke region according to the energy changes, and performing fire alarm based on the smoke region.
Because flame and smoldering occur where the goods are piled, a small amount of smoke drifts out. The smoke and other moving objects, such as workers, carts and plastic bags, all show up in the differential images; the suspicious regions are selected according to the motion-track characteristics of the cluster coordinate positions in the consecutive frame differential images, and the final smoke region is then determined from the energy change of each framed suspicious region over the 10 frames of differential images.
Besides the visual characteristics of smoke, the influence of smoke on the original image is also reflected in the fact that smoke blurs the high-frequency information of the original image: the more the smoke, the greater the blurring of the high-frequency information. According to this characteristic, the image of each suspicious region in the differential image is converted into the frequency-domain space by Fourier transform, and the energy change of the suspicious-region image of each frame is calculated as

$$E = \bar{B}_t - \bar{B}_{t-\Delta t}, \qquad \bar{B}_t = \frac{1}{N}\sum_{k=1}^{N} p_k^{\,t}$$

where $E$ denotes the energy change; $N$ denotes the maximum number of frequency-domain points; $p_k^{\,t}$ denotes the brightness of the $k$-th frequency-domain point of the suspicious region in the frequency-domain space of the $t$-th frame differential image; $t$ denotes an arbitrary frame of differential image; $\Delta t$ denotes the frame-number interval; $\bar{B}_t$ denotes the average brightness of the suspicious region in the $t$-th frame differential image; the energy change is the difference between this average brightness and that of the corresponding suspicious region in the differential image $\Delta t$ frames earlier along the forward time sequence.

When the energy change $E$ shows that the low-frequency information becomes more and more dominant, i.e., the average brightness of the frequency-domain points keeps decreasing, the suspicious region can be judged to be a smoke region. After the smoke region is identified, the staff are notified in time or a fire alarm is raised, improving the fire safety of the warehouse.
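A sketch of this frequency-domain check under the same assumptions: the same suspicious-region patch is Fourier-transformed in each frame and the average spectral magnitude is compared across a frame interval. The interval value and the decision rule (a persistent drop) are illustrative.

```python
import numpy as np

def spectrum_mean_brightness(region_patch):
    """Average magnitude of the Fourier spectrum of one suspicious-region image patch."""
    spectrum = np.fft.fftshift(np.fft.fft2(region_patch.astype(np.float32)))
    return np.abs(spectrum).mean()

def is_smoke_region(patches, frame_interval=3):
    """patches: the same suspicious region cropped from consecutive differential images.
    Smoke blurs high-frequency detail, so the average spectral magnitude should keep dropping."""
    brightness = [spectrum_mean_brightness(p) for p in patches]
    changes = [brightness[t] - brightness[t - frame_interval]
               for t in range(frame_interval, len(brightness))]
    return len(changes) > 0 and all(e < 0 for e in changes)
```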
In summary, in the embodiment of the present invention, multiple frames of video images of the warehouse area are analyzed: each frame of video image is converted into a corresponding gray image, differential images are obtained from the differences between adjacent frames of gray images, the differential image in which pixel points with nonzero gray values appear for the first time is recorded as the initial differential image, and a plurality of adjacent differential images after the initial differential image are obtained. The differential pixel points in each differential image are clustered to obtain a plurality of clusters, where the differential pixel points are the pixel points with nonzero gray values in the differential image. The average gray value of each cluster in the initial differential image and its adjacent differential images is obtained, a following function is constructed from the average gray values and the distance between every two clusters, and the following clusters and followed clusters among all clusters are obtained from the following function. The cluster center of each cluster in the initial differential image and its adjacent differential images is obtained, the position changes of the following clusters and followed clusters are obtained from the cluster centers, and the suspicious region in each differential image is obtained from the position changes. The suspicious regions are converted into the frequency-domain space, the energy changes of a plurality of consecutive suspicious regions are obtained there, the smoke region is determined from the energy changes, and a fire alarm is raised for the smoke region. This improves the accuracy of smoke-region identification and ensures the fire safety of the warehouse.
It should be noted that: the sequence of the above embodiments of the present invention is only for description, and does not represent the advantages or disadvantages of the embodiments. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit of the present invention are intended to be included therein.

Claims (4)

1. A fire fighting method based on the Internet of things is characterized by comprising the following steps:
acquiring a plurality of frames of video images in a warehouse area, wherein the warehouse area comprises a smoke generation point, and converting each frame of video image into a corresponding gray image;
acquiring a differential image between adjacent frame gray level images, recording a differential image of a pixel point with a nonzero gray level value for the first time as an initial differential image, and acquiring a plurality of adjacent differential images behind the initial differential image; clustering differential pixel points in each differential image to obtain a plurality of clusters, wherein the differential pixel points are pixel points with nonzero gray scale values in the differential images;
acquiring an average gray value of each cluster in the initial differential image and the adjacent multiple differential images thereof, constructing a following function based on the average gray value and the distance between every two clusters, and acquiring following clusters and followed clusters in all clusters based on the following function;
acquiring a cluster center of each cluster in the initial differential image and the adjacent differential images, acquiring position changes of the following cluster and the followed cluster based on each cluster center, and acquiring a suspicious region in each differential image based on the position changes;
converting the suspicious region into a frequency domain space, acquiring energy changes of a plurality of continuous suspicious regions based on the frequency domain space, determining a smoke region according to the energy changes, and performing fire alarm based on the smoke region;
the step of obtaining the suspicious region in each differential image based on the position change comprises:
selecting any followed cluster and a corresponding following cluster as a group of following groups, and when the position change corresponding to the following groups in the initial differential image and the adjacent multi-frame differential image is greater than 0, the clusters in the following groups are suspicious clusters;
when the position change corresponding to the following group in the initial differential image and the multi-frame differential images adjacent to the initial differential image is less than 0, recording the differential image with the position change less than 0 for the first time as a marked image;
acquiring a cluster which is closest to the cluster center in the following group in the marker image, and recording the cluster as a marker cluster, and acquiring a first direction vector of the cluster in the following group and the marker cluster;
acquiring a second direction vector between a cluster in the following group in the marked image and a corresponding cluster in a differential image of a next frame of the marked image;
acquiring a degree of similarity based on the first direction vector and the second direction vector; when the similarity degree is larger than a preset threshold value, the cluster in the following group is a suspicious cluster;
obtaining the track of the position change of all suspicious clusters, and framing the track by using a maximum circumscribed rectangle to obtain a suspicious region;
the step of obtaining the energy changes of a plurality of continuous suspicious regions based on the frequency domain space comprises the following steps:
acquiring a frequency domain area of each suspicious region in a frequency domain space, calculating the average brightness of all frequency domain points in the frequency domain area, and acquiring the energy change of the suspicious region based on the average brightness.
2. A fire fighting method based on internet of things according to claim 1, wherein the step of constructing a following function based on the average gray value and the distance between every two clusters includes:
obtaining the difference value of the average gray values between every two clusters; taking the coordinate of the cluster center of each cluster as the coordinate position of the cluster, and calculating the distance between two clusters according to the coordinate position between every two clusters;
and taking the negative number of the difference value and the negative number of the distance as powers of exponential functions with a natural constant as a base, and obtaining a following function based on the two exponential functions.
3. A fire fighting method based on internet of things according to claim 1, wherein the step of obtaining following clusters and followed clusters in all clusters based on the following function includes:
taking any cluster in the current frame differential image as a target cluster, and calculating a following function of any cluster in the next frame differential image and the target cluster, wherein when the following function is maximum, the cluster in the next frame differential image is a corresponding cluster of the target cluster;
the target cluster is a followed cluster, the corresponding cluster of the target cluster is a following cluster.
4. A fire fighting method based on internet of things as claimed in claim 1, wherein the step of obtaining the position change of the following cluster and the followed cluster based on each cluster center comprises:
respectively acquiring the vertical coordinates corresponding to the followed cluster and the following cluster, and calculating the vertical coordinate difference value between the followed cluster and the following cluster; the difference value of the vertical coordinates is position change.
CN202211195077.XA 2022-09-29 2022-09-29 Fire fighting method based on Internet of things Active CN115311604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211195077.XA CN115311604B (en) 2022-09-29 2022-09-29 Fire fighting method based on Internet of things

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211195077.XA CN115311604B (en) 2022-09-29 2022-09-29 Fire fighting method based on Internet of things

Publications (2)

Publication Number Publication Date
CN115311604A CN115311604A (en) 2022-11-08
CN115311604B true CN115311604B (en) 2023-04-18

Family

ID=83867554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211195077.XA Active CN115311604B (en) 2022-09-29 2022-09-29 Fire fighting method based on Internet of things

Country Status (1)

Country Link
CN (1) CN115311604B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363021B (en) * 2023-06-02 2023-07-28 中国人民解放军总医院第八医学中心 Intelligent collection system for nursing and evaluating wound patients
CN116912241B (en) * 2023-09-12 2023-12-12 深圳市艾为创科技有限公司 CNC machine adjustment optimization method and system based on machine learning
CN117541618B (en) * 2023-10-07 2024-08-16 建研防火科技有限公司 Fire control comprehensive treatment system based on internet of things

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005033527A (en) * 2003-07-14 2005-02-03 Ricoh Co Ltd Image processor, image processing method, program and recording medium
CN108520528B (en) * 2018-03-29 2021-05-11 中山大学新华学院 Mobile vehicle tracking method based on improved difference threshold and displacement matching model
CN112215182B (en) * 2020-10-21 2023-12-08 中国人民解放军火箭军工程大学 Smoke identification method suitable for forest fire

Also Published As

Publication number Publication date
CN115311604A (en) 2022-11-08

Similar Documents

Publication Publication Date Title
CN115311604B (en) Fire fighting method based on Internet of things
KR101168760B1 (en) Flame detecting method and device
US7868772B2 (en) Flame detecting method and device
US7859419B2 (en) Smoke detecting method and device
Celik Fast and efficient method for fire detection using image processing
KR101353952B1 (en) Method for detecting wildfire smoke using spatiotemporal bag-of-features of smoke and random forest
JP4705090B2 (en) Smoke sensing device and method
KR101822924B1 (en) Image based system, method, and program for detecting fire
US20070019071A1 (en) Smoke detection
KR100659781B1 (en) Smoke Detecting Method and System using CCD Image
CN111626188B (en) Indoor uncontrollable open fire monitoring method and system
CN112699801B (en) Fire identification method and system based on video image
CN110322659A (en) A kind of smog detection method
CN107085714A (en) A kind of forest fire detection method based on video
CN107437318B (en) Visible light intelligent recognition algorithm
CN110874592A (en) Forest fire smoke image detection method based on total bounded variation
JP6240116B2 (en) Object detection device
CN108363992B (en) Fire early warning method for monitoring video image smoke based on machine learning
CN109377713A (en) A kind of fire alarm method and system
EP2000998B1 (en) Flame detecting method and device
CN114120171A (en) Fire smoke detection method, device and equipment based on video frame and storage medium
EP2000952A2 (en) Smoke detecting method and device
CN115601919A (en) Fire alarm method based on Internet of things equipment and video image comprehensive identification
CN108830161B (en) Smog identification method based on video stream data
JP2010015469A (en) Still area detection method, and apparatus, program and recording medium therefor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant