CN114999099B - Big data based collaborative early warning method, device, equipment and storage medium - Google Patents


Info

Publication number: CN114999099B
Authority: CN (China)
Prior art keywords: area, warned, early, warning, determining
Legal status: Active
Application number: CN202210838710.6A
Other languages: Chinese (zh)
Other versions: CN114999099A (en)
Inventor: 叶礼鸣
Current Assignee: China University of Geosciences
Original Assignee: China University of Geosciences
Application filed by China University of Geosciences
Priority to CN202210838710.6A
Publication of CN114999099A
Application granted
Publication of CN114999099B

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00: Fire alarms; Alarms responsive to explosion
    • G08B17/10: Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0631: Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311: Scheduling, planning or task assignment for a person or group
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00: Fire alarms; Alarms responsive to explosion
    • G08B17/12: Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125: Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions, by using a video camera to detect fire or smoke
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/002: Generating a prealarm to the central station
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01: Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium
    • G08B25/08: Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium using communication transmission lines

Abstract

The invention belongs to the field of computer technology and discloses a big-data-based collaborative early warning method, apparatus, device, and storage medium. The method comprises: when an early warning message is received, acquiring the positioning information of its reporter; determining an area to be early-warned according to the positioning information, and acquiring the surveillance video data collected by each monitoring device in that area; identifying the surveillance video data with a preset fire recognition model to determine whether open fire exists in the area to be early-warned; if open fire exists, acquiring the alert level corresponding to the area to be early-warned; and dispatching handlers according to the alert level to deal with the alert in the area to be early-warned. Because the surveillance video in the area to be early-warned is retrieved for recognition and confirmation whenever an early warning message is received, and handlers are dispatched only once an alert is confirmed, the waste of police resources caused by false alarms is avoided.

Description

Big data based collaborative early warning method, device, equipment and storage medium
Technical Field
The invention relates to the field of computer technology, and in particular to a big-data-based collaborative early warning method, apparatus, device, and storage medium.
Background
At present, police resources are extremely limited, yet false alarms occur frequently during alert management. To ensure that no genuine alarm is missed, personnel must be dispatched to check every early warning, which wastes a large amount of manpower and further strains already limited police resources; in extreme cases, a genuinely occurring alert may not receive enough responders. How to avoid wasting police resources is therefore a pressing problem.
The above is provided only to aid understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The main purpose of the present invention is to provide a big-data-based collaborative early warning method, apparatus, device, and storage medium, aiming to solve the technical problem of how to avoid the waste of police resources.
In order to achieve the above object, the present invention provides a collaborative early warning method based on big data, which comprises the following steps:
when an early warning message is received, acquiring the positioning information of a reporter of the early warning message;
determining an area to be early-warned according to the positioning information, and acquiring monitoring video data acquired by each monitoring device in the area to be early-warned;
identifying the surveillance video data with a preset fire recognition model to determine whether open fire exists in the area to be early-warned, wherein the preset fire recognition model has been trained in advance on a model training set constructed from data collected through big data;
if open fire exists in the area to be early-warned, acquiring the alert level corresponding to the area to be early-warned;
and dispatching handlers according to the alert level to deal with the alert in the area to be early-warned.
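The five steps above form a pipeline: locate, retrieve, recognise, grade, dispatch. A minimal sketch follows; every callable here is a hypothetical stand-in for a component the patent only describes abstractly (positioning service, video retrieval, fire recognition model, alert grading, dispatcher), and none of the names come from the patent itself:

```python
def collaborative_early_warning(message, locate, get_videos,
                                detect_open_fire, get_alert_level,
                                dispatch):
    """High-level sketch of the claimed method; all callables are
    hypothetical stand-ins for the components the patent describes."""
    position = locate(message.reporter)        # step 1: reporter positioning
    area, videos = get_videos(position)        # step 2: area + surveillance video
    if not detect_open_fire(videos):           # step 3: fire recognition model
        return None                            # no confirmed alert, no dispatch
    level = get_alert_level(area)              # step 4: alert level
    return dispatch(level, area)               # step 5: dispatch handlers
```

The point of the structure is the early return: handlers are dispatched only after the recognition step confirms the alert, which is how the method avoids wasting police resources on false alarms.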
Optionally, the step of determining an area to be early-warned according to the positioning information and acquiring monitoring video data acquired by each monitoring device in the area to be early-warned includes:
determining a fuzzy positioning area according to a preset positioning error and the positioning information;
determining a center of an early warning area according to the fuzzy positioning area, and determining an area to be early warned according to the center of the early warning area and a preset area threshold;
acquiring equipment installation positions of each monitoring equipment;
determining target monitoring equipment according to the area to be early-warned and the equipment installation position;
and reading the monitoring video data collected by the target monitoring equipment.
Optionally, the step of determining a center of an early warning area according to the fuzzy positioning area, and determining an area to be early warned according to the center of the early warning area and a preset area threshold includes:
acquiring a central point of the fuzzy positioning area, and taking the central point as a center of an early warning area;
determining a plurality of region vertexes according to the region center and a preset region threshold;
performing semantic recognition on the early warning message, and determining whether a direction indication statement exists in the early warning message;
if the direction indication statement exists, determining a vertex adjustment direction according to the direction indication statement;
dividing the multiple regional vertexes into vertexes to be adjusted and invariant vertexes according to the vertex adjustment direction;
adjusting the position of the vertex to be adjusted according to a preset adjustment threshold;
and when the adjustment is completed, carrying out region division according to the adjusted vertex to be adjusted and the unchanged vertex, and determining a region to be early-warned.
Optionally, after the step of identifying the surveillance video data with the preset fire recognition model and determining whether open fire exists in the area to be early-warned, the method further includes:
if no open fire exists in the area to be early-warned, identifying the monitoring video data to obtain a smoke detection result and personnel movement information;
judging whether the area to be early-warned has warning conditions or not according to the smoke detection result and the personnel movement information;
if the alarm condition exists in the area to be pre-warned, acquiring a default scheduling rule;
and scheduling personnel according to the default scheduling rule so as to process the warning condition of the area to be pre-warned.
Optionally, the step of determining whether an alert exists in the area to be early-warned according to the smoke detection result and the personnel movement information includes:
if the smoke detection result indicates that a large amount of smoke exists, determining that an alert exists in the area to be early-warned;
if the smoke detection result indicates that little or no smoke exists, determining personnel movement direction vectors according to the personnel movement information and the installation angles of the monitoring devices;
determining a movement convergence point based on the personnel movement direction vectors;
and if the number of people converging on the movement convergence point is greater than or equal to a preset gathering threshold, determining that an alert exists in the area to be early-warned.
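The smoke-then-convergence decision above can be sketched as a simple rule. This is an illustration only: the smoke categories, the `has_alert` name, and the default gathering threshold are assumptions, and the movement convergence points are assumed to have already been computed from the direction vectors:

```python
def has_alert(smoke_level, convergence_counts, gather_threshold=5):
    """Decide whether the area to be early-warned has an alert.

    smoke_level: "heavy", "light", or "none" (assumed categories).
    convergence_counts: maps each detected movement convergence point
    to the number of people converging on it.
    """
    if smoke_level == "heavy":
        return True                      # a large amount of smoke alone suffices
    if not convergence_counts:
        return False                     # no smoke, no convergence: no alert
    # little or no smoke: fall back to the crowd-convergence criterion
    return max(convergence_counts.values()) >= gather_threshold
```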
Optionally, the step of dispatching handlers according to the alert level to deal with the alert in the area to be early-warned includes:
determining the number of handlers required according to the alert level;
determining a target alert-handling unit according to the area to be early-warned and the unit address of each alert-handling unit, and acquiring the number of idle handlers in the target alert-handling unit;
if the number of idle handlers is greater than or equal to the number required, acquiring the positioning information of each idle handler in the target alert-handling unit;
and generating dispatch commands according to that positioning information and the position of the area to be early-warned, and sending the dispatch commands to the idle handlers' early warning terminals.
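A minimal sketch of this dispatch step follows. The nearest-unit selection rule, the planar squared-distance metric, and every name here are illustrative assumptions; the patent specifies only that the target unit is chosen from the unit addresses and the area to be early-warned:

```python
def build_dispatch(area_pos, units, required_count):
    """Pick the alert-handling unit nearest the area to be early-warned
    and, if it has enough idle handlers, generate one dispatch command
    per required handler.

    units: maps unit name -> (unit_position, [idle handler positions]),
    positions as (x, y) tuples on a local planar grid (assumption).
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # target unit: the one whose address is closest to the area
    name, (_, idle) = min(units.items(),
                          key=lambda kv: dist2(kv[1][0], area_pos))
    if len(idle) < required_count:
        return None  # not enough idle handlers; fall back to another unit (not shown)
    return [{"unit": name, "handler_pos": p, "target": area_pos}
            for p in idle[:required_count]]
```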
Optionally, the step of acquiring the alert level corresponding to the area to be early-warned if open fire exists in that area includes:
if open fire exists in the area to be early-warned, acquiring the recognition result of the preset fire recognition model;
determining the open-fire video frames and the open-fire position information according to the recognition result;
extracting an open-fire image from the open-fire video frames according to the open-fire position information;
comparing the open-fire image with the fire image of each alert level in a preset fire level library to obtain the image similarity for each alert level;
and determining the alert level of the area to be early-warned according to the image similarities.
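The level-grading comparison above reduces to an argmax over per-level similarities. The sketch below assumes the similarity function is supplied by the caller (e.g. a histogram or feature-space comparison); the function names and the pick-the-most-similar rule are illustrative, since the patent says only that the level is determined "according to the image similarity":

```python
def alert_level_by_similarity(flame_image, level_library, similarity):
    """Return the alert level whose reference fire image is most
    similar to the extracted open-fire image.

    level_library: maps alert level -> reference fire image.
    similarity: callable (img_a, img_b) -> score in [0, 1].
    """
    return max(level_library,
               key=lambda lvl: similarity(flame_image, level_library[lvl]))
```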
In addition, to achieve the above object, the present invention further provides a big-data-based collaborative early warning apparatus, which includes the following modules:
a message receiving module, configured to acquire the positioning information of the reporter of an early warning message when the early warning message is received;
a video acquisition module, configured to determine an area to be early-warned according to the positioning information and acquire the surveillance video data collected by each monitoring device in the area to be early-warned;
a data recognition module, configured to identify the surveillance video data with a preset fire recognition model and determine whether open fire exists in the area to be early-warned, wherein the preset fire recognition model has been trained in advance on a model training set constructed from data collected through big data;
a level determining module, configured to acquire the alert level corresponding to the area to be early-warned if open fire exists in the area to be early-warned;
and a personnel dispatching module, configured to dispatch handlers according to the alert level to deal with the alert in the area to be early-warned.
In addition, to achieve the above object, the present invention further provides a big-data-based collaborative early warning device, which includes a processor, a memory, and a big-data-based collaborative early warning program stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of the big-data-based collaborative early warning method described above.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium storing a big-data-based collaborative early warning program which, when executed, implements the steps of the big-data-based collaborative early warning method described above.
The present invention acquires the positioning information of the reporter of an early warning message when the message is received; determines an area to be early-warned according to the positioning information and acquires the surveillance video data collected by each monitoring device in that area; identifies the surveillance video data with a preset fire recognition model to determine whether open fire exists in the area to be early-warned; if open fire exists, acquires the corresponding alert level; and dispatches handlers according to the alert level to deal with the alert. Because the surveillance video in the area to be early-warned is retrieved for recognition and confirmation whenever an early warning message is received, and handlers are dispatched only once an alert is confirmed, the waste of police resources caused by false alarms is avoided.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a collaborative early warning method based on big data according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of a collaborative early warning method based on big data according to the present invention;
fig. 4 is a block diagram of a first embodiment of a big data-based collaborative early warning apparatus according to the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a big data-based collaborative early warning device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the electronic device may include: a processor 1001, such as a Central Processing Unit (CPU); a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 enables communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (WI-FI) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM) such as disk storage, and may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a big data-based collaborative early warning program.
In the electronic device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The processor 1001 calls the big-data-based collaborative early warning program stored in the memory 1005 and executes the big-data-based collaborative early warning method provided by the embodiments of the present invention.
An embodiment of the present invention provides a collaborative early warning method based on big data, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of a collaborative early warning method based on big data according to the present invention.
In this embodiment, the collaborative early warning method based on big data includes the following steps:
step S10: and when the early warning message is received, acquiring the positioning information of the reporter of the early warning message.
It should be noted that the execution subject of this embodiment may be the big-data-based collaborative early warning device, which may be an electronic device such as a personal computer or a server, or any other device capable of implementing the same or similar functions.
It should be noted that the early warning message may be a message sent to the early warning device by a user who has discovered an alert, for example, reported through early warning software installed on the user's mobile terminal. The early warning message may also be forwarded to the early warning device by another device: for example, when a user dials an emergency call, the call centre may query the user for the relevant information with a voice robot following a preset template, record the call, and send the recorded audio to the early warning device as the early warning message. The reporter is the user who reports the early warning message.
In actual use, the positioning information can be obtained in different ways depending on the source of the early warning message. When the message is reported through early warning software installed on a mobile terminal, the software can call the terminal's GPS service to obtain the reporter's positioning information. When the message is forwarded by another device, the positioning information can be obtained from that device: for example, when a user dials an emergency call, the call centre can locate the terminal used to place the call, and the early warning device can then obtain the reporter's positioning information from the call centre.
Step S20: and determining an area to be early-warned according to the positioning information, and acquiring monitoring video data acquired by each monitoring device in the area to be early-warned.
It should be noted that a user who reports an alert is generally located near the area where the alert occurs, so the area to be early-warned may be determined from the positioning information. For example, an alert survey radius is preset, a circular area is constructed with the reporter's positioning information as the centre and the survey radius as the radius, and that circular area is taken as the area to be early-warned.
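The circular-area construction above amounts to a distance test against the survey radius. The sketch below works on a local planar grid in metres; the flat-plane simplification (rather than geodesic distance on GPS coordinates) and the function name are assumptions, not part of the patent:

```python
import math

def in_warning_area(reporter_pos, point, survey_radius_m):
    """True if `point` lies inside the circular area to be early-warned
    centred on the reporter's position.

    Positions are (x, y) tuples in metres on a local planar grid
    (illustrative simplification of GPS positioning).
    """
    dx = point[0] - reporter_pos[0]
    dy = point[1] - reporter_pos[1]
    return math.hypot(dx, dy) <= survey_radius_m
```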
In actual use, acquiring the surveillance video data collected by each monitoring device in the area to be early-warned may consist of identifying all monitoring devices whose installation positions fall within the area and then retrieving the surveillance video data they collected.
Therefore, when acquiring the surveillance video data, the device identifiers of all monitoring devices installed within the area to be early-warned can be obtained first, and the corresponding surveillance video data can then be looked up at the monitoring centre by device identifier.
In a specific implementation, the generation time of the early warning message can be obtained, a video interception time period determined from the generation time and a preset video interception duration, and the corresponding surveillance video data then looked up at the monitoring centre by the interception time period and the device identifiers of the monitoring devices in the area to be early-warned, which reduces the amount of data the early warning device must process and speeds up processing. If the early warning message also contains a description of when the alert occurred, the generation time can be adjusted according to that description to determine the starting moment of the time period, and the interception period then determined from that starting moment and the preset interception duration. For example, if the preset interception duration is 10 minutes, the generation time is 9:00, and the description is "five minutes ago", the starting moment is 8:55 and the video interception period is 8:55 to 9:05.
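The interception-window computation can be sketched directly. The function name and the convention that the window extends forward from the adjusted starting moment by the preset duration are assumptions drawn from the worked example above:

```python
from datetime import datetime, timedelta

def capture_window(generated_at, capture_minutes=10, minutes_ago=0):
    """Compute the video interception time period.

    generated_at: when the early warning message was generated.
    minutes_ago: offset recovered from a description such as
    "five minutes ago" (0 when no description exists).
    """
    start = generated_at - timedelta(minutes=minutes_ago)
    end = start + timedelta(minutes=capture_minutes)
    return start, end
```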
Further, in order to ensure that the collected monitoring video data includes video data of a real alarm occurrence area as much as possible, the step S20 in this embodiment may include:
determining a fuzzy positioning area according to a preset positioning error and the positioning information;
determining a center of an early warning area according to the fuzzy positioning area, and determining an area to be early warned according to the center of the early warning area and a preset area threshold;
acquiring equipment installation positions of each monitoring equipment;
determining target monitoring equipment according to the area to be early-warned and the equipment installation position;
and reading the monitoring video data collected by the target monitoring equipment.
It should be noted that the preset positioning error may be an error parameter set in advance by an administrator of the early warning device, and different preset positioning errors may be set for the different ways of obtaining the positioning information. Determining the fuzzy positioning area according to the preset positioning error and the positioning information may consist of determining, from the positioning information and the preset positioning error, the range of positions in which the reporter may actually be located; that range is the fuzzy positioning area. The preset area threshold can likewise be set in advance by an administrator of the early warning device according to actual needs.
In actual use, the centre point of the fuzzy positioning area can be taken directly as the centre of the early warning area. Determining the area to be early-warned according to that centre and the preset area threshold may consist of dividing a region around the centre using the threshold; for example, extending each boundary point of the early-warning-area centre region outward by the preset area threshold to obtain a set of adjusted boundary points, and then dividing the region according to those adjusted boundary points to obtain the area to be early-warned.
In a specific implementation, the target monitoring devices are determined by comparing each device installation position with the area to be early-warned and taking the monitoring devices whose installation positions fall within the area as the target monitoring devices. The surveillance video data collected by the target monitoring devices can then be read from the monitoring centre.
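The target-device selection above is a containment filter over installation positions. The sketch below simplifies the area to be early-warned to a circle around its centre; that simplification, the planar coordinates, and all names are illustrative assumptions:

```python
import math

def select_target_devices(devices, area_center, radius_m):
    """Return identifiers of monitoring devices whose installation
    position lies inside the (here circular) area to be early-warned.

    devices: maps device identifier -> (x, y) installation position
    in metres on a local planar grid (assumption).
    """
    return [
        dev_id
        for dev_id, (x, y) in devices.items()
        if math.hypot(x - area_center[0], y - area_center[1]) <= radius_m
    ]
```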
Further, since there may be some distance between the reporter's position and the position where the alert actually occurs, dividing the region directly with a fixed threshold may leave the actual alert position outside the area to be early-warned. To avoid this, the step of determining the centre of the early warning area according to the fuzzy positioning area and determining the area to be early-warned according to that centre and the preset area threshold may include:
acquiring a central point of the fuzzy positioning area, and taking the central point as a center of an early warning area;
determining a plurality of region vertexes according to the region center and a preset region threshold;
performing semantic recognition on the early warning message, and determining whether a direction indication statement exists in the early warning message;
if a direction indication statement exists, determining a vertex adjustment direction according to the direction indication statement;
dividing the multiple regional vertexes into vertexes to be adjusted and invariant vertexes according to the vertex adjusting direction;
adjusting the position of the vertex to be adjusted according to a preset adjustment threshold;
and when the adjustment is finished, carrying out region division according to the adjusted vertex to be adjusted and the unchanged vertex, and determining a region to be early-warned.
It should be noted that the preset adjustment threshold may be preset by a manager of the early warning device according to actual needs, for example: the preset adjustment threshold is set to 30 meters. According to the shape of the fuzzy positioning area, there may be different center points of the fuzzy positioning area, for example: if the fuzzy positioning area is a circular area, the center of the circle of the circular area can be used as the central point of the fuzzy positioning area; if the fuzzy positioning area is a rectangular area, the center of gravity of the rectangular area may be used as the center point of the fuzzy positioning area. Determining the plurality of area vertexes according to the area center and the preset area threshold may be to extend the area center to a plurality of different directions by a distance of the preset area threshold, thereby obtaining the plurality of area vertexes, wherein the plurality of different directions may be preset by a manager of the early warning device according to actual needs, for example: the north direction is taken as the reference, and the angle is taken in one direction every 3 degrees.
In a specific implementation, the direction indication statement may be a descriptive statement used by the reporter to describe the location of the alert, for example: "in the northeast direction". If a direction indication statement exists in the early warning message, the statement can be recognized and the vertex adjustment direction determined from the recognition result. For example, if the direction indication statement is "northeast direction", all directions between due north and due east can be taken as the vertex adjustment direction. Dividing the plurality of region vertices into vertices to be adjusted and invariant vertices according to the vertex adjustment direction may be done by classifying the region vertices lying in the vertex adjustment direction as vertices to be adjusted and the region vertices not lying in the vertex adjustment direction as invariant vertices. The position of a vertex to be adjusted may be adjusted by extending it outward by the preset adjustment threshold along its own direction.
When the adjustment is completed, region division is performed according to the adjusted vertices to be adjusted and the invariant vertices: a plurality of circular areas can be drawn with the area center as the circle center and the distance between each vertex to be adjusted or invariant vertex and the area center as the radius; the intersection of these circular areas is then taken as the area to be early-warned.
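The vertex generation and direction-based adjustment described above can be sketched as follows. This is a minimal illustration assuming planar coordinates in meters, a 3-degree angular step, and bearings measured clockwise from due north; the function names are illustrative, not part of the patent.

```python
import math

def region_vertices(center, radius, step_deg=3):
    """Extend the area center outward by the preset area threshold (radius)
    in directions spaced step_deg apart, starting from due north."""
    cx, cy = center
    verts = []
    for bearing in range(0, 360, step_deg):
        a = math.radians(bearing)
        # due north is +y; bearings advance clockwise toward due east (+x)
        verts.append((cx + radius * math.sin(a), cy + radius * math.cos(a), bearing))
    return verts

def adjust_vertices(verts, direction_range, extra):
    """Push vertices whose bearing falls inside direction_range outward by
    the preset adjustment threshold (extra); leave the rest unchanged."""
    lo, hi = direction_range
    out = []
    for x, y, bearing in verts:
        if lo <= bearing <= hi:  # vertex lies in the vertex adjustment direction
            a = math.radians(bearing)
            # extend along the vertex's own direction by the adjustment threshold
            out.append((x + extra * math.sin(a), y + extra * math.cos(a), bearing))
        else:  # invariant vertex
            out.append((x, y, bearing))
    return out
```

With a 30-meter adjustment threshold and the "northeast" range (0 to 90 degrees), only the vertices between due north and due east move outward.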
Step S30: and identifying the monitoring video data through a preset fire identification model to determine whether open fire exists in the area to be early-warned, wherein the preset fire identification model is a model which is trained by using a model training set in advance, and the model training set is constructed through data acquired by big data.
It should be noted that the preset fire recognition model may be a model trained in advance on a model training set; it can recognize whether an open fire appears in the video data and output information such as the time at which the open fire occurs and the region of the video image in which it appears. The model training set may be a fire sample set obtained from a professional source, for example the Corsican fire dataset; it may, of course, also be constructed from data collected through big data, for example: collecting a large number of fire scene pictures through big data, labeling them to generate a large number of model training samples, and then aggregating these samples into a model training set.
In practical use, because manual labeling is labor-intensive and the number of samples in a professional fire sample set may be small, the professional fire sample set can be expanded with model training samples generated through big data collection and manual labeling. The fire sample set can, of course, also be expanded through data augmentation, for example: image translation, image scaling, Mosaic, MixUp, horizontal flipping, and the like, thereby enriching the number of samples in the fire sample set.
In a specific implementation, the preset fire recognition model can be a combined YOLOv5 + U-Net model. When training the preset fire recognition model, model training can be performed through a model training framework (such as the PyTorch framework), and the specific training process can be as follows:
the pictures in the sample set are converted into PNG images and the corresponding labeling box information is converted into TXT files; the learning rate is set to 0.01, the batch size to 6, the number of epochs to 200, and the input image size to 416 × 416 or 1024 × 1024; the YOLOv5 model is then trained.
Then the learning rate is set to 0.001, the Adam optimizer is selected as the model optimizer, and the batch size is set to 4. The pictures in the sample set are converted into a preset image format (such as PNG or JPEG), an image binary mask is extracted, and the image and the binary mask are taken as the input of the U-Net model. The loss value (LS) is calculated using a formula obtained by combining the Dice Coefficient (DC) loss and the Binary Cross Entropy (BCE), specifically:
LS = BCE + (1 − DC)
wherein BCE can be calculated by the following formula:
L = −(1/N) · Σ_i [ y_i · log(p_i) + (1 − y_i) · log(1 − p_i) ]
in the formula, L is the calculated BCE value and the base of the log is e; y_i denotes the label of sample i, where the positive class (fire present) is 1 and the negative class (fire absent) is 0 (note: BCE can also learn soft labels between 0 and 1), with one-hot encoding used; p_i denotes the probability that sample i is predicted to be positive, i.e. the value after sigmoid activation; N denotes the number of samples.
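For concreteness, the BCE and Dice terms can be computed as below. Note that the exact combination formula is an image placeholder in the source, so LS = BCE + (1 − DC) is only an assumed, commonly used form, and the helper names are illustrative.

```python
import math

def bce(y, p, eps=1e-7):
    """Binary cross entropy with natural log, averaged over N samples."""
    n = len(y)
    return -sum(yi * math.log(max(pi, eps)) + (1 - yi) * math.log(max(1 - pi, eps))
                for yi, pi in zip(y, p)) / n

def dice(y, p, thresh=0.5, eps=1e-7):
    """Dice coefficient between binarized predictions and binary labels."""
    pred = [1 if pi >= thresh else 0 for pi in p]
    tp = sum(1 for yi, qi in zip(y, pred) if yi == 1 and qi == 1)
    fp = sum(1 for yi, qi in zip(y, pred) if yi == 0 and qi == 1)
    fn = sum(1 for yi, qi in zip(y, pred) if yi == 1 and qi == 0)
    return (2 * tp) / (2 * tp + fp + fn + eps)

def combined_loss(y, p):
    """LS = BCE + (1 - DC): an assumed combination of the two terms."""
    return bce(y, p) + (1 - dice(y, p))
```

In a real training loop these scalars would be computed on tensors by the framework's built-in losses; the plain-Python version only makes the arithmetic explicit.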
When a training period ends during model training, the Recall, the mean average precision (MAP), the Dice Coefficient (DC) and the Accuracy can each be calculated. When all four parameters exceed their corresponding thresholds (the threshold of each parameter can be set according to actual needs, for example: 0.8 for Recall, 0.7 for MAP, 0.9 for DC and 0.95 for Accuracy), model training is judged to have converged; at this point the trained model can be converted into TensorRT format and then into an engine format for storage, facilitating subsequent deployment and use;
if any of the calculated parameters is less than or equal to its corresponding threshold, training continues in the next training period.
Wherein, the calculation of the recall rate can use the formula:
Recall = TP / (TP + FN)
wherein TP is the number of cases correctly determined to contain a fire; FN is the number of cases determined to contain no fire that actually contain one.
The mean average precision may be the mean of the AP values at different levels, and the AP at each level may be calculated using the formula:
AP = Σ_m (r_m − r_(m−1)) · p_m
wherein r_m and p_m are the recall and precision at the m-th threshold level, with precision = TP / (TP + FP); TP is the number of cases correctly determined to contain a fire; FP is the number of cases determined to contain a fire that actually contain none.
The Dice Coefficient (DC) may be calculated using the following formula:
DC = 2TP / (2TP + FP + FN)
wherein TP, FP and FN are the same as described above.
The Accuracy (Accuracy) can be calculated using the following formula:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
in the formula, TP, FP and FN are as defined above, and TN is the number of cases correctly determined to contain no fire.
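The four convergence metrics above reduce to simple counting formulas over TP, FP, FN and TN; a sketch (function names illustrative) is:

```python
def recall(tp, fn):
    # fraction of actual fires that were detected
    return tp / (tp + fn)

def precision(tp, fp):
    # fraction of fire detections that were correct
    return tp / (tp + fp)

def dice_coefficient(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def converged(metrics, thresholds):
    """Training is judged converged only when every metric strictly
    exceeds its corresponding threshold, as the text describes."""
    return all(metrics[k] > thresholds[k] for k in thresholds)
```

For example, with the illustrative thresholds from the text (0.8, 0.7, 0.9, 0.95), a period scoring 0.9 / 0.75 / 0.92 / 0.96 would be judged converged.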
Step S40: and if open fire exists in the area to be early-warned, acquiring the warning situation level corresponding to the area to be early-warned.
It can be understood that if an open fire exists in the area to be early-warned, an alert situation has actually occurred there and personnel must be arranged to handle it. At this point the alert level of the alert situation in the area to be early-warned can be acquired, so as to ensure reasonable personnel scheduling and make full use of response resources.
In a specific implementation, in order to ensure that the alert level can be determined quickly, step S40 in this embodiment may include:
if open fire exists in the area to be pre-warned, acquiring a recognition result of the preset fire recognition model;
determining an open fire video frame and open fire position information according to the identification result;
extracting an open fire image from the open fire video frame according to the open fire position information;
comparing the open fire image with fire images corresponding to all alarm levels in a preset fire level library to obtain image similarity corresponding to all alarm levels;
and determining the warning situation level corresponding to the area to be pre-warned according to the image similarity.
It should be noted that the preset fire level library may be a pre-constructed database storing a large number of fire images with different alert levels, where the alert level of each fire image may be defined in advance by an administrator of the early warning device.
In actual use, the recognition result of the preset fire recognition model includes the time at which the open fire occurs and the region of the video image in which it appears. Determining the open fire video frame and the open fire position information from the recognition result can be done by reading the occurrence time and region from the result, extracting the open fire video frame from the monitoring video data according to the occurrence time, and determining the open fire position information from the region vertices of the region in which the open fire appears. Extracting the open fire image from the open fire video frame according to the open fire position information may be done by constructing an image capture frame from the position coordinates of the region vertices contained in the open fire position information, and then cropping the open fire video frame according to the capture frame, thereby obtaining the open fire image.
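Building the capture frame from the region vertices and cropping the open-fire image can be sketched as follows; a toy nested-list image stands in for a real video frame, and the function names are illustrative.

```python
def capture_frame(vertices):
    """Axis-aligned bounding box enclosing all region vertices
    (pixel coordinates) of the open-fire area."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return min(xs), min(ys), max(xs), max(ys)

def crop(frame, box):
    """Cut the open-fire image out of a video frame, where the frame is
    a list of pixel rows and the box bounds are inclusive."""
    x0, y0, x1, y1 = box
    return [row[x0:x1 + 1] for row in frame[y0:y1 + 1]]
```

With a real decoded frame (e.g. a NumPy array) the same box would be applied as an array slice instead of list slicing.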
In specific implementation, the comparison between the open fire image and the fire image corresponding to each alarm level in the preset fire level library can be determined through a pre-trained similarity detection model, and the image similarity between the open fire image and each fire image is determined, so that the image similarity corresponding to each alarm level is obtained. The similarity detection model may be a pre-trained neural network model, such as: a CNN model.
In actual use, in order to ensure the accuracy of alert level determination, determining the alert level corresponding to the area to be early-warned according to the image similarity may proceed as follows: obtain the difference between the maximum and minimum values among the image similarities corresponding to the alert levels; if the difference is greater than a preset difference threshold, take the alert level corresponding to the maximum value as the alert level of the area to be early-warned; if the difference is less than or equal to the preset difference threshold, the alert level cannot be determined, in which case a manual confirmation task can be generated from the open fire image and pushed to the corresponding responsible person for manual confirmation.
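The disambiguation rule just described can be sketched as a small function; the level names and threshold in the example are hypothetical.

```python
def alert_level(similarities, diff_threshold):
    """similarities: {level: image similarity in [0, 1]}. Return the level
    with the highest similarity only when it stands out clearly, i.e. the
    max/min spread exceeds the preset difference threshold; otherwise
    return None to signal that a manual confirmation task is needed."""
    values = similarities.values()
    if max(values) - min(values) > diff_threshold:
        return max(similarities, key=similarities.get)
    return None  # ambiguous: push a manual-confirmation task instead
```

A `None` result would then trigger generating the manual confirmation task from the open fire image.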
Step S50: and scheduling processing personnel according to the alarm level so as to process the alarm condition of the area to be pre-warned.
It should be noted that scheduling personnel according to the alert level to handle the alert in the area to be early-warned may be done by estimating the number of personnel required according to the alert level, and then dispatching that number of alert handlers to the area to be early-warned.
Further, in order to ensure that the alert can be processed quickly, step S50 in this embodiment may include:
determining the number of the warning processing persons according to the warning level;
determining a target warning condition processing unit according to the area to be warned and the unit address of each warning condition processing unit, and acquiring the number of idle processing personnel corresponding to the target warning condition processing unit;
if the number of the idle processing personnel is more than or equal to the number of the warning processing personnel, acquiring the positioning information of each idle processing personnel in the target warning processing unit;
and generating a scheduling command according to the positioning information and the position information of the area to be pre-warned, and sending the scheduling command to the pre-warning terminal of the idle processing personnel.
It should be noted that determining the number of alert handlers according to the alert level may be done by looking up the corresponding number in a preset level-to-headcount mapping table, where the table contains a mapping between alert levels and numbers of alert handlers that may be preset by a manager of the early warning device. An idle handler may be an alert handler not currently assigned a task. The early warning terminal may be a terminal device used by an alert handler, for example a walkie-talkie.
In a specific implementation, determining the target alert processing unit according to the area to be early-warned and the unit addresses of the alert processing units may be done by finding the alert processing unit closest to the area to be early-warned and taking it as the target unit. Obtaining the number of idle handlers corresponding to the target alert processing unit may be done by looking up, in the personnel management system of the target unit, the total number of handlers not currently executing a task.
It can be understood that if the number of idle handlers is greater than or equal to the number of alert handlers required, the target alert processing unit can handle the alert occurring in the area to be early-warned, so personnel can be scheduled directly from it. However, a currently idle handler may have just finished handling another alert or may be on outside duty and not at the unit. To ensure that each handler knows how to reach the area to be early-warned, the positioning information of each idle handler can be acquired, a scheduling command generated from that positioning information and the position information of the area to be early-warned, and the command sent to each idle handler's early warning terminal.
In practical use, the step of obtaining the positioning information of each idle processing person in the target warning situation processing unit may be to determine the position information of each idle processing person through a GPS positioning service in an early warning terminal used by the idle processing person. The step of generating the scheduling command according to the positioning information and the position information of the area to be pre-warned may be to generate mobile navigation data according to the positioning information and the position information of the area to be pre-warned, and construct the scheduling command according to the mobile navigation data.
In a specific implementation, if the number of idle handlers is less than the number of alert handlers required, the search range can be expanded to cover additional alert processing units.
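Nearest-unit selection and the widening fallback can be sketched as follows; the planar addresses, field names and distance metric are assumptions for illustration.

```python
import math

def nearest_unit(area, units):
    """Pick the alert processing unit whose address is closest to the area."""
    return min(units, key=lambda u: math.dist(area, u["address"]))

def dispatch(area, units, needed):
    """Try units in order of increasing distance until one has enough
    idle handlers to cover the alert on its own."""
    for unit in sorted(units, key=lambda u: math.dist(area, u["address"])):
        if unit["idle"] >= needed:
            return unit
    return None  # no single unit can cover the alert
```

In practice distances would come from road-network routing rather than straight-line `math.dist`, but the selection logic is the same.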
In this embodiment, when an early warning message is received, the positioning information of its reporter is acquired; an area to be early-warned is determined according to the positioning information, and the monitoring video data collected by each monitoring device in that area is acquired; the monitoring video data is recognized through a preset fire recognition model to determine whether an open fire exists in the area to be early-warned; if an open fire exists, the corresponding alert level is acquired; and handlers are scheduled according to the alert level to handle the alert in the area to be early-warned. Because the monitoring video data in the area to be early-warned is retrieved for recognition and confirmation whenever an early warning message is received, and personnel are scheduled only when an alert is confirmed, the waste of response resources caused by false alarms is avoided.
Referring to fig. 3, fig. 3 is a flowchart illustrating a collaborative early warning method based on big data according to a second embodiment of the present invention.
Based on the first embodiment, after the step S30, the collaborative early warning method based on big data in this embodiment further includes:
step S40': and if no open fire exists in the area to be early warned, identifying the monitoring video data to obtain a smoke detection result and personnel movement information.
It should be noted that some fires occur indoors, where no open fire can be observed directly from outside; the absence of an open fire in the area to be early-warned therefore does not mean that there is no alert situation. To avoid misjudgment, the monitoring video data can be recognized to obtain a smoke detection result and personnel movement information for further judgment.
In actual use, a smoke detection model can be used to recognize the monitoring video data and obtain the smoke detection result, and a target tracking model can then be used to recognize the monitoring video data and obtain the personnel movement information. The smoke detection model and the target tracking model are pre-trained deep learning models: the smoke detection model can recognize smoke present in the monitoring video data and classify its concentration, range and the like, while the target tracking model recognizes people in motion in the monitoring video data to obtain the personnel movement information.
In the specific implementation, in order to avoid the occurrence of the phenomenon of misjudgment, the state of the smoke alarm in the area to be early-warned can be obtained, and whether a large amount of smoke exists in the area to be early-warned is determined together according to the state of the smoke alarm and the recognition result of the smoke detection model.
Step S50': and judging whether the area to be early warned is in a warning state or not according to the smoke detection result and the personnel movement information.
It should be noted that whether people are fleeing to avoid danger and whether smoke is appearing in the area to be early-warned can be determined from the smoke detection result and the personnel movement information, so as to judge whether an alert situation exists in the area to be early-warned.
Further, in order to accurately determine whether there is a warning situation, the step S50' in this embodiment may include:
if the smoke detection result indicates that a large amount of smoke exists, judging that the area to be pre-warned has a warning condition;
if the smoke detection result indicates that a small amount of smoke exists or no smoke exists, determining a personnel moving direction vector according to the personnel moving information and the installation angles of the monitoring devices;
determining a moving convergence point based on the person movement direction vector;
and if the aggregation number of the mobile aggregation points is greater than or equal to a preset aggregation threshold value, judging that the warning condition exists in the area to be pre-warned.
It should be noted that if the smoke detection result indicates a large amount of smoke, a fire has evidently occurred in the area to be early-warned and produced that smoke, so it can be directly judged that an alert situation exists. If the smoke detection result indicates a small amount of smoke or no smoke, it cannot be ruled out that the smoke has some other cause, so the presence of an alert cannot be judged directly; in that case the judgment can be made in combination with the personnel movement information.
It can be understood that if the aggregation number of the mobile aggregation points is greater than or equal to the preset aggregation threshold, it indicates that a large number of people move in the same direction, and emergency risk avoidance may be performed, so that it can be determined that an alarm condition exists in the area to be pre-warned, and alarm condition processing personnel are arranged to further confirm, so as to perform processing.
Step S60': and if the warning condition exists in the area to be pre-warned, acquiring a default scheduling rule.
It should be noted that judging only from the smoke detection result and the personnel movement information can determine whether an alert situation exists in the area to be early-warned, but not its specific alert level, so personnel cannot be scheduled according to alert level; a default scheduling rule is therefore used instead.
Wherein, the default scheduling rule can be set by the managers of the early warning device according to actual needs, for example: and setting the default scheduling rule as a scheduling condition processor closest to the area to be pre-warned for confirmation.
Step S70': and scheduling personnel according to the default scheduling rule so as to process the warning condition of the area to be pre-warned.
It should be noted that scheduling personnel according to the default scheduling rule may be done by acquiring the position information and task execution state of each alert handler around the area to be early-warned, and then calling personnel from among them according to the default scheduling rule to handle the alert in the area to be early-warned.
In actual use, the default scheduling rule can specify a personnel search range, a number of personnel to call, a personnel distance weight and a busyness weight. When scheduling personnel according to the default scheduling rule, alert handlers within the search range of the area to be early-warned are first found; the distance of each handler from the area to be early-warned and their task execution state are then obtained; a scheduling score for each handler is computed from that distance, the task execution state, the distance weight and the busyness weight; the handlers are sorted by scheduling score from high to low; and handlers are selected from the top of the ranking, up to the specified number, to handle the alert in the area to be early-warned.
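The weighted scoring of candidate handlers can be sketched as below. The linear score, the normalization by a maximum distance, and the busyness scale in [0, 1] are assumptions; the text only names the two weights.

```python
def scheduling_score(distance, busy, w_dist, w_busy, max_distance):
    """Score a handler: nearer and less busy scores higher.
    distance is in meters; busy is in [0, 1], with 0 meaning fully idle."""
    return w_dist * (1 - distance / max_distance) + w_busy * (1 - busy)

def pick_handlers(handlers, count, w_dist, w_busy, max_distance):
    """Rank handlers by scheduling score, highest first, and take the
    top `count` of them, mirroring the selection described in the text."""
    ranked = sorted(
        handlers,
        key=lambda h: scheduling_score(h["distance"], h["busy"],
                                       w_dist, w_busy, max_distance),
        reverse=True,
    )
    return ranked[:count]
```

With weights 0.6/0.4 a nearby idle handler outranks both a distant idle one and a nearby busy one.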
In a specific implementation, in order to avoid missed alerts, when the smoke detection result indicates a small amount of smoke or no smoke and the aggregation number of the moving convergence points is less than the preset aggregation threshold, an alert confirmation task can be created according to the early warning message and the area to be early-warned and then displayed, so that the alert processing units around the area to be early-warned can confirm it when idle alert handlers are available.
In this embodiment, if no open fire exists in the area to be early-warned, the monitoring video data is recognized to obtain a smoke detection result and personnel movement information; whether the area to be early-warned is in an alert state is judged according to the smoke detection result and the personnel movement information; if an alert exists in the area to be early-warned, a default scheduling rule is acquired; and personnel are scheduled according to the default scheduling rule to handle the alert in the area to be early-warned. Because the monitoring video data can be recognized when no open fire exists, and further judgment made from the recognized smoke detection result and personnel movement information to determine whether an alert exists, alerts in which no open fire can be detected, such as indoor fires, can also be recognized, avoiding missed alarms as far as possible.
In addition, an embodiment of the present invention further provides a storage medium, where a big data-based collaborative early warning program is stored in the storage medium, and when executed by a processor, the big data-based collaborative early warning program implements the steps of the big data-based collaborative early warning method described above.
Referring to fig. 4, fig. 4 is a block diagram illustrating a first embodiment of a big data-based collaborative early warning apparatus according to the present invention.
As shown in fig. 4, the collaborative early warning apparatus based on big data according to the embodiment of the present invention includes:
the message receiving module 10 is configured to, when receiving the warning message, obtain positioning information of a reporter of the warning message;
the video acquisition module 20 is configured to determine an area to be early-warned according to the positioning information, and acquire monitoring video data acquired by each monitoring device in the area to be early-warned;
the data identification module 30 is configured to identify the monitoring video data through a preset fire identification model, and determine whether an open fire exists in the area to be early-warned, where the preset fire identification model is a model trained in advance by using a model training set, and the model training set is constructed by using data acquired by big data;
the level determining module 40 is configured to, if an open fire exists in the area to be early warned, obtain a warning situation level corresponding to the area to be early warned;
and the personnel scheduling module 50 is used for scheduling processing personnel according to the alarm level so as to process the alarm condition of the area to be pre-warned.
In the embodiment, when the early warning message is received, the positioning information of the reporter of the early warning message is acquired; determining an area to be early-warned according to the positioning information, and acquiring monitoring video data acquired by each monitoring device in the area to be early-warned; identifying the monitoring video data through a preset fire identification model, and determining whether open fire exists in the area to be early-warned; if open fire exists in the area to be early-warned, acquiring the warning situation level corresponding to the area to be early-warned; and scheduling the processing personnel according to the alarm level so as to process the alarm condition of the area to be pre-warned. When the early warning message is received, the monitoring video data in the area to be early warned can be called to be recognized and confirmed, and the scheduling personnel can be used for processing the warning condition only when the warning condition is determined, so that the waste of warning resources caused by various false warning phenomena is avoided.
Further, the video capture module 20 is further configured to determine a fuzzy positioning area according to a preset positioning error and the positioning information; determining a center of an early warning area according to the fuzzy positioning area, and determining an area to be early warned according to the center of the early warning area and a preset area threshold; acquiring equipment installation positions of each monitoring equipment; determining target monitoring equipment according to the area to be early-warned and the equipment installation position; and reading the monitoring video data collected by the target monitoring equipment.
Further, the video acquisition module 20 is further configured to acquire a central point of the fuzzy positioning area, and use the central point as the center of the early warning area; determine a plurality of region vertices according to the area center and a preset area threshold; perform semantic recognition on the early warning message, and determine whether a direction indication statement exists in the early warning message; if a direction indication statement exists, determine a vertex adjustment direction according to the direction indication statement; divide the plurality of region vertices into vertices to be adjusted and invariant vertices according to the vertex adjustment direction; adjust the positions of the vertices to be adjusted according to a preset adjustment threshold; and when the adjustment is completed, perform region division according to the adjusted vertices to be adjusted and the invariant vertices, and determine the area to be early-warned.
Further, the data identification module 30 is further configured to identify the monitoring video data to obtain a smoke detection result and personnel movement information if there is no open fire in the area to be early warned; judging whether the area to be early warned is in a warning state or not according to the smoke detection result and the personnel movement information; if the warning condition exists in the area to be pre-warned, acquiring a default scheduling rule; and scheduling personnel according to the default scheduling rule so as to process the warning condition of the area to be pre-warned.
Further, the data identification module 30 is further configured to determine that an alarm condition exists in the area to be pre-warned if the smoke detection result indicates that a large amount of smoke exists; if the smoke detection result indicates that a small amount of smoke exists or no smoke exists, determining a personnel moving direction vector according to the personnel moving information and the installation angles of the monitoring devices; determining a moving convergence point based on the person movement direction vector; and if the aggregation number of the mobile aggregation points is greater than or equal to a preset aggregation threshold value, judging that the warning condition exists in the area to be pre-warned.
Further, the staff scheduling module 50 is further configured to determine the number of alarm handling personnel according to the alarm level; determining a target alarm processing unit according to the area to be pre-warned and the unit address of each alarm processing unit, and acquiring the number of idle processing personnel corresponding to the target alarm processing unit; if the number of idle processing personnel is greater than or equal to the number of alarm handling personnel, acquiring the positioning information of each idle processing person in the target alarm processing unit; and generating a scheduling command according to that positioning information and the position information of the area to be pre-warned, and sending the scheduling command to the early-warning terminals of the idle processing personnel.
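The scheduling rule above can be sketched as follows. The level-to-headcount mapping `STAFF_PER_LEVEL`, the data layout, and the nearest-unit selection are assumptions for illustration; the patent leaves the actual figures, and the escalation path when the nearest unit lacks staff, to configuration.

```python
import math

# Hypothetical mapping from alarm level to required number of handlers.
STAFF_PER_LEVEL = {1: 2, 2: 5, 3: 10}

def schedule_handlers(alarm_level, area_pos, units):
    """units: list of dicts with keys 'address' ((x, y) of the unit) and
    'idle_staff' (list of (staff_id, (x, y) position)). Returns a list
    of dispatch commands, or None when the nearest unit lacks enough
    idle staff (the escalation path is not specified in the text)."""
    needed = STAFF_PER_LEVEL[alarm_level]
    # Target unit: the alarm processing unit closest to the warned area.
    target = min(units, key=lambda u: math.dist(u["address"], area_pos))
    if len(target["idle_staff"]) < needed:
        return None
    # Dispatch the `needed` idle staff members nearest to the area.
    chosen = sorted(target["idle_staff"],
                    key=lambda s: math.dist(s[1], area_pos))[:needed]
    return [{"staff_id": sid, "from": pos, "to": area_pos}
            for sid, pos in chosen]
```

Each returned command pairs a staff member's current position with the position of the area to be pre-warned, matching the scheduling command described above.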
Further, the level determining module 40 is further configured to obtain an identification result of the preset fire identification model if an open fire exists in the area to be early warned; determining an open fire video frame and open fire position information according to the identification result; extracting an open fire image from the open fire video frame according to the open fire position information; comparing the open fire image with fire images corresponding to all warning situation levels in a preset fire level library to obtain image similarity corresponding to all warning situation levels; and determining the warning situation level corresponding to the area to be pre-warned according to the image similarity.
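The image comparison performed by the level determining module can be sketched with a simple histogram-intersection similarity. The patent does not specify the similarity metric, so this metric, the bin count, and the function name are stand-in assumptions; the extracted open fire image is treated here as a flat list of grayscale pixel intensities.

```python
def fire_level(open_fire_image, level_library, bins=32):
    """open_fire_image and the library entries are flat lists of pixel
    intensities in [0, 255]. level_library maps each alarm level to a
    reference fire image. Histogram intersection is used as a stand-in
    similarity metric (assumption, not from the patent)."""
    def hist(pixels):
        h = [0] * bins
        for p in pixels:
            h[min(int(p) * bins // 256, bins - 1)] += 1
        total = len(pixels)
        return [c / total for c in h]

    target = hist(open_fire_image)
    similarity = {
        level: sum(min(a, b) for a, b in zip(hist(ref), target))
        for level, ref in level_library.items()
    }
    # The alarm level whose reference image is most similar wins.
    best = max(similarity, key=similarity.get)
    return best, similarity
```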
It should be understood that the above is only an example; the technical solution of the present invention is not limited thereby, and in specific applications a person skilled in the art may configure the technical solution as needed. The present invention is not limited in this respect.
It should be noted that the above-mentioned work flows are only illustrative and do not limit the scope of the present invention. In practical applications, those skilled in the art may select some or all of them according to actual needs to achieve the purpose of the solution of this embodiment; the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment may refer to the big data based collaborative early warning method provided in any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, or the portion thereof that contributes over the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (e.g., read-only memory (ROM)/RAM, magnetic disk, or optical disk) and includes several instructions for enabling a terminal device (e.g., a mobile phone, computer, server, or network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; all equivalent structures or equivalent process transformations made using the description of the present invention, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.

Claims (6)

1. A big data-based collaborative early warning method is characterized by comprising the following steps:
when an early warning message is received, acquiring the positioning information of a reporter of the early warning message;
determining an area to be early-warned according to the positioning information, and acquiring monitoring video data acquired by each monitoring device in the area to be early-warned;
identifying the monitoring video data through a preset fire identification model to determine whether open fire exists in the area to be early-warned, wherein the preset fire identification model is a model which is trained by using a model training set in advance, and the model training set is constructed through data acquired by big data;
if open fire exists in the area to be early-warned, acquiring the warning situation level corresponding to the area to be early-warned;
scheduling processing personnel according to the alarm level so as to process the alarm condition of the area to be early-warned;
after the step of identifying the monitoring video data through a preset fire identification model and determining whether an open fire image exists in the area to be early-warned, the method further comprises the following steps of:
if no open fire exists in the area to be early warned, identifying the monitoring video data through a smoke detection model to obtain a smoke detection result, wherein the smoke detection result comprises smoke concentration and a smoke range;
identifying the monitoring video data through a target tracking model to obtain personnel movement information;
judging whether the area to be early-warned has warning conditions or not according to the smoke detection result and the personnel movement information;
if the alarm condition exists in the area to be pre-warned, acquiring a default scheduling rule;
scheduling personnel according to the default scheduling rule so as to process the warning condition of the area to be pre-warned;
the step of judging whether the warning condition exists in the area to be pre-warned according to the smoke detection result and the personnel movement information comprises the following steps:
if the smoke detection result indicates that a large amount of smoke exists, judging that the area to be pre-warned has a warning condition;
if the smoke detection result indicates that a small amount of smoke exists or no smoke exists, determining a personnel moving direction vector according to the personnel moving information and the installation angle of each monitoring device;
determining a movement convergence point based on the person movement direction vector;
if the aggregation number of the mobile aggregation points is larger than or equal to a preset aggregation threshold value, judging that the area to be pre-warned has a warning condition;
the step of determining the area to be early-warned according to the positioning information and acquiring the monitoring video data acquired by each monitoring device in the area to be early-warned comprises the following steps:
determining a fuzzy positioning area according to a preset positioning error and the positioning information;
acquiring a central point of the fuzzy positioning area, and taking the central point as a center of an early warning area;
determining a plurality of region vertexes according to the region center and a preset region threshold;
performing semantic recognition on the early warning message, and determining whether a direction indication statement exists in the early warning message;
if a direction indication statement exists, determining a vertex adjustment direction according to the direction indication statement;
dividing the vertexes of the plurality of regions in the vertex adjusting direction into vertexes to be adjusted, and dividing the vertexes of the plurality of regions not in the vertex adjusting direction into invariant vertexes;
extending the vertex to be adjusted along the corresponding direction thereof by a preset adjustment threshold;
when the positions of the vertexes to be adjusted are adjusted, taking the center of the area as the center of a circle, and taking the distance between each vertex to be adjusted and each invariant vertex and the center of the area as the radius, and determining a plurality of circular areas;
taking the intersection area of each circular area as an area to be pre-warned;
acquiring equipment installation positions of each monitoring equipment;
determining target monitoring equipment according to the area to be early-warned and the equipment installation position;
and reading the monitoring video data collected by the target monitoring equipment.
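Read literally, the geometric construction in claim 1 draws one circle per vertex, all centered at the region center, and takes their intersection; because the circles are concentric, that intersection reduces to the disk with the smallest radius. A minimal membership test under this reading (the function name is hypothetical):

```python
import math

def in_warning_area(point, region_center, vertexes):
    """Intersection of circles centered at region_center whose radii
    are the distances of the (adjusted and invariant) vertexes from
    that center; since the circles are concentric, this is simply the
    disk with the smallest such radius."""
    radii = [math.dist(v, region_center) for v in vertexes]
    return math.dist(point, region_center) <= min(radii)
```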
2. The collaborative early warning method based on big data as claimed in claim 1, wherein the step of scheduling processing personnel according to the alarm level to process the alarm of the area to be early warned comprises:
determining the number of the warning processing persons according to the warning level;
determining a target alarm condition processing unit according to the area to be pre-warned and the unit address of each alarm condition processing unit, and acquiring the idle number of processing personnel corresponding to the target alarm condition processing unit;
if the number of the idle processing personnel is more than or equal to the number of the warning processing personnel, acquiring the positioning information of each idle processing personnel in the target warning processing unit;
and generating a scheduling command according to the positioning information and the position information of the area to be pre-warned, and sending the scheduling command to the pre-warning terminal of the idle processing personnel.
3. The collaborative early warning method based on big data according to any one of claims 1-2, wherein the step of obtaining the warning situation level corresponding to the area to be early warned if there is an open fire in the area to be early warned comprises:
if open fire exists in the area to be pre-warned, acquiring a recognition result of the preset fire recognition model;
determining an open fire video frame and open fire position information according to the identification result;
extracting an open fire image from the open fire video frame according to the open fire position information;
comparing the open fire image with fire images corresponding to all warning situation levels in a preset fire level library to obtain image similarity corresponding to all warning situation levels;
and determining the warning situation level corresponding to the area to be pre-warned according to the image similarity.
4. The big data based collaborative early warning device is characterized by comprising the following modules:
the message receiving module is used for acquiring the positioning information of the reporter of the early warning message when the early warning message is received;
the video acquisition module is used for determining an area to be early-warned according to the positioning information and acquiring monitoring video data acquired by each monitoring device in the area to be early-warned;
the data identification module is used for identifying the monitoring video data through a preset fire identification model and determining whether an open fire exists in the area to be early-warned, wherein the preset fire identification model is a model which is trained by using a model training set in advance, and the model training set is constructed through data acquired by big data;
the level determining module is used for acquiring the warning situation level corresponding to the area to be early warned if open fire exists in the area to be early warned;
the personnel scheduling module is used for scheduling processing personnel according to the alarm level so as to process the alarm condition of the area to be pre-warned;
the level determining module is further configured to identify the monitoring video data through a smoke detection model if an open fire does not exist in the area to be warned, and obtain a smoke detection result, where the smoke detection result includes a smoke concentration and a smoke range; identifying the monitoring video data through a target tracking model to obtain personnel movement information; judging whether the area to be early-warned has warning conditions or not according to the smoke detection result and the personnel movement information; if the warning condition exists in the area to be pre-warned, acquiring a default scheduling rule; scheduling personnel according to the default scheduling rule so as to process the warning condition of the area to be early warned;
the level determining module is further configured to determine that an alarm condition exists in the area to be pre-warned if the smoke detection result indicates that a large amount of smoke exists; if the smoke detection result indicates that a small amount of smoke exists or no smoke exists, determining a personnel moving direction vector according to the personnel moving information and the installation angle of each monitoring device; determining a movement convergence point based on the person movement direction vector; if the aggregation number of the mobile aggregation points is larger than or equal to a preset aggregation threshold value, judging that the area to be pre-warned has a warning condition;
the video acquisition module is also used for determining a fuzzy positioning area according to a preset positioning error and the positioning information; acquiring a central point of the fuzzy positioning area, and taking the central point as a center of an early warning area; determining a plurality of region vertexes according to the region center and a preset region threshold; performing semantic recognition on the early warning message, and determining whether a direction indication statement exists in the early warning message; if a direction indication statement exists, determining a vertex adjustment direction according to the direction indication statement; dividing the vertexes of the plurality of regions in the vertex adjusting direction into vertexes to be adjusted, and dividing the vertexes of the plurality of regions not in the vertex adjusting direction into invariable vertexes; extending the vertex to be adjusted along the corresponding direction thereof by a preset adjustment threshold; when the position of the vertex to be adjusted is adjusted, determining a plurality of circular areas by taking the area center as a circle center and respectively taking the distance between each vertex to be adjusted and the distance between each invariant vertex and the area center as a radius; taking the intersection region of each circular region as a region to be pre-warned; acquiring equipment installation positions of each monitoring equipment; determining target monitoring equipment according to the area to be early warned and the equipment installation position; and reading the monitoring video data collected by the target monitoring equipment.
5. The big data based collaborative early warning device is characterized by comprising: a processor, a memory, and a big-data based collaborative early warning program stored on the memory and executable on the processor, the big-data based collaborative early warning program, when executed by the processor, implementing the steps of the big-data based collaborative early warning method according to any one of claims 1-3.
6. A computer-readable storage medium, wherein a big-data-based collaborative early warning program is stored on the computer-readable storage medium, and when executed, the big-data-based collaborative early warning program implements the steps of the big-data-based collaborative early warning method according to any one of claims 1 to 3.
CN202210838710.6A 2022-07-18 2022-07-18 Big data based collaborative early warning method, device, equipment and storage medium Active CN114999099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210838710.6A CN114999099B (en) 2022-07-18 2022-07-18 Big data based collaborative early warning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210838710.6A CN114999099B (en) 2022-07-18 2022-07-18 Big data based collaborative early warning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114999099A CN114999099A (en) 2022-09-02
CN114999099B (en) 2022-12-30

Family

ID=83022230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210838710.6A Active CN114999099B (en) 2022-07-18 2022-07-18 Big data based collaborative early warning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114999099B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392596B (en) * 2023-09-07 2024-04-30 中关村科学城城市大脑股份有限公司 Data processing method, electronic device, and computer-readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1351316A (en) * 2000-10-30 2002-05-29 深圳市宏成电脑有限公司 Automatic alarm and processing network system for fire accident in urban area
CN104635611A (en) * 2015-02-12 2015-05-20 梧州市家安安防有限责任公司 Video monitoring-based networking safe-guard intelligent processing system for quickly fighting terrorist activities
CN205287366U (en) * 2015-02-12 2016-06-08 覃坚成 Long-range high-efficiency intelligent networked fire extinguishing system
CN105723427A (en) * 2016-01-06 2016-06-29 冯旋宇 Fire alarm control method and system for intelligent city
CN105869341A (en) * 2016-04-20 2016-08-17 中国石油化工股份有限公司 Visual fire alarm integrated monitoring method
CN111815893A (en) * 2020-07-14 2020-10-23 深圳云谷星辰信息技术有限公司 Fire-fighting early warning method based on Internet of things and storage medium
CN112767644A (en) * 2020-12-31 2021-05-07 千方捷通科技股份有限公司 Method and device for early warning of fire in highway tunnel based on video identification
CN114257791A (en) * 2021-12-21 2022-03-29 广州西麦科技股份有限公司 Digital fire safety assessment and visual operation and maintenance method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0030330D0 (en) * 2000-12-13 2001-01-24 Hewlett Packard Co Idiom handling in voice service systems
CN101539951A (en) * 2008-03-19 2009-09-23 徐东伟 Construction method for urban disaster prevention and fighting informationalized rescue command comprehensive intelligent platform
CN103297751A (en) * 2013-04-23 2013-09-11 四川天翼网络服务有限公司 Wisdom skynet video behavior analyzing system
CN106845389B (en) * 2017-01-18 2019-09-06 山东大学 It is a kind of to be become separated in flight event detecting method and system based on direction-sociability model crowd
KR101871441B1 (en) * 2018-02-27 2018-06-27 뉴비전엔지니어링(주) Voice recognition field survey and road linear generation system for high definition electronic map production for autonomous vehicle
CN113643497A (en) * 2020-05-11 2021-11-12 广东毓秀科技有限公司 Fire smoke identification system for subway equipment
CN112396815A (en) * 2020-11-09 2021-02-23 张容连 Network mobile terminal emergency rescue intelligent alarm positioning system and use method thereof
CN114511982B (en) * 2022-04-19 2022-07-08 亿慧云智能科技(深圳)股份有限公司 Smoke alarm method and intelligent smoke alarm

Also Published As

Publication number Publication date
CN114999099A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN110807429B (en) Construction safety detection method and system based on tiny-YOLOv3
US20180349685A1 (en) Identity verification via validated facial recognition and graph database
CN110659397B (en) Behavior detection method and device, electronic equipment and storage medium
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN111815916A (en) Monitoring and early warning method and system for target area and mobile terminal
US20120256745A1 (en) Public safety analysis system
CN114999099B (en) Big data based collaborative early warning method, device, equipment and storage medium
CN110619738B (en) Joint defense warning method and device
CN111259840A (en) Land occupation early warning method, device, equipment and storage medium
CN111814725A (en) Early warning method for judging ignition of monitoring video based on CNN + LSTM + MLP combined neural network
CN110839051B (en) Service providing method, device, robot and storage medium
CN114386869A (en) Operation and maintenance work order distribution method and device, electronic equipment and storage medium
CN112633661A (en) BIM-based emergency dispatching command method, system, computer equipment and readable medium
JP2013156912A (en) Maintenance support system for equipment and equipment maintenance server
CN112489351A (en) School fire early warning method based on picture recognition
CN111125290B (en) Intelligent river patrol method and device based on river growth system and storage medium
CN116170566A (en) Intelligent building monitoring management method and device, electronic equipment and storage medium
CN111860187A (en) High-precision worn mask identification method and system
CN112749894A (en) Defect detection model evaluation method and device
CN115966061B (en) Disaster early warning processing method, system and device based on 5G message
CN111667450A (en) Ship quantity counting method and device and electronic equipment
CN111242010A (en) Method for judging and identifying identity of litter worker based on edge AI
CN111047731A (en) AR technology-based telecommunication room inspection method and system
JPWO2019187107A1 (en) Information processing equipment, control methods, and programs
CN113379163A (en) Teaching assistance method, teaching assistance device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant