WO2023125588A1 - Method and apparatus for determining a fire hazard level - Google Patents

Method and apparatus for determining a fire hazard level


Publication number
WO2023125588A1
Authority
WO
WIPO (PCT)
Prior art keywords
risk factor
flame
fire
scene
risk
Prior art date
Application number
PCT/CN2022/142553
Other languages
English (en)
Chinese (zh)
Inventor
孙占辉
陈涛
黄丽达
杨欢
刘罡
王晓萌
刘春慧
史盼盼
狄文杰
刘连顺
赵晨阳
秦阳阳
Original Assignee
北京辰安科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 北京辰安科技股份有限公司
Publication of WO2023125588A1 publication Critical patent/WO2023125588A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services

Definitions

  • the present disclosure relates to the technical field of artificial intelligence, and in particular to a method, device, computer equipment and storage medium for determining a fire hazard level.
  • the temperature information and smoke concentration information of the fire scene can be obtained based on temperature-sensing and smoke-detection technology, so as to raise an alarm for the fire.
  • the level of danger at the scene, however, is difficult to judge from temperature-sensing and smoke-detection technology alone.
  • the present disclosure aims to solve one of the technical problems in the related art at least to a certain extent.
  • the present disclosure provides a method, device, system and storage medium for determining a fire hazard level.
  • a method for determining a fire hazard level, including: acquiring video data of a fire scene; determining, according to the video data, the number of people at the fire scene, the flame color, the flame trend and the scene type; and determining the danger level of the fire according to the number of people, the flame color, the flame trend and the scene type.
  • a device for determining a fire hazard level including:
  • the acquisition module is used to acquire the video data of the fire scene
  • the first determination module is used to determine the number of people, flame color, flame trend and scene type at the fire scene according to the video data;
  • the second determination module is configured to determine the danger level of the fire according to the number of people, flame color, flame trend and scene type.
  • an electronic device including:
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the method described in any one of the first aspects.
  • a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to execute the method described in any one of the first aspects.
  • a computer program product is provided.
  • When the instructions in the computer program product are executed by a processor, the method for determining the fire hazard level proposed by the embodiment of the first aspect of the present disclosure is executed.
  • In the technical solution of the present disclosure, the video data of the fire scene is first obtained; the number of people at the fire scene, the flame color, the flame trend and the scene type are then determined according to the video data; and the danger level of the fire is determined according to these four factors. Based on computer vision, static video summarization, target detection and scene recognition can thus be carried out on the fire scene to determine effective evaluation factors such as the number of people at the fire scene, the flame color, the flame trend and the scene type, and the danger of the fire scene can then be graded accurately and in real time according to the characteristics of each risk assessment factor, which benefits the actual management of the fire scene.
  • FIG. 1 is a schematic flowchart of a method for determining a fire hazard level proposed by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a method for determining a fire hazard level proposed by another embodiment of the present disclosure
  • FIG. 3 is a structural block diagram of a device for determining a fire hazard level provided by the present disclosure
  • FIG. 4 is a block diagram of an electronic device used to implement the fire hazard level determination method of the present disclosure.
  • the execution subject of the method for determining the fire hazard level in this embodiment is the device for determining the fire hazard level, which can be implemented by software and/or hardware and can be configured in computer equipment.
  • the computer equipment may include but is not limited to a terminal, a server, etc.
  • the method for determining the fire hazard level proposed in the present disclosure will be described below with the device for determining the fire hazard level as the execution subject.
  • Fig. 1 is a schematic flowchart of a method for determining a fire hazard level proposed by an embodiment of the present disclosure.
  • the method for determining the fire hazard level includes steps S101 to S103.
  • When a fire breaks out, the videos and images of the fire scene record a great deal of effective information about the scene, such as the scene type, the scale of the fire, the location of the fire, the type of combustibles, and the number of people.
  • the video data of the fire can be obtained through a camera device, for example as a video stream; the pictures in the video are then detected and analyzed to extract key effective information, from which the danger of the fire is analyzed.
  • key frames can be extracted from the video data; for example, the representative frames at the beginning or end of a scene transition can be predicted from the video frame sequence, so that the device can extract these representative frames and analyze them as key frames.
  • a long short-term memory (LSTM) model can be used to discover the relationship between successive image samples in the video frame sequence, and the representative image frames mined from the samples serve as the key frames.
  • the number of key frames may vary according to different settings, and those skilled in the art may select according to actual needs.
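The key-frame selection described above can be sketched without a trained LSTM. As a minimal, illustrative stand-in (the function name, the frame-difference scoring, and the default thresholds are assumptions, not part of the disclosure), frames can be ranked by how strongly they differ from their predecessor and picked greedily subject to a minimum time gap, mirroring the interval threshold mentioned later in the text:

```python
def select_keyframes(frames, times, top_k=3, min_gap=1.0):
    """Pick representative frames by inter-frame change.

    frames: list of flat grayscale pixel lists; times: timestamp (s) per frame.
    A frame scores high when it differs strongly from its predecessor; a
    minimum time gap between chosen frames prevents clustering. This is a
    simplified stand-in for the LSTM-based static video summary above.
    """
    scores = [0.0]  # the first frame has no predecessor
    for prev, cur in zip(frames, frames[1:]):
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        scores.append(diff)
    # Greedily keep the highest-scoring frames subject to the time gap.
    order = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)
    chosen = []
    for i in order:
        if len(chosen) == top_k:
            break
        if all(abs(times[i] - times[j]) >= min_gap for j in chosen):
            chosen.append(i)
    return sorted(chosen)
```

A real implementation would also discard blurry frames, as the disclosure notes; here only change between frames is scored.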
  • the device can parse each key frame to determine the number of people at the fire scene, the flame color, the flame trend, and the scene type corresponding to each key frame.
  • the flame color may be colorless, white, gray, or black.
  • the scene types can be divided into commercial areas, offices, residences, stadium areas, street areas, outdoor activity areas, and natural environments.
  • the keyframes may be detected through the pre-trained neural network model to determine the number of people, flame color, scene type and diagonal length of the flame detection frame corresponding to each keyframe.
  • For example, the Faster R-CNN algorithm, with ResNet-50 as the basic neural network, can be trained on images of commercial areas, offices, residences, stadium areas, street areas, outdoor activity areas, and natural environments from any scene dataset; the trained neural network is then used as the scene classification neural network model of the present disclosure.
  • the scene type label corresponding to each key frame can be determined, such as commercial area, office, or residence.
  • the keyframes can also be detected by using the pre-trained object recognition model to output the time information corresponding to each keyframe, the number of people, and the diagonal length of the flame detection frame.
  • the yolov4 algorithm can be used to detect smoke, flames, and people.
  • any flame and smoke scene dataset can be used as the basis for pre-training, and the basic neural network model is trained to obtain the target recognition model.
  • the keyframes can be input into the target recognition model to output the time information corresponding to each keyframe, the number of people and the diagonal length of the flame detection frame.
  • the flame trend corresponding to each key frame can be determined, such as initial-stage combustion, development-stage combustion, full combustion, and decline-stage combustion.
  • For each key frame, its adjacent reference key frames, such as the two preceding frames, can be determined; from the diagonal lengths of the flame detection frames in the current key frame and its reference key frames, that is, from the flame size together with the timing information, the device can determine the flame trend corresponding to each key frame.
  • S103 Determine the danger level of the fire according to the number of people, flame color, flame trend and scene type.
  • the emergency and the carrier jointly determine the degree of danger of the accident.
  • Fire is an emergency, and the location of the incident and the crowd at the location of the incident as the carrier jointly affect the danger of the fire.
  • the state of the fire is reflected by the flame trend and the smoke, which can be judged from the color of the smoke at the scene; therefore, the degree of fire danger can be determined from the smoke color, the number of people, the flame trend, and the scene type.
  • corresponding risk coefficients can be determined for the number of people, flame color, flame trend and scene type, and then the final risk assessment value can be determined according to the weight corresponding to each indicator.
  • the risk level can then be reclassified to determine the final risk level of the fire scene, such as general risk, relatively major risk, major risk, and extremely serious risk.
  • In the technical solution of the present disclosure, the video data of the fire scene is first obtained; the number of people at the fire scene, the flame color, the flame trend and the scene type are then determined according to the video data; and the danger level of the fire is determined according to these four factors. Based on computer vision, static video summarization, target detection and scene recognition can thus be carried out on the fire scene to determine effective evaluation factors such as the number of people at the fire scene, the flame color, the flame trend and the scene type, and the danger of the fire scene can then be graded accurately and in real time according to the characteristics of each risk assessment factor, which benefits the actual management of the fire scene.
  • Fig. 2 is a schematic flowchart of a method for determining a fire hazard level proposed by an embodiment of the present disclosure.
  • the method for determining the fire hazard level includes steps S201 to S208.
  • step S201 may refer to the foregoing embodiments, and details are not described here.
  • the video data may correspond to a sequence of video frames, that is, video stream information.
  • redundant and blurry frames in the video may be filtered and screened out using static video summarization technology, so as to obtain the valid frames of the video, also known as key frames.
  • the key frames in the sequence of video frames can be determined according to the clarity and content of each frame, that is, the amount of information and the time interval between each frame of images.
  • the key frame is also a representative picture, which is more conducive to reflecting various factors of the fire scene.
  • the LSTM algorithm can be used to perform static video summarization on the video stream.
  • time information corresponding to key frames and the number of key frames need to be recorded.
  • a certain threshold should be set for the time interval between key frames, to prevent excessive information reduction from affecting the judgment.
  • the key frames can be detected by a pre-trained neural network model to determine the number of people, flame color, scene type, and diagonal length of the flame detection frame corresponding to each key frame.
  • For example, the Faster R-CNN algorithm, with ResNet-50 as the basic neural network, can be trained on images of commercial areas, offices, residences, stadium areas, street areas, outdoor activity areas, and natural environments from any scene dataset; the trained neural network is then used as the scene classification neural network model of the present disclosure.
  • the scene type label corresponding to each key frame can be determined, such as commercial area, office, or residence.
  • the keyframes can also be detected by using the pre-trained object recognition model to output the time information corresponding to each keyframe, the number of people, and the diagonal length of the flame detection frame.
  • the yolov4 algorithm can be used to detect smoke, flames, and people.
  • any flame and smoke scene dataset can be used as the basis for pre-training; the basic neural network model is trained with a large number of labeled smoke and flame pictures to obtain the target recognition model.
  • the keyframes can be input into the target recognition model, so as to output the time information corresponding to each keyframe, the number of people, and the diagonal length of the flame detection frame.
  • each key frame can be parsed to determine the diagonal length of the flame detection frame contained in it; the flame trend is then determined according to the time interval between key frames and the diagonal length of the flame detection frame in each key frame.
  • the flame trend can be judged according to the change in size of the flame detection frame in the picture after the fire starts.
  • the diagonal sizes of the flame detection frames in the two key frames preceding the current key frame can be extracted as a reference. For example, if the time corresponding to the current key frame is T_y and the diagonal length of its flame detection frame is D_y, then the times corresponding to the two preceding key frames are T_{y-1} and T_{y-2}, and the corresponding diagonal lengths are D_{y-1} and D_{y-2}.
  • the initial stage corresponds to a concave function rising over a monotone interval with a low slope; since the slope is low, a slope below a preset threshold is regarded as the initial stage of flame combustion. The change in flame-frame size in the development stage likewise corresponds to a rising concave function over a monotone interval, but with a slope above the preset threshold; the full combustion stage corresponds to a convex function or a linear function parallel to the time axis; and the decline stage corresponds to a monotonically decreasing function.
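The stage rules above can be sketched as a small classifier over three successive flame-box diagonals. A minimal sketch (the function name and both threshold values are illustrative assumptions; the disclosure fixes only the qualitative rules):

```python
def flame_trend(diagonals, times, slope_thresh=1.0, flat_thresh=0.1):
    """Classify the flame trend from three successive flame-box diagonals.

    diagonals/times are ordered oldest -> newest, e.g. (D_{y-2}, D_{y-1}, D_y)
    and (T_{y-2}, T_{y-1}, T_y). Per the text: shrinking -> decline stage;
    roughly flat -> full combustion; rising below the slope threshold ->
    initial stage; rising faster -> development stage.
    """
    d2, d1, d0 = diagonals
    t2, t1, t0 = times
    # Average growth rate of the diagonal over the two intervals.
    s1 = (d1 - d2) / (t1 - t2)
    s2 = (d0 - d1) / (t0 - t1)
    slope = (s1 + s2) / 2
    if slope < -flat_thresh:
        return "decline"
    if slope <= flat_thresh:
        return "full_combustion"
    return "initial" if slope < slope_thresh else "development"
```

The concave/convex distinction in the text would need a second difference of the diagonals; this sketch uses only the average slope, which is the part the preset threshold acts on.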
  • the color of the smoke depends on the type of combustibles, and the color of the smoke can assist in judging the degree of fire burning and the degree of danger at the scene.
  • the first risk coefficient may be a coefficient determined according to the danger of flame color.
  • White smoke, which has the lowest temperature and indicates a small fire intensity, is set as the general risk factor.
  • Gray smoke should not be underestimated: it very likely indicates smoldering, or high-temperature material about to ignite, so it is set as the relatively large risk factor.
  • Yellow-green smoke may indicate the burning of toxic chemicals and is set as the major risk factor.
  • black smoke has the highest temperature and usually occurs when the fire is burning the most violently.
  • such smoke is also mixed with raging flames; this is the most dangerous period of the fire, so it is set as the extremely serious risk factor.
  • the general risk coefficient can correspond to a value in the range of [0, 0.25], the relatively large risk coefficient to a value in (0.25, 0.5], the major risk coefficient to a value in (0.5, 0.75], and the extremely serious risk coefficient to a value in (0.75, 1].
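One possible realization of the smoke-color mapping above (the dictionary, the color keys, and the concrete midpoint values are illustrative assumptions; the text fixes only the bands):

```python
# Smoke color -> first risk coefficient. Band boundaries follow the text;
# the concrete values are illustrative midpoints of each band.
SMOKE_COLOR_RISK = {
    "white": 0.125,         # general risk: [0, 0.25]
    "gray": 0.375,          # relatively large risk: (0.25, 0.5]
    "yellow_green": 0.625,  # major risk: (0.5, 0.75]
    "black": 0.875,         # extremely serious risk: (0.75, 1]
}

def first_risk_factor(smoke_color):
    # Unknown colors default to the lowest band (an assumption).
    return SMOKE_COLOR_RISK.get(smoke_color, 0.125)
```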
  • Different stages of the flame correspond to different degrees of danger.
  • the initial stage of the fire has a low degree of danger, but it is necessary to stay alert to flashover of the flame; in the decline stage, the fire gradually becomes smaller, the temperature gradually decreases, and the degree of danger also decreases.
  • the risk factor corresponding to the decline stage of the flame trend can be determined as the general risk factor;
  • the risk factor corresponding to the initial stage of the flame trend can be determined as the relatively large risk factor;
  • the risk factor corresponding to the development stage of the flame trend can be determined as the major risk factor;
  • the risk factor corresponding to the full combustion stage of the flame trend can be determined as the extremely serious risk factor.
  • the general risk coefficient can correspond to a value in the range of [0, 0.25], the relatively large risk coefficient to a value in (0.25, 0.5], the major risk coefficient to a value in (0.5, 0.75], and the extremely serious risk coefficient to a value in (0.75, 1].
  • the third risk factor may be determined according to the risk corresponding to the scene type.
  • the third risk coefficient can be set, under the standards of the disclosed coefficient system, according to the randomness of the distribution and types of combustibles, the randomness of fire sources, and the human activity conditions, specifically for commercial areas, offices, residences, venue areas, street areas, natural environments and outdoor activity areas.
  • the first-level risk coefficient can correspond to a value in the range of [0, 0.25];
  • the second-level risk coefficient can correspond to a value in the range of (0.25, 0.5];
  • the third-level risk coefficient can correspond to a value in the range of (0.5, 0.75];
  • the fourth-level risk coefficient can correspond to a value in the range of (0.75, 1].
  • the fourth risk factor may be a risk factor determined according to the number of persons.
  • the corresponding risk coefficient can be set based on the number of people at the fire scene. For example, after the number of people at the fire scene is determined as P: if P is less than 10, the risk coefficient can be determined as the primary risk coefficient; if P is in [10, 50), as the intermediate risk coefficient; if P is in [50, 100), as the high-level risk coefficient; and if P is greater than or equal to 100, as the extremely serious risk coefficient.
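The head-count rule above maps directly to a small function. The thresholds (10 / 50 / 100) come from the text; the returned values are illustrative midpoints of the stated coefficient ranges, and the function name is an assumption:

```python
def fourth_risk_factor(p):
    """Map the detected head count P to the fourth risk coefficient."""
    if p < 10:
        return 0.125  # primary: [0, 0.25]
    if p < 50:
        return 0.375  # intermediate: (0.25, 0.5]
    if p < 100:
        return 0.625  # high-level: (0.5, 0.75]
    return 0.875      # extremely serious: (0.75, 1]
```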
  • the primary risk coefficient can correspond to a value in the range of [0, 0.25];
  • the intermediate risk coefficient can correspond to a value in the range of (0.25, 0.5];
  • the high-level risk coefficient can correspond to a value in the range of (0.5, 0.75];
  • the extremely serious risk coefficient can correspond to a value in the range of (0.75, 1].
  • the risk level can be determined according to a preset reference weight, and the first risk factor, the second risk factor, the third risk factor and the fourth risk factor.
  • a corresponding reference weight can be preset for each risk factor: for the first risk factor, a reference weight corresponding to the flame color; for the second risk factor, a reference weight corresponding to the flame trend; for the third risk factor, a reference weight corresponding to the scene type; and for the fourth risk factor, a reference weight corresponding to the number of people.
  • for example, if the first risk factor is A with reference weight a1, the second risk factor is B with reference weight a2, the third risk factor is C with reference weight a3, and the fourth risk factor is D with reference weight a4, then the overall risk assessment value can be computed as S = a1·A + a2·B + a3·C + a4·D.
  • the corresponding risk level can be determined according to the range of S, such as general risk, relatively major risk, major risk, and extremely serious risk.
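Putting the pieces together, the overall assessment can be sketched as a weighted sum of the four coefficients. The weights below are hypothetical (the disclosure does not fix their values; they only need to sum to 1), and the level cut-offs reuse the quartile bands used for the individual coefficients:

```python
def risk_level(a_coeff, b_coeff, c_coeff, d_coeff,
               weights=(0.3, 0.3, 0.2, 0.2)):
    """Weighted risk score S = a1*A + a2*B + a3*C + a4*D, bucketed by range."""
    a1, a2, a3, a4 = weights
    s = a1 * a_coeff + a2 * b_coeff + a3 * c_coeff + a4 * d_coeff
    if s <= 0.25:
        return "general risk"
    if s <= 0.5:
        return "relatively major risk"
    if s <= 0.75:
        return "major risk"
    return "extremely serious risk"
```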
  • In the technical solution of the present disclosure, the video data of the fire scene is first obtained; the key frames in the video data are determined according to the clarity and content of each frame image and the time interval between frame images; the key frames are parsed to determine the number of people at the fire scene, the flame color, the flame trend and the scene type; the first risk coefficient is then determined according to the flame color, the second risk coefficient according to the flame trend, the third risk coefficient according to the scene type, and the fourth risk coefficient according to the number of people; and the risk level is determined according to the first, second, third and fourth risk coefficients.
  • In this way, the valid frames and effective reference information of the video data of the fire scene can be extracted, and the danger level of the fire scene can be graded according to important factors such as the number of people, the flame color, the flame trend and the scene type, helping decision makers make the right decision in time.
  • the device 300 for determining the fire hazard level includes: an acquisition module 310 , a first determination module 320 , and a second determination module 330 .
  • the acquisition module is used to acquire the video data of the fire scene.
  • the first determination module is configured to determine the number of people at the fire scene, flame color, flame trend and scene type according to the video data.
  • the second determination module is configured to determine the danger level of the fire according to the number of people, flame color, flame trend and scene type.
  • the first determining module includes a first determining unit and an analyzing unit.
  • the first determination unit is configured to determine the key frames in the video data according to the clarity and content of each frame image in the video data and the time interval between frame images.
  • the parsing unit is configured to parse the key frames to determine the number of people, flame color, flame trend and scene type at the fire scene.
  • the analysis unit is specifically configured to: parse each key frame to determine the diagonal length of the flame detection frame contained in it; and determine the flame trend according to the time interval between key frames and the diagonal length of the flame detection frame in each key frame.
  • the second determination module includes a second determination unit, a third determination unit, a fourth determination unit, a fifth determination unit and a sixth determination unit.
  • the second determining unit is configured to determine a first risk factor according to the flame color.
  • the third determining unit is configured to determine a second risk factor according to the flame trend.
  • a fourth determining unit configured to determine a third risk factor according to the scene type.
  • the fifth determining unit is configured to determine a fourth risk factor according to the number of people.
  • a sixth determining unit configured to determine a risk level according to the first risk factor, the second risk factor, the third risk factor, and the fourth risk factor.
  • the sixth determining unit is specifically configured to: determine the risk level according to a preset reference weight, and the first risk factor, the second risk factor, the third risk factor, and the fourth risk factor .
  • In the technical solution of the present disclosure, the video data of the fire scene is first obtained; the number of people at the fire scene, the flame color, the flame trend and the scene type are then determined according to the video data; and the danger level of the fire is determined according to these four factors. Based on computer vision, static video summarization, target detection and scene recognition can thus be carried out on the fire scene to determine effective evaluation factors such as the number of people at the fire scene, the flame color, the flame trend and the scene type, and the danger of the fire scene can then be graded accurately and in real time according to the characteristics of each risk assessment factor, which benefits the actual management of the fire scene.
  • the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • an electronic device including: at least one processor; and a memory communicatively connected to the at least one processor.
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can implement the method for determining the fire hazard level in any of the above embodiments.
  • a computer-readable storage medium is provided.
  • the server can execute the method for determining the fire hazard level in any of the above-mentioned embodiments.
  • a computer program product including computer programs/instructions is provided.
  • the method for determining the fire hazard level in any of the above embodiments is realized.
  • FIG. 4 shows a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the device 400 includes a computing unit 401, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 402 or loaded from a storage unit 408 into a random access memory (RAM) 403. Various programs and data necessary for the operation of the device 400 can also be stored in the RAM 403.
  • the computing unit 401, ROM 402, and RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404 .
  • the following components are connected to the I/O interface 405: an input unit 406, such as a keyboard or mouse; an output unit 407, such as various types of displays and speakers; a storage unit 408, such as a magnetic disk or optical disk; and a communication unit 409, such as a network card, modem, or wireless communication transceiver.
  • the communication unit 409 allows the device 400 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • the computing unit 401 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, microcontroller, etc.
  • the computing unit 401 executes the various methods and processes described above, such as the method for determining a fire hazard level.
  • the fire hazard level determination method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 408 .
  • part or all of the computer program may be loaded and/or installed on the device 400 via the ROM 402 and/or the communication unit 409.
  • When the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the method for determining the fire hazard level described above may be performed.
  • the computing unit 401 may be configured in any other appropriate way (for example, by means of firmware) to execute the method for determining the fire hazard level.
  • Various implementations of the systems and techniques described herein can be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • the programmable processor can be a special-purpose or general-purpose programmable processor that receives data and instructions from a storage system, at least one input device, and at least one output device, and transmits data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program codes for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing devices, so that when the program codes are executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • The systems and techniques described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user can be received in any form (including acoustic, speech, or tactile input).
  • The systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), middleware components (e.g., an application server), or front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with embodiments of the systems and techniques described herein), or any combination of such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • The server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes defects of traditional physical hosts and VPS ("Virtual Private Server") services, such as difficult management and weak business scalability.
  • the server can also be a server of a distributed system, or a server combined with a blockchain.
  • In the technical solution of the present disclosure, the video data of the fire scene is first obtained; the number of people, the flame color, the flame trend, and the scene type of the fire scene are then determined from the video data; and the hazard level of the fire is determined according to the number of people, the flame color, the flame trend, and the scene type. Thus, based on computer vision, static video summarization, target detection, and scene recognition can be performed on the fire scene to extract effective risk-assessment factors such as the number of people at the fire scene, the flame color, the flame trend, and the scene type. The hazard of the fire scene can then be classified accurately and in real time according to the characteristics of each risk factor, which benefits the actual management of the fire scene.
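The factor-combination step described above can be sketched as follows. This is a minimal illustration only: the disclosure does not specify the scoring tables, weights, or thresholds, so all numeric values and category names below are hypothetical.

```python
# Hypothetical sketch: combine the four risk factors (number of people,
# flame color, flame trend, scene type) into a fire hazard level.
# All scores, weights, and thresholds are illustrative assumptions.

FLAME_COLOR_SCORE = {"red": 1, "orange": 2, "yellow": 3, "blue-white": 4}
FLAME_TREND_SCORE = {"shrinking": 1, "steady": 2, "growing": 3}
SCENE_TYPE_SCORE = {"outdoor open": 1, "residential": 2,
                    "high-rise": 3, "chemical plant": 4}


def people_score(count: int) -> int:
    """More people at the scene means higher evacuation and rescue risk."""
    if count < 10:
        return 1
    if count < 50:
        return 2
    return 3


def hazard_level(people: int, flame_color: str,
                 flame_trend: str, scene_type: str) -> str:
    """Weighted combination of the four factors, thresholded into a level."""
    score = (
        0.3 * people_score(people)
        + 0.2 * FLAME_COLOR_SCORE[flame_color]
        + 0.3 * FLAME_TREND_SCORE[flame_trend]
        + 0.2 * SCENE_TYPE_SCORE[scene_type]
    )
    if score < 1.5:
        return "low"
    if score < 2.5:
        return "medium"
    return "high"
```

In practice each factor would be produced by the upstream vision pipeline (crowd counting, flame-color classification, flame-trend estimation, and scene recognition), and the weights would be tuned to the deployment scenario.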
  • Steps may be reordered, added, or deleted using the various forms of flow shown above.
  • Each step described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved.

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Fire Alarms (AREA)

Abstract

The present disclosure relates to the technical field of artificial intelligence, and provides a method and apparatus for determining a fire hazard level, a device, and a storage medium. A specific solution comprises: obtaining video data of the scene of a fire; determining, from the video data, the number of people, a flame color, a flame trend, and a scene type at the scene of the fire; and determining a hazard level for the fire according to the number of people, the flame color, the flame trend, and the scene type.
PCT/CN2022/142553 2021-12-29 2022-12-27 Procédé et appareil de détermination de niveau de danger d'incendie WO2023125588A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111640215.6 2021-12-29
CN202111640215.6A CN114494944A (zh) 2021-12-29 2021-12-29 火灾危险等级的确定方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023125588A1 true WO2023125588A1 (fr) 2023-07-06

Family

ID=81507858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/142553 WO2023125588A1 (fr) 2021-12-29 2022-12-27 Procédé et appareil de détermination de niveau de danger d'incendie

Country Status (2)

Country Link
CN (1) CN114494944A (fr)
WO (1) WO2023125588A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597603A (zh) * 2023-07-19 2023-08-15 山东哲远信息科技有限公司 一种智能消防火灾报警系统及其控制方法
CN117409193A (zh) * 2023-12-14 2024-01-16 南京深业智能化系统工程有限公司 一种烟雾场景下的图像识别方法、装置及存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494944A (zh) * 2021-12-29 2022-05-13 北京辰安科技股份有限公司 火灾危险等级的确定方法、装置、设备及存储介质
CN116824462B (zh) * 2023-08-30 2023-11-07 贵州省林业科学研究院 一种基于视频卫星的森林智能防火方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324064A (zh) * 2011-08-25 2012-01-18 陈�光 基于传感带的动态火灾风险评估方法与系统
CN103065413A (zh) * 2012-12-13 2013-04-24 中国电子科技集团公司第十五研究所 获取火灾等级信息的方法及装置
US20160275087A1 (en) * 2015-03-20 2016-09-22 Tata Consultancy Services, Ltd. Computer implemented system and method for determining geospatial fire hazard rating of an entity
CN109800961A (zh) * 2018-12-27 2019-05-24 深圳市中电数通智慧安全科技股份有限公司 一种火灾救援决策方法、装置、存储介质及终端设备
CN111681355A (zh) * 2020-06-03 2020-09-18 安徽沧浪网络科技有限公司 一种适用智慧校园的安防系统
CN114494944A (zh) * 2021-12-29 2022-05-13 北京辰安科技股份有限公司 火灾危险等级的确定方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324064A (zh) * 2011-08-25 2012-01-18 陈�光 基于传感带的动态火灾风险评估方法与系统
CN103065413A (zh) * 2012-12-13 2013-04-24 中国电子科技集团公司第十五研究所 获取火灾等级信息的方法及装置
US20160275087A1 (en) * 2015-03-20 2016-09-22 Tata Consultancy Services, Ltd. Computer implemented system and method for determining geospatial fire hazard rating of an entity
CN109800961A (zh) * 2018-12-27 2019-05-24 深圳市中电数通智慧安全科技股份有限公司 一种火灾救援决策方法、装置、存储介质及终端设备
CN111681355A (zh) * 2020-06-03 2020-09-18 安徽沧浪网络科技有限公司 一种适用智慧校园的安防系统
CN114494944A (zh) * 2021-12-29 2022-05-13 北京辰安科技股份有限公司 火灾危险等级的确定方法、装置、设备及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597603A (zh) * 2023-07-19 2023-08-15 山东哲远信息科技有限公司 一种智能消防火灾报警系统及其控制方法
CN116597603B (zh) * 2023-07-19 2023-10-10 山东哲远信息科技有限公司 一种智能消防火灾报警系统及其控制方法
CN117409193A (zh) * 2023-12-14 2024-01-16 南京深业智能化系统工程有限公司 一种烟雾场景下的图像识别方法、装置及存储介质
CN117409193B (zh) * 2023-12-14 2024-03-12 南京深业智能化系统工程有限公司 一种烟雾场景下的图像识别方法、装置及存储介质

Also Published As

Publication number Publication date
CN114494944A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
WO2023125588A1 (fr) Procédé et appareil de détermination de niveau de danger d'incendie
CN107547555B (zh) 一种网站安全监测方法及装置
US20210034819A1 (en) Method and device for identifying a user interest, and computer-readable storage medium
Bauman et al. Using social sensors for detecting emergency events: a case of power outages in the electrical utility industry
CN112948897B (zh) 一种基于drae与svm相结合的网页防篡改检测方法
CN112765485A (zh) 网络社会事件预测方法、系统、终端、计算机设备及介质
Singh et al. Methods to detect an event using artificial intelligence and Machine Learning
CN112153373A (zh) 明厨亮灶设备的故障识别方法、装置及存储介质
KR101685334B1 (ko) 키워드 관련도 기반의 재난 이슈별 재난 탐지 기술 및 이를 이용한 재난대처 방법
CN116384736A (zh) 一种智慧城市的风险感知方法及系统
Liu et al. Effects of governmental data governance on urban fire risk: A city-wide analysis in China
CN114238330A (zh) 一种数据处理方法、装置、电子设备和存储介质
CN113141370B (zh) 一种内部网络流量的恶意dns隧道识别方法
Pohl et al. Supporting crisis management via sub-event detection in social networks
CN114461763B (zh) 一种基于突发词聚类的网络安全事件抽取方法
CN111160025A (zh) 一种基于公安文本的主动发现案件关键词的方法
CN115619245A (zh) 一种基于数据降维方法的画像构建和分类方法及系统
Liu et al. Research on design of intelligent background differential model for training target monitoring
Dao et al. Leveraging Knowledge Graphs for CheapFakes Detection: Beyond Dataset Evaluation
CN113505217A (zh) 基于大数据实现工程造价数据库快速形成的方法和系统
CN113343010A (zh) 一种图像识别方法、电子设备及计算机可读存储介质
Ye et al. GAN-enabled framework for fire risk assessment and mitigation of building blueprints
CN116361463B (zh) 一种地震灾情信息提取方法、装置、设备及介质
Sari et al. Threshold value optimization to improve fire performance classification using HOG and SVM
KR20200108937A (ko) 가짜 뉴스 판단 시스템, 판단 방법 및 이를 실행시키기 위한 프로그램을 기록한 컴퓨터 판독 가능한 기록 매체

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22914840

Country of ref document: EP

Kind code of ref document: A1