CN114120089A - Fire fighting equipment auxiliary installation method and system, electronic equipment and storage medium - Google Patents

Fire fighting equipment auxiliary installation method and system, electronic equipment and storage medium Download PDF

Info

Publication number
CN114120089A
CN114120089A (application CN202111409784.XA)
Authority
CN
China
Prior art keywords
fire
fire fighting
data
fighting equipment
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111409784.XA
Other languages
Chinese (zh)
Inventor
冯斌
左晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dafu Safety Technology Co ltd
Original Assignee
Beijing Dafu Safety Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dafu Safety Technology Co ltd filed Critical Beijing Dafu Safety Technology Co ltd
Priority to CN202111409784.XA priority Critical patent/CN114120089A/en
Publication of CN114120089A publication Critical patent/CN114120089A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/12Indexing scheme for image data processing or generation, in general involving antialiasing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Remote Sensing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Fire Alarms (AREA)

Abstract

The application discloses a fire fighting equipment auxiliary installation method and system, an electronic device, and a storage medium. The method comprises: acquiring a target image in a preset fire-fighting space; obtaining a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained through machine learning training using multiple groups of data, and each group of data comprises a sample image and preset feature tags in the sample image, the sample image being map reconstruction data built from SLAM data collected by a robot in the preset fire-fighting space; and obtaining the positions to be installed and the number of fire fighting equipment according to the recognition result and the model information of the fire fighting equipment. The application solves the technical problems of low processing efficiency, errors, and omissions in the fire-fighting evaluation and installation process: a target recognition model based on a neural network assists the installation and evaluation of fire fighting equipment.

Description

Fire fighting equipment auxiliary installation method and system, electronic equipment and storage medium
Technical Field
The application relates to the field of machine learning and fire safety, in particular to a fire fighting equipment auxiliary installation method and device based on a neural network, electronic equipment and a storage medium.
Background
To ensure fire safety in computer rooms, equipment rooms, and advanced physical and chemical laboratories, fire fighting equipment needs to be installed and deployed in these target fire-fighting spaces.
In the related art, installation evaluation of fire fighting equipment depends largely on the experience and manual calculation of technicians, and typically suffers from omissions, calculation errors, and long processing times.
No effective solution has yet been proposed for the problems of low processing efficiency, errors, and omissions in the fire-fighting evaluation and installation process in the related art.
Disclosure of Invention
The application mainly aims to provide a fire fighting equipment auxiliary installation method and system based on a neural network, electronic equipment and a storage medium, so as to solve the problems of low processing efficiency, errors and omission in fire fighting evaluation and installation processes.
In order to achieve the above object, according to one aspect of the present application, there is provided a neural network-based fire fighting equipment auxiliary installation method.
The neural network-based fire fighting equipment auxiliary installation method comprises the following steps: acquiring a target image in a preset fire-fighting space; obtaining a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained through machine learning training using multiple groups of data, and each group of data comprises a sample image and preset feature tags in the sample image, the sample image being map reconstruction data built from SLAM data collected by a robot in the preset fire-fighting space; and obtaining the positions to be installed and the number of fire fighting equipment according to the recognition result and the model information of the fire fighting equipment.
Further, training the target detection model comprises: obtaining multiple groups of labeled image data serving as training samples from the three-dimensional mesh or point cloud map in the map reconstruction data; and training, by machine learning on these multiple groups of data, a target detection model based on an R-CNN or YOLO neural network.
Further, the labeled sets of image data serving as training samples include: the entity material and the structural characteristics of the map data in the preset fire-fighting space are marked in advance; and/or the entity specific heat capacity information characteristics of the map data in the preset fire-fighting space are marked in advance.
Further, the labeled sets of image data serving as training samples include: and pre-marking the fire safety risk point characteristics in the preset fire-fighting space.
Further, after obtaining the positions to be installed and the number of the fire fighting equipment according to the identification result and the model information of the fire fighting equipment, the method further comprises the following steps: judging whether the installation positions and the number of the fire fighting equipment meet fire fighting safety conditions or not according to the actual installation positions and the number of the fire fighting equipment; if the installation positions and the number of the fire fighting equipment are judged to meet the fire fighting safety condition, outputting an evaluation result; and if the installation positions and the number of the fire fighting equipment are judged not to meet the fire fighting safety condition, outputting an installation and deployment suggestion.
Further, the acquiring of the target image in the preset fire-fighting space comprises: acquiring image information acquired in the preset fire-fighting space through an image acquisition device; screening the image information to obtain the target image according to preset fire-fighting installation conditions, wherein the preset fire-fighting installation conditions at least comprise one of the following conditions: space, structure, material, fire point.
Further, obtaining the positions to be installed and the number of fire fighting equipment according to the recognition result and the model information of the fire fighting equipment comprises: judging, according to the installation positions in the recognition result and the model information of the fire fighting equipment, whether each installation position matches the model information; if not, re-determining the installation position; and if so, determining the number of fire fighting equipment to be installed according to the matching result.
In order to achieve the above object, according to another aspect of the present application, there is provided a neural network-based fire fighting equipment auxiliary installation apparatus.
The neural network-based fire fighting equipment auxiliary installation apparatus comprises: an acquisition module, configured to acquire a target image in a preset fire-fighting space; a target detection module, configured to obtain a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained through machine learning training using multiple groups of data, and each group of data comprises a sample image and preset feature tags in the sample image, the sample image being map reconstruction data built from SLAM data collected by a robot in the preset fire-fighting space; and a result output module, configured to obtain the positions to be installed and the number of fire fighting equipment according to the recognition result and the model information of the fire fighting equipment.
According to another aspect of the present application, there is provided an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the neural network-based fire fighting equipment auxiliary installation method described above.
According to still another aspect of the present application, there is provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the neural network-based fire fighting equipment auxiliary installation method described above.
In the embodiments of the present application, the neural network-based fire fighting equipment auxiliary installation method and apparatus acquire a target image in a preset fire-fighting space and obtain a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained through machine learning training using multiple groups of data, each group comprising a sample image and preset feature tags in the sample image, the sample image being map reconstruction data built from SLAM data collected by a robot in the preset fire-fighting space. The positions to be installed and the number of fire fighting equipment are then obtained according to the recognition result and the model information of the fire fighting equipment. This achieves the technical effect of assisting the installation of fire fighting equipment with a neural network-based target recognition model, and solves the technical problems of low processing efficiency, errors, and omissions in the fire-fighting evaluation and installation process.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a schematic diagram of a system architecture for implementing a neural network-based fire fighting equipment auxiliary installation method according to an embodiment of the present application;
FIG. 2 is a flow chart of a neural network-based fire fighting equipment auxiliary installation method according to an embodiment of the application;
FIG. 3 is a schematic structural diagram of a neural network-based fire fighting equipment auxiliary installation device according to an embodiment of the present application;
fig. 4 is a flow chart of a neural network-based fire fighting equipment auxiliary installation method according to the preferred embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and are not necessarily intended to describe a particular sequence or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
SLAM (Simultaneous Localization And Mapping) is mainly used for solving the problem of performing positioning navigation And Mapping when a mobile device runs in an unknown environment.
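As an illustration only, the mapping half of SLAM can be sketched as updating a 2-D occupancy grid from range readings. The grid representation and the hit format below are assumptions made for the sketch, not the robot's actual implementation, which in practice also solves the localization half:

```python
# Toy illustration of the "mapping" half of SLAM: marking cells of a 2-D
# occupancy grid as occupied from simulated range-scanner hits.
# Real SLAM jointly estimates the robot pose; this sketch assumes it is known.
def update_grid(grid, robot_pos, hits):
    """Mark each hit cell (obstacle seen by the scanner) as occupied."""
    rx, ry = robot_pos
    for dx, dy in hits:  # hits are (dx, dy) offsets from the robot position
        x, y = rx + dx, ry + dy
        if 0 <= x < len(grid) and 0 <= y < len(grid[0]):
            grid[x][y] = 1
    return grid

# Two obstacles detected from position (1, 1).
grid = [[0] * 4 for _ in range(4)]
update_grid(grid, (1, 1), [(0, 2), (2, 0)])
```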
As shown in fig. 1, a system for implementing the neural network-based fire fighting equipment auxiliary installation method in an embodiment of the present application comprises: a preset fire-fighting space 100, a target detection model 200, a target image 300, and an image 400 with target recognition frames. In the embodiment of the application, for a preset fire-fighting space 100 such as a computer room, an equipment room, or an advanced physical and chemical laboratory, a robot first localizes in and scans the space and builds a map using current SLAM technology. The target detection model 200 is a neural network model pre-trained through machine learning. The target image 300 is acquired by an image acquisition device and may be a monocular, binocular, or RGBD image. The image 400 with target recognition frames indicates the optimal number of units to deploy and their installation points.
As shown in fig. 2, the method includes steps S201 to S203 as follows:
step S201, acquiring a target image in a preset fire-fighting space;
step S202, obtaining a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained through machine learning training using multiple groups of data, and each group of data comprises a sample image and preset feature tags in the sample image, the sample image being map reconstruction data built from SLAM data collected by a robot in the preset fire-fighting space;
step S203, obtaining the positions to be installed and the number of fire fighting equipment according to the recognition result and the model information of the fire fighting equipment.
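Steps S201 to S203 can be sketched as a simple pipeline. The `Detection` structure, the stub model, and the equipment specification format below are illustrative assumptions, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. a recognized fire safety risk point
    box: tuple   # (x, y, w, h) in image coordinates

def detect(target_image, model):
    """Step S202: run the pre-trained detection model on the target image."""
    return model(target_image)

def plan_installation(detections, equipment_specs):
    """Step S203: derive installation positions and counts from the
    recognition result and the known equipment model information."""
    plan = []
    for det in detections:
        # Pick the first equipment model suited to this risk point.
        for spec in equipment_specs:
            if spec["suits"] == det.label:
                plan.append({"position": det.box[:2], "model": spec["model"]})
                break
    return plan

# Toy usage with a stub standing in for an R-CNN / YOLO model.
stub_model = lambda img: [Detection("fire_risk_point", (10, 20, 5, 5))]
specs = [{"model": "extinguisher-A", "suits": "fire_risk_point"}]
plan = plan_installation(detect(None, stub_model), specs)
```

The number of units to install is then the length of `plan`, one per matched risk point in this simplified sketch.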
From the above description, it can be seen that the following technical effects are achieved by the present application:
the method comprises the following steps of obtaining a target image in a preset fire fighting space by adopting a mode of obtaining the target image, obtaining a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained by using a plurality of groups of data through machine learning training, and each group of data in the plurality of groups of data comprises: the system comprises a sample image and a preset feature tag in the sample image, wherein the sample image is map reconstruction data established by SLAM data of a robot in a preset fire fighting space, and the purpose of obtaining the positions to be installed and the quantity of the fire fighting equipment according to the identification result and the model information of the fire fighting equipment is achieved, so that the technical effect of assisting the installation of the fire fighting equipment by a target identification model based on a neural network is achieved, and the technical problems of low processing efficiency, errors and omission in the fire fighting evaluation and installation process are solved. In addition, the method is an efficient and economical auxiliary installation method of the fire fighting equipment.
In step S201, when the user needs to install the fire fighting equipment in one fire fighting space, the target image in the fire fighting space is obtained.
As an alternative implementation, the target image may be an RGBD image sequence, i.e. a sequence of color images with aligned depth maps.
As an alternative embodiment, the pre-defined fire-fighting space includes, but is not limited to, a computer room, an equipment room, and an advanced physical and chemical laboratory.
In step S202, feature points in the target image are recognized through the pre-trained target detection model to obtain a recognition result of the target image. Generally, each target object in the recognition result is framed with a bounding box.
As an alternative embodiment, the object detection model is derived by machine learning training using multiple sets of data. The target detection model may include, but is not limited to, R-CNN, YOLO, etc., and is not particularly limited in the embodiments of the present application.
As an optional implementation, each of the plurality of sets of data includes: the image processing method comprises a sample image and a preset feature label in the sample image. The preset feature labels can be feature points obtained by manual marking in advance.
As an alternative embodiment, the sample image is map reconstruction data built by a robot from SLAM data in the preset fire-fighting space. Using current SLAM technology, a robot can localize in, scan, and map the fire-fighting space. The resulting map reconstruction data may include topological or metric map data.
In step S203, the positions to be installed and the number of fire fighting devices are obtained according to the recognition result and the known model information of the fire fighting devices. It should be noted that there may be one or more positions to be installed, and one or more devices to install. Generally, a scene with complex terrain and crowded equipment calls for multiple installation positions and multiple units of fire fighting equipment of different models.
As a preferable feature in this embodiment, the pre-trained target detection model includes: obtaining a plurality of groups of image data which are marked and used as training samples according to the three-dimensional grid or point cloud map in the map reconstruction data; and training by machine learning by using the multiple groups of data to obtain the target detection model based on the R-CNN or the YOLO neural network.
In specific implementation, the mesh data or point cloud data is labeled according to the three-dimensional mesh or point cloud map in the map reconstruction data to obtain multiple groups of image data, which are then used to train the target detection model. Preferably, the target detection model is based on an R-CNN neural network.
Alternatively, the target detection model may be based on a YOLO neural network.
As a preferred example in this embodiment, the labeled sets of image data serving as training samples include: the entity material and the structural characteristics of the map data in the preset fire-fighting space are marked in advance; and/or the entity specific heat capacity information characteristics of the map data in the preset fire-fighting space are marked in advance.
In specific implementation, installing and deploying extinguishing devices requires fully assessing information such as space, structure, materials, and fire points. Therefore, the entity materials and structural features of the map data in the preset fire-fighting space are labeled in advance to form multiple groups of image data serving as training samples. Alternatively, the pre-labeled entity specific heat capacity information features of the map data in the preset fire-fighting space are used as multiple groups of image data serving as training samples.
As a preferred example in this embodiment, the labeled sets of image data serving as training samples include: and pre-marking the fire safety risk point characteristics in the preset fire-fighting space.
In specific implementation, since installing and deploying extinguishing devices requires fully assessing information such as space, structure, materials, and fire points, the fire safety risk point features labeled in advance in the preset fire-fighting space are the feature points of fire risk locations. It can be understood that fire risk points can be labeled from prior knowledge and then learned by the machine learning model.
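The assembly of labeled training samples from the map reconstruction data might be sketched as follows. The tile and annotation formats, and the label names (material, structure, specific heat capacity, risk points), are assumptions made for illustration:

```python
def build_training_set(map_tiles, annotations):
    """Pair each rendered map tile with its pre-marked feature labels
    (entity material, structure, specific heat capacity, fire risk points).
    Only annotated tiles become training samples."""
    samples = []
    for tile_id, image in map_tiles.items():
        labels = annotations.get(tile_id, [])
        if labels:  # skip tiles with no pre-marked features
            samples.append({"image": image, "labels": labels})
    return samples

# Toy usage: two rendered tiles, one annotated in advance.
tiles = {"t1": "img_t1", "t2": "img_t2"}
ann = {"t1": ["steel_frame", "high_heat_capacity"]}
train = build_training_set(tiles, ann)
```

Each resulting sample corresponds to one group of data (sample image plus preset feature tags) used in the machine learning training described above.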
As a preferable example in this embodiment, after obtaining the positions to be installed and the number of the fire fighting equipment according to the identification result and the model information of the fire fighting equipment, the method further includes: judging whether the installation positions and the number of the fire fighting equipment meet fire fighting safety conditions or not according to the actual installation positions and the number of the fire fighting equipment; if the installation positions and the number of the fire fighting equipment are judged to meet the fire fighting safety condition, outputting an evaluation result; and if the installation positions and the number of the fire fighting equipment are judged not to meet the fire fighting safety condition, outputting an installation and deployment suggestion.
In specific implementation, after the fire fighting equipment has actually been installed in the preset fire-fighting space, whether the actual installation positions meet the fire safety requirements is evaluated. Whether the installation positions and the number of fire fighting equipment meet the fire safety conditions is judged according to the actual installation positions and number.
Further, if the installation positions and number are judged to meet the fire safety conditions, the evaluation result is output, recorded, and stored in the model. If they are judged not to meet the fire safety conditions, installation and deployment suggestions (alternative installation positions and numbers) are output, and the installation is rectified.
As a preferable option in this embodiment, the acquiring a target image in a preset fire-fighting space includes: acquiring image information acquired in the preset fire-fighting space through an image acquisition device; screening the image information to obtain the target image according to preset fire-fighting installation conditions, wherein the preset fire-fighting installation conditions at least comprise one of the following conditions: space, structure, material, fire point.
In specific implementation, image information acquired in the preset fire-fighting space is obtained through an image acquisition device. The image information comprises at least one group of continuous frames, from which the target image can be screened according to conditions such as space, structure, material, and fire point. Preferably, the target image is determined after the RGBD image sequence is converted into three-dimensional point cloud data.
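The screening step might be sketched as follows, assuming each captured frame carries metadata tags naming which preset fire-fighting installation conditions it depicts; the tag vocabulary is an illustrative assumption:

```python
def screen_frames(frames, conditions):
    """Keep frames whose tags satisfy at least one preset fire-fighting
    installation condition (space, structure, material, fire point)."""
    return [f for f in frames if conditions & set(f["tags"])]

# Toy usage: only the frame tagged with relevant conditions survives.
frames = [
    {"id": 0, "tags": ["corridor"]},
    {"id": 1, "tags": ["material", "fire point"]},
]
targets = screen_frames(frames, {"space", "structure", "material", "fire point"})
```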
As a preferable example in this embodiment, the obtaining, according to the identification result and the model information of the fire fighting equipment, the to-be-installed positions and the number of the fire fighting equipment includes: judging whether each installation position is matched with the type information of the fire fighting equipment or not according to the installation positions in the identification result and the type information of the fire fighting equipment; if not, re-determining the installation position; and if so, determining the quantity of the fire fighting equipment to be installed according to the matching result.
In specific implementation, whether each installation position matches the model information of the fire fighting equipment is judged according to the installation positions in the recognition result and the model information of the fire fighting equipment. For fire fighting equipment of different models, if an installation position does not match, the installation position is re-determined.
Further, if the installation conditions match, the number of fire fighting equipment to be installed is determined according to the matching result, making the method suitable for installation scenarios with complex terrain and crowded equipment.
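A sketch of the matching step, using a hypothetical mounting-height constraint as the match criterion; the field names and constraint are assumptions, not the patent's actual matching rule:

```python
def match_positions(positions, equipment_models):
    """For each candidate position, check whether any equipment model fits;
    unmatched positions are returned for re-determination, and the count of
    matched positions gives the number of units to install."""
    matched, unmatched = [], []
    for pos in positions:
        fits = [m for m in equipment_models
                if m["max_height"] >= pos["height"]]
        if fits:
            matched.append({"position": pos["id"], "model": fits[0]["name"]})
        else:
            unmatched.append(pos["id"])  # to be re-determined
    return matched, unmatched, len(matched)

# Toy usage: one position fits a wall unit, one exceeds its mounting height.
models = [{"name": "wall-unit", "max_height": 3.0}]
positions = [{"id": "p1", "height": 2.5}, {"id": "p2", "height": 6.0}]
matched, unmatched, count = match_positions(positions, models)
```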
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided a neural network-based fire fighting equipment auxiliary installation apparatus for implementing the above method, as shown in fig. 3, the apparatus including:
an obtaining module 301, configured to obtain a target image in a preset fire fighting space;
a target detection module 302, configured to obtain a recognition result of the target image through a pre-trained target detection model, where the target detection model is obtained through machine learning training using multiple sets of data, and each set of data comprises a sample image and preset feature tags in the sample image, the sample image being map reconstruction data built from SLAM data collected by a robot in the preset fire-fighting space;
and the result output module 303 is configured to obtain positions to be installed and the number of the fire fighting equipment according to the identification result and the model information of the fire fighting equipment.
In the acquiring module 301, when a user needs to install fire fighting equipment in a fire fighting space, a target image in the fire fighting space is acquired.
As an alternative implementation, the target image may be an RGBD image sequence, i.e., a sequence of color images with aligned per-pixel depth maps.
As an alternative embodiment, the pre-defined fire-fighting space includes, but is not limited to, a computer room, an equipment room, and an advanced physical and chemical laboratory.
In the target detection module 302, the feature points in the target image are identified through a pre-trained target detection model, and a recognition result of the target image is obtained. Generally, each target object is framed with a bounding box in the recognition result of the target image.
As an alternative embodiment, the object detection model is derived by machine learning training using multiple sets of data. The target detection model may include, but is not limited to, R-CNN, YOLO, etc., and is not particularly limited in the embodiments of the present application.
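To make the shape of such a recognition result concrete, the sketch below shows a plausible list-of-boxes representation and the standard intersection-over-union (IoU) measure commonly used to compare a predicted box with a ground-truth box. The field names and labels are illustrative assumptions, not taken from the application.

```python
# Hedged sketch: a detection result as a list of labeled bounding boxes,
# plus the standard intersection-over-union (IoU) measure used by detectors
# such as R-CNN and YOLO to score box overlap.

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Illustrative recognition result (labels and scores are hypothetical):
detections = [
    {"label": "fire_risk_point", "score": 0.91, "box": (10, 10, 50, 60)},
    {"label": "metal_cabinet", "score": 0.83, "box": (70, 20, 120, 90)},
]
```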
As an optional implementation, each set of data in the multiple sets of data includes a sample image and preset feature labels in the sample image. The preset feature labels may be feature points marked manually in advance.
As an alternative embodiment, the sample image is map reconstruction data created from SLAM data collected by the robot in the preset fire fighting space. Using current SLAM positioning technology, a robot can localize itself in the fire extinguishing space while scanning it to build a map. The resulting map reconstruction data may include topological or metric map data.
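One way to picture such a training set is a pairing of a view rendered from the reconstructed map with its manually preset feature labels. The sketch below is a minimal data-structure illustration; all field names, paths, and label kinds are hypothetical assumptions.

```python
# Hedged sketch: one training sample -- a sample image (a view rendered from
# SLAM map reconstruction data) paired with its preset feature labels.

from dataclasses import dataclass, field

@dataclass
class FeatureLabel:
    kind: str        # e.g. "material", "specific_heat", "risk_point" (assumed taxonomy)
    box: tuple       # (x1, y1, x2, y2) in image coordinates
    value: str = ""  # e.g. material name, or specific heat capacity as text

@dataclass
class TrainingSample:
    image_path: str  # hypothetical path to a view rendered from the map
    labels: list = field(default_factory=list)

sample = TrainingSample(
    image_path="maps/room_a/view_000.png",
    labels=[FeatureLabel("material", (12, 8, 90, 200), "steel"),
            FeatureLabel("risk_point", (140, 30, 180, 80))],
)
```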
In the result output module 303, the positions to be installed and the number of fire fighting equipment units are obtained according to the identification result and the known model information of the fire fighting equipment. It should be noted that there may be one or more positions to be installed, and one or more units of fire fighting equipment. Generally, for a scene with complex terrain and crowded equipment, there are multiple installation positions and multiple units of fire fighting equipment of different models.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. They may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; alternatively, they may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
In order to better understand the above flow, the following explains the above technical solutions with reference to preferred embodiments; the technical solutions of the embodiments of the present invention are not limited thereto.
The fire fighting equipment auxiliary installation method based on the neural network is particularly suitable for scenes with complex terrain and crowded equipment.
Firstly, a target image in a preset fire fighting space is acquired. Then, a recognition result of the target image is obtained through a pre-trained target detection model, where the target detection model is obtained through machine learning training using multiple groups of data, and each group of data includes: a sample image and preset feature labels in the sample image, the sample image being map reconstruction data created from SLAM data collected by a robot in the preset fire fighting space. Finally, the positions to be installed and the number of the fire fighting equipment are obtained according to the identification result and the model information of the fire fighting equipment.
By making full use of SLAM positioning technology, a robot is first used to localize itself in the fire extinguishing space and scan it to build a map; the material structure and specific heat capacity information of the corresponding entities are then marked manually, as are the fire risk points; finally, a trained neural network model, combined with the model information of the specific fire extinguishing apparatus, determines the best number of units to deploy and the best mounting points. Further, after the actual installation is finished, whether the actual installation positions meet the fire fighting requirements is evaluated.
As shown in fig. 4, which is a schematic flow chart of a neural network-based fire fighting equipment auxiliary installation method in the embodiment of the present application, in a scene with a complex terrain and crowded equipment, the method specifically includes the following steps:
in step S401, a target image is acquired.
When a user needs to install fire fighting equipment in a fire fighting space, a target image in the fire fighting space is obtained. The target image may be an RGBD image sequence, i.e., a sequence of color images with aligned per-pixel depth maps.
And step S402, determining a preset fire-fighting space.
A neural network model is trained in advance based on the determined preset fire fighting space. The robot may be used to localize itself in the preset fire fighting space while scanning it to build a map.
In step S403, the target image is input into the target detection model.
The feature points in the target image are identified through the pre-trained target detection model, and a recognition result of the target image is obtained. Generally, each target object is framed with a bounding box in the recognition result of the target image.
The target detection model is derived through machine learning training using multiple sets of data. The target detection model may include, but is not limited to, R-CNN, YOLO, etc., and is not particularly limited in the embodiments of the present application.
Each set of data in the multiple sets of data includes a sample image and preset feature labels in the sample image. The preset feature labels may be feature points marked manually in advance.
The sample image is map reconstruction data created from SLAM data collected by the robot in the preset fire fighting space. Using current SLAM positioning technology, a robot can localize itself in the fire extinguishing space while scanning it to build a map. The resulting map reconstruction data may include topological or metric map data.
Step S404, feature point extraction.
When installing and deploying extinguishing devices, information such as the space, structure, materials, and potential fire points needs to be fully assessed; the extracted feature points therefore include, but are not limited to, the material structure of entities, specific heat capacity information, and fire risk points.
In addition, the images used for training are manually labeled in advance with the material structures, specific heat capacity information, risk points, and the like of the entities.
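The specific heat capacity labels matter because the energy a structure absorbs before reaching a critical temperature follows Q = m·c·ΔT; materials with low specific heat heat up faster and change the deployment priority. The sketch below illustrates this with representative (but assumed, not application-supplied) material values.

```python
# Hedged sketch: why specific heat capacity is among the labeled features.
# Energy needed to raise a mass of material by a temperature delta:
#     Q = m * c * dT
# The material values below are rough handbook figures used for illustration.

def heat_to_raise(mass_kg, c_j_per_kg_k, delta_t_k):
    """Energy in joules to raise `mass_kg` of material by `delta_t_k` kelvin."""
    return mass_kg * c_j_per_kg_k * delta_t_k

# Steel (c ~ 490 J/(kg*K)) vs. the same mass of wood (c ~ 1700 J/(kg*K)):
steel_q = heat_to_raise(100.0, 490.0, 200.0)
wood_q = heat_to_raise(100.0, 1700.0, 200.0)
```

Under these assumed values the steel cabinet absorbs far less energy per degree than the wooden one, which is the kind of distinction the labeled specific heat capacity lets the model account for.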
And S405, outputting the positions to be installed and the number of the fire fighting equipment.
The positions to be installed and the number of the fire fighting equipment are finally output by the target detection model.
Further, after the fire fighting equipment has actually been installed in the preset fire fighting space, whether the actual installation positions meet the fire fighting requirements is evaluated: it is judged, according to the actual installation positions and number of the fire fighting equipment, whether they satisfy the fire safety conditions. If the installation positions and number are judged to satisfy the fire safety conditions, an evaluation result is output, recorded, and stored. If they are judged not to satisfy the fire safety conditions, an installation and deployment suggestion is output, and the installation is rectified.
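A minimal sketch of such a post-installation check is given below. The fire safety condition used here (a minimum unit count and a maximum distance from any risk point to its nearest unit) is an illustrative assumption; the application does not state its concrete criteria.

```python
# Hedged sketch: post-installation evaluation against assumed fire safety
# conditions. Returns whether the installation passes, plus either an
# evaluation result or a deployment suggestion, mirroring the two branches.

import math

def evaluate_installation(units, risk_points, min_units=1, max_dist_m=15.0):
    """units, risk_points: lists of (x, y) positions in meters."""
    if len(units) < min_units:
        return False, f"suggestion: install at least {min_units} unit(s)"
    for rp in risk_points:
        nearest = min(math.dist(rp, u) for u in units)
        if nearest > max_dist_m:
            return False, f"suggestion: add a unit within {max_dist_m} m of {rp}"
    return True, "evaluation result: installation meets the fire safety conditions"
```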
Embodiments of the present application further provide a storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps in any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring a target image in a preset fire fighting space;
S2, obtaining a recognition result of the target image through a pre-trained target detection model, where the target detection model is obtained through machine learning training using multiple groups of data, and each group of data includes: a sample image and preset feature labels in the sample image, the sample image being map reconstruction data created from SLAM data collected by a robot in the preset fire fighting space;
S3, obtaining the positions to be installed and the number of the fire fighting equipment according to the recognition result and the model information of the fire fighting equipment.
Optionally, the storage medium is further arranged to store a computer program for performing the steps of:
S1, obtaining multiple groups of labeled image data used as training samples according to the three-dimensional mesh or point cloud map in the map reconstruction data;
S2, training with the multiple groups of data through machine learning to obtain the target detection model based on an R-CNN or YOLO neural network.
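The two steps above can be outlined as follows. The "trainer" below is a deliberate placeholder: a real implementation would render views of the 3D mesh or point cloud map and fit an R-CNN or YOLO model (e.g. via a deep learning framework), which is out of scope for a sketch; all function and field names are assumptions.

```python
# Hedged outline of the two training steps:
#   S1: derive labeled image data from the map reconstruction;
#   S2: train a detector (placeholder stand-in here, not a real R-CNN/YOLO).

def build_training_set(map_views, annotations):
    """S1: pair each rendered map view with its manual annotations.

    map_views: list of view identifiers (e.g. rendered image names)
    annotations: dict mapping a view identifier to its label list
    """
    return [{"image": v, "labels": annotations.get(v, [])} for v in map_views]

def train_detector(dataset):
    """S2 placeholder: a real system would fit an R-CNN or YOLO model here.
    This stub only tallies label frequencies as a stand-in 'model' so the
    pipeline shape is runnable without a deep learning framework."""
    freq = {}
    for sample in dataset:
        for label in sample["labels"]:
            freq[label] = freq.get(label, 0) + 1
    return freq
```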
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present application further provide an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring a target image in a preset fire fighting space;
S2, obtaining a recognition result of the target image through a pre-trained target detection model, where the target detection model is obtained through machine learning training using multiple groups of data, and each group of data includes: a sample image and preset feature labels in the sample image, the sample image being map reconstruction data created from SLAM data collected by a robot in the preset fire fighting space;
S3, obtaining the positions to be installed and the number of the fire fighting equipment according to the recognition result and the model information of the fire fighting equipment.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An auxiliary installation method of fire fighting equipment, comprising:
acquiring a target image in a preset fire fighting space;
obtaining a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained through machine learning training using multiple groups of data, and each group of data in the multiple groups of data includes: a sample image and preset feature labels in the sample image, the sample image being map reconstruction data created from SLAM data collected by a robot in a preset fire fighting space;
and obtaining the positions to be installed and the number of the fire fighting equipment according to the identification result and the model information of the fire fighting equipment.
2. The method of claim 1, wherein the pre-training of the target detection model comprises:
obtaining multiple groups of labeled image data used as training samples according to the three-dimensional mesh or point cloud map in the map reconstruction data;
and training with the multiple groups of data through machine learning to obtain the target detection model based on an R-CNN or YOLO neural network.
3. The method of claim 2, wherein the sets of image data labeled as training samples comprise:
the entity material and the structural characteristics of the map data in the preset fire-fighting space are marked in advance;
and/or the entity specific heat capacity information characteristics of the map data in the preset fire-fighting space are marked in advance.
4. The method of claim 2, wherein the sets of image data labeled as training samples comprise:
and pre-marking the fire safety risk point characteristics in the preset fire-fighting space.
5. The method of claim 1, wherein after obtaining the location and the number of the fire fighting equipment to be installed according to the identification result and the model information of the fire fighting equipment, the method further comprises:
judging whether the installation positions and the number of the fire fighting equipment meet fire fighting safety conditions or not according to the actual installation positions and the number of the fire fighting equipment;
if the installation positions and the number of the fire fighting equipment are judged to meet the fire fighting safety condition, outputting an evaluation result;
and if the installation positions and the number of the fire fighting equipment are judged not to meet the fire fighting safety condition, outputting an installation and deployment suggestion.
6. The method of claim 1, wherein the acquiring the target image within the preset fire space comprises:
acquiring image information acquired in the preset fire-fighting space through an image acquisition device;
screening the image information to obtain the target image according to preset fire-fighting installation conditions, wherein the preset fire-fighting installation conditions at least comprise one of the following conditions: space, structure, material, fire point.
7. The method of claim 1, wherein the obtaining of the positions and the number of the fire fighting equipment to be installed according to the identification result and the model information of the fire fighting equipment comprises:
judging whether each installation position is matched with the type information of the fire fighting equipment or not according to the installation positions in the identification result and the type information of the fire fighting equipment;
if not, re-determining the installation position;
and if so, determining the quantity of the fire fighting equipment to be installed according to the matching result.
8. A fire apparatus accessory mounting system, comprising:
the acquisition module is used for acquiring a target image in a preset fire fighting space;
the target detection module is used for obtaining a recognition result of the target image through a pre-trained target detection model, wherein the target detection model is obtained through machine learning training using multiple groups of data, and each group of data in the multiple groups of data includes: a sample image and preset feature labels in the sample image, the sample image being map reconstruction data created from SLAM data collected by a robot in the preset fire fighting space;
and the result output module is used for obtaining the positions to be installed and the quantity of the fire fighting equipment according to the identification result and the model information of the fire fighting equipment.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the method of assisted installation of fire fighting equipment according to any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for auxiliary installation of a fire fighting device according to any one of claims 1 to 7.
CN202111409784.XA 2021-11-24 2021-11-24 Fire fighting equipment auxiliary installation method and system, electronic equipment and storage medium Pending CN114120089A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111409784.XA CN114120089A (en) 2021-11-24 2021-11-24 Fire fighting equipment auxiliary installation method and system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114120089A true CN114120089A (en) 2022-03-01

Family

ID=80372561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111409784.XA Pending CN114120089A (en) 2021-11-24 2021-11-24 Fire fighting equipment auxiliary installation method and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114120089A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190178643A1 (en) * 2017-12-11 2019-06-13 Hexagon Technology Center Gmbh Automated surveying of real world objects
CN110427022A (en) * 2019-07-08 2019-11-08 武汉科技大学 A kind of hidden fire-fighting danger detection robot and detection method based on deep learning
CN111651862A (en) * 2020-05-11 2020-09-11 珠海格力电器股份有限公司 Air conditioner, method and device for determining installation position of air conditioner, storage medium and mobile terminal
CN112101181A (en) * 2020-09-10 2020-12-18 湖北烽火平安智能消防科技有限公司 Automatic hidden danger scene recognition method and system based on deep learning
CN113553943A (en) * 2021-07-19 2021-10-26 江苏共知自动化科技有限公司 Target real-time detection method and device, storage medium and electronic device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079401A (en) * 2023-08-15 2023-11-17 江苏鑫赛德智慧建设有限公司 Remote monitoring and early warning method based on fire-fighting Internet of things
CN117079401B (en) * 2023-08-15 2024-06-07 江苏鑫赛德智慧建设有限公司 Remote monitoring and early warning method based on fire-fighting Internet of things
CN118095867A (en) * 2024-04-24 2024-05-28 江苏省建筑工程质量检测中心有限公司 Fire-fighting equipment detection evaluation system and method based on video processing and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination