CN113674423A - Fire cause determination method, device, server and readable storage medium - Google Patents


Info

Publication number
CN113674423A
Authority
CN
China
Prior art keywords
fire
scene
point
dimensional model
reason
Prior art date
Legal status
Pending
Application number
CN202110995100.2A
Other languages
Chinese (zh)
Inventor
崔岩 (Cui Yan)
Current Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Priority date
Filing date
Publication date
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, 4Dage Co Ltd filed Critical China Germany Zhuhai Artificial Intelligence Institute Co ltd
Priority to CN202110995100.2A priority Critical patent/CN113674423A/en
Publication of CN113674423A publication Critical patent/CN113674423A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The application is applicable to the technical field of spatial data processing, and provides a fire cause determination method, device, server and readable storage medium, wherein the method comprises the following steps: acquiring a three-dimensional model of the fire scene; identifying the trace types in the three-dimensional model; determining a fire point and a fire spreading direction according to the trace types; deciding the cause of the fire according to the fire point and the fire spreading direction; and sending the fire cause to the user terminal to instruct the user terminal to display it to the user. In this way, after the fire scene is modeled, the three-dimensional model is further analyzed to determine the cause of the fire, which is then pushed to the user for review, achieving automatic determination of the fire cause.

Description

Fire cause determination method, device, server and readable storage medium
Technical Field
The application belongs to the technical field of spatial data processing, and particularly relates to a fire cause determination method, a fire cause determination device, a fire cause determination server and a readable storage medium.
Background
Fire rescue departments are an important guarantee of public safety. Fire scene investigation is the systematic investigation work that a fire rescue department carries out, within the scope of authority specified by laws and regulations, on the fire scene, related places, articles, corpses and all objects capable of proving the fire cause, the fire nature and the fire responsibility, using scientific means and investigation methods, and drawing fire conclusions through on-site analysis. However, factors such as man-made damage, fire suppression and the limitations of existing techniques increase the difficulty of fire scene investigation, so fire investigators can often only judge the fire scene manually from past experience.
Disclosure of Invention
The embodiments of the application provide a fire cause determination method, device, server and readable storage medium, which can solve the problem in the prior art that the cause of a fire can only be determined manually.
In a first aspect, an embodiment of the present application provides a fire cause determination method, including: acquiring a fire scene three-dimensional model;
identifying the trace type in the three-dimensional model of the fire scene;
determining a fire point and a fire spreading direction according to the trace type;
deciding the cause of the fire according to the fire point and the fire spreading direction;
and sending the fire cause to a user terminal to instruct the user terminal to display the fire cause to a user.
In one possible implementation manner of the first aspect, obtaining a three-dimensional model of a fire scene includes:
acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
generating a point cloud according to the panoramic image to be processed;
and reconstructing according to the point cloud to obtain a fire scene three-dimensional model.
In one possible implementation manner of the first aspect, identifying a trace type in the three-dimensional model of the fire scene includes:
inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model into a pre-trained instance segmentation neural network model to obtain point cloud areas corresponding to the same scene object;
and determining the trace type in the scene object according to the geometrical characteristics of the point cloud area.
In one possible implementation manner of the first aspect, the instance segmentation neural network model includes an input conversion module, a feature conversion module, a max-pooling module and a classification module;
inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model into the pre-trained instance segmentation neural network model to obtain the point cloud areas corresponding to the same scene object includes:
based on the three-dimensional coordinates of the point cloud, adjusting the segmentation direction of the three-dimensional model according to the input conversion module;
extracting and aligning local features of the point cloud of the three-dimensional model with the adjusted segmentation direction according to the feature conversion module;
extracting global features according to the maximum pooling module based on the local features;
and outputting point cloud areas corresponding to the same scene object according to the classification module based on the local features and the global features.
In a possible implementation manner of the first aspect, determining a trace type in the scene object according to a geometric feature of the point cloud region includes:
extracting geometric features of the point cloud area;
storing the geometric features into a preset K-d tree index structure, performing nearest neighbor search with a scene descriptor, and determining the geometric features matched with the scene descriptor;
and determining the trace type in the scene object according to the scene information corresponding to the scene descriptor matched with the geometric features.
In a possible implementation manner of the first aspect, deciding the cause of the fire according to the fire point and the fire spreading direction includes:
inputting the fire point, the fire spreading direction and the scene information corresponding to the fire point into a preset rule engine as event data, and outputting the cause of the fire.
In a possible implementation manner of the first aspect, the preset rule engine includes a rule base, a pattern matcher and an agenda component;
inputting the fire point, the fire spreading direction and the scene information corresponding to the fire point as event data into the preset rule engine and outputting the cause of the fire includes:
calling the rule conditions in the rule base;
matching the event data against the rule conditions according to the pattern matcher, determining candidate fire causes as execution actions according to the matching results, and outputting the matched event data and rule conditions, together with the execution actions corresponding to them, to the agenda component;
and deciding, by the agenda component, the execution order of the matched event data and rule conditions and their corresponding execution actions according to a logic conflict decision strategy, and outputting the cause of the fire.
In a second aspect, an embodiment of the present application provides a fire cause determining apparatus, including:
the acquisition module is used for acquiring a three-dimensional model of a fire scene;
the identification module is used for identifying the trace type in the three-dimensional model of the fire scene;
the determining module is used for determining a fire point and a fire spreading direction according to the trace type;
the decision module is used for deciding the cause of the fire according to the fire point and the fire spreading direction;
and the sending module is used for sending the fire cause to a user terminal so as to instruct the user terminal to display the fire cause to the user.
In a possible implementation manner of the second aspect, the obtaining module includes:
the acquisition submodule is used for acquiring a panoramic image to be processed, and the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
the generation submodule is used for generating a point cloud according to the panoramic image to be processed;
and the reconstruction submodule is used for reconstructing to obtain a fire scene three-dimensional model according to the point cloud.
In one possible implementation manner of the second aspect, the identification module includes:
the instance segmentation submodule is used for inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model to a pre-trained instance segmentation neural network model to obtain point cloud areas corresponding to the same scene object;
and the judgment sub-module is used for determining the trace type in the scene object according to the geometrical characteristics of the point cloud area.
In one possible implementation manner of the second aspect, the instance segmentation neural network model includes an input conversion module, a feature conversion module, a max-pooling module, and a classification module;
the instance partitioning sub-module includes:
the adjusting unit is used for adjusting the segmentation direction of the three-dimensional model according to the input conversion module based on the three-dimensional coordinates of the point cloud;
the conversion unit is used for extracting and aligning local features of the point cloud of the three-dimensional model with the adjusted segmentation direction according to the feature conversion module;
the pooling unit is used for extracting global features according to the maximum pooling module based on the local features;
and the classification unit is used for outputting the point cloud areas corresponding to the same scene object according to the classification module based on the local features and the global features.
In a possible implementation manner of the second aspect, the determining sub-module includes:
the extraction unit is used for extracting the geometric features of the point cloud area;
the matching unit is used for storing the geometric features into a preset K-d tree index structure, performing nearest neighbor search on the geometric features and the scene descriptors, and determining the geometric features matched with the scene descriptors;
and the judging unit is used for determining the trace type in the scene object according to the scene information corresponding to the scene descriptor matched with the geometric features.
In one possible implementation manner of the second aspect, the decision module includes:
and the decision submodule is used for inputting the fire point, the fire spreading direction and the scene information corresponding to the fire point into a preset rule engine as event data and outputting the cause of the fire.
In one possible implementation manner, the preset rule engine includes a rule base, a pattern matcher and an agenda component;
the decision sub-module comprises:
the calling unit is used for calling the rule conditions in the rule base;
the execution unit is used for matching the event data against the rule conditions according to the pattern matcher, determining candidate fire causes as execution actions according to the matching results, and outputting the matched event data and rule conditions, together with the execution actions corresponding to them, to the agenda component;
and the conflict processing unit is used for deciding, by the agenda component, the execution order of the matched event data and rule conditions and their corresponding execution actions according to a logic conflict decision strategy, and outputting the cause of the fire.
In a third aspect, an embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
in the embodiment of the application, a three-dimensional model of a fire scene is obtained, the trace type in the three-dimensional model of the fire scene is identified, a fire point and a fire spreading direction are determined according to the trace type, a fire reason is determined according to the fire point and the fire spreading direction, and the fire reason is sent to a user terminal so as to indicate the user terminal to display the fire reason to a user. Therefore, after the fire scene is modeled, the fire scene three-dimensional model is further identified to judge the fire reason and pushed to a user for checking, and the effect of automatically judging the fire reason is achieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a fire cause determination method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an implementation of step S102 in fig. 1 of a fire cause determination method according to an embodiment of the present application;
fig. 3 is a detailed flowchart of step S104 in fig. 1 of a fire cause determination method according to an embodiment of the present application;
fig. 4 is a detailed flowchart of step S302 in fig. 3 of a fire cause determination method according to an embodiment of the present application;
fig. 5 is a detailed flowchart of step S304 in fig. 3 of a fire cause determination method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a fire cause determination device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The technical solutions provided in the embodiments of the present application will be described below with specific embodiments.
Example one
Referring to fig. 1, a schematic flowchart of the fire cause determination method provided in the embodiment of the present application is shown. By way of example and not limitation, the method may be applied to a server, preferably a cloud server, which is connected to the depth camera and the user terminal respectively, and the method may include the following steps:
and S102, acquiring a three-dimensional model of the fire scene.
As shown in fig. 2, which is a flowchart of a specific implementation of step S102 in fig. 1, acquiring the three-dimensional model of the fire scene includes:
and step S202, acquiring a panoramic image to be processed.
The panoramic image to be processed is a panoramic image shot by a depth camera at the fire scene. The depth camera of the embodiment of the application may be an eight-lens camera, i.e. a camera composed of an upper group and a lower group of fisheye lenses with four lenses per group; the lenses collect four groups of images that are stitched into a 360-degree panorama.
It can be understood that the user collects the spatial information of the fire scene through the depth camera at the fire scene, so that the server can reconstruct a three-dimensional model of the fire scene according to the spatial information of the fire scene sent by the depth camera, and the server sends the three-dimensional model of the fire scene to the user terminal for the user to view.
For example, the operation flow of the user at the fire scene may be: install and fix the depth camera, power on the depth camera, enable the WiFi connection of a mobile phone or iPad, launch the companion applet (mini program) on the phone or iPad, plan a shooting route, move the depth camera along the route to shoot, let the depth camera store the spatial data, and have the depth camera automatically upload the spatial data to the server.
And S204, generating a point cloud according to the panoramic image to be processed.
In a specific application, feature points of the panoramic image to be processed are extracted according to a preset feature extraction algorithm (such as the Harris corner detection algorithm, the FAST corner detection algorithm, the SIFT extraction algorithm or the SURF extraction algorithm), target feature points with a matching relationship are screened out, the matched target feature points are processed according to an SfM (structure-from-motion) algorithm to calculate the depth information and the position information of the depth camera, and the three-dimensional coordinates of the point cloud are obtained according to the following formula:
[X, Y, Z]^T = d · K^{-1} · [u, v, 1]^T
wherein (u, v) are the pixel coordinates of each target feature point in the panoramic image to be processed, d is the depth value of each target feature point in the panoramic image to be processed, K is the intrinsic matrix of the depth camera, and (X, Y, Z) are the three-dimensional coordinates of the point cloud. Illustratively, the intrinsic parameters of the depth camera may be calculated using Zhang's calibration method (Zhang Zhengyou calibration).
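By way of illustration only, the following Python sketch applies this back-projection with NumPy, assuming a simple pinhole model and a hypothetical intrinsic matrix K (a true panoramic or fisheye image would need the corresponding projection model instead):

import numpy as np

def backproject(u, v, d, K):
    # Implements [X, Y, Z]^T = d * K^{-1} * [u, v, 1]^T for one feature point.
    return d * np.linalg.inv(K) @ np.array([u, v, 1.0])

# Hypothetical intrinsics (focal lengths and principal point), e.g. as
# estimated by Zhang's calibration method.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

point = backproject(u=700.0, v=400.0, d=2.5, K=K)   # -> (X, Y, Z) of one point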
And S206, reconstructing according to the point cloud to obtain a fire scene three-dimensional model.
In a specific application, the SfM algorithm processes the point cloud offline to reconstruct the three-dimensional model of the fire scene.
And S104, identifying the trace type in the three-dimensional model of the fire scene.
It will be appreciated that the three-dimensional model of the fire scene includes scene objects, such as tables, chairs, burnt objects, and particularly traces. Wherein the traces include, but are not limited to, one or more of the following: smoke marks, combustion marks, carbonization marks, deformation marks, and friction marks.
As shown in fig. 3, which is a detailed flowchart of step S104 in fig. 1, identifying the trace type in the three-dimensional model of the fire scene includes:
and S302, inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model into a pre-trained example segmentation neural network model to obtain a point cloud area corresponding to the same scene object.
The instance segmentation neural network model comprises an input conversion module, a feature conversion module, a maximum pooling module and a classification module.
As shown in fig. 4, which is a detailed flowchart of step S302 in fig. 3, inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model into the pre-trained instance segmentation neural network model to obtain the point cloud areas corresponding to the same scene object includes:
and S402, adjusting the segmentation direction of the three-dimensional model according to the input conversion module based on the three-dimensional coordinates of the point cloud.
And S404, extracting and aligning local features of the point cloud of the three-dimensional model with the adjusted segmentation direction according to the feature conversion module.
And S406, extracting global features according to a maximum pooling module based on the local features.
And step S408, outputting the point cloud areas corresponding to the same scene object according to the classification module based on the local features and the global features.
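The four modules of steps S402 to S408 mirror a PointNet-style segmentation network. Purely as an illustrative sketch (the layer sizes and the two T-Net transforms below are assumptions chosen for clarity, not the exact network of this application), such an architecture can be written in PyTorch as:

import torch
import torch.nn as nn

class TNet(nn.Module):
    # Predicts a k x k transform used to adjust the input points (k=3, step
    # S402) or to align the local features (k=64, step S404).
    def __init__(self, k):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Conv1d(k, 64, 1), nn.ReLU(),
                                 nn.Conv1d(64, 1024, 1), nn.ReLU())
        self.fc = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(),
                                nn.Linear(256, k * k))

    def forward(self, x):                        # x: (batch, k, n_points)
        g = self.mlp(x).max(dim=2).values        # pooled feature (batch, 1024)
        t = self.fc(g).view(-1, self.k, self.k)
        return t + torch.eye(self.k, device=x.device)   # bias towards identity

class PointSegNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.input_tnet = TNet(3)                # input conversion module (S402)
        self.local_mlp = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU())
        self.feature_tnet = TNet(64)             # feature conversion module (S404)
        self.global_mlp = nn.Sequential(nn.Conv1d(64, 1024, 1), nn.ReLU())
        self.classifier = nn.Conv1d(64 + 1024, num_classes, 1)   # classification module (S408)

    def forward(self, pts):                      # pts: (batch, 3, n_points)
        pts = torch.bmm(self.input_tnet(pts), pts)
        local = self.local_mlp(pts)              # per-point local features
        local = torch.bmm(self.feature_tnet(local), local)
        glob = self.global_mlp(local).max(dim=2, keepdim=True).values   # max pooling module (S406)
        glob = glob.expand(-1, -1, pts.shape[2])
        return self.classifier(torch.cat([local, glob], dim=1))   # per-point class logits

logits = PointSegNet(num_classes=8)(torch.randn(1, 3, 1024))   # (1, 8, 1024)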
And S304, determining the type of the trace in the scene object according to the geometrical characteristics of the point cloud area.
It should be noted that windows, natural objects, fused power lines, etc. among the scene objects can also be identified.
As shown in fig. 5, which is a detailed flowchart of step S304 in fig. 3, determining the trace type in the scene object according to the geometric features of the point cloud area includes:
and step S502, extracting geometric features of the point cloud area.
Wherein the geometric features include, but are not limited to, surface normal features, point feature histogram features, fast point feature histogram features, or orientation histogram features.
And step S504, storing the geometric features into a preset K-d tree index structure, performing nearest neighbor search with the scene descriptor, and determining the geometric features matched with the scene descriptor.
The scene descriptor is pre-trained and comprises an incidence relation between the standard geometric features and the corresponding scene objects.
And S506, determining the trace type in the scene object according to the scene information corresponding to the scene descriptor matched with the geometric features.
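As a minimal illustration of steps S502 to S506, the sketch below stores the geometric features extracted from the point cloud areas in a K-d tree (here SciPy's cKDTree) and performs a nearest neighbour search with each scene descriptor; the feature vectors and labels are hypothetical placeholders:

import numpy as np
from scipy.spatial import cKDTree

# Geometric features extracted from the point cloud areas (illustrative values).
region_features = np.array([[0.91, 0.12, 0.33],
                            [0.15, 0.88, 0.41],
                            [0.52, 0.47, 0.95]])

# Pre-trained scene descriptors: standard geometric features associated with
# scene information (hypothetical labels and values).
descriptors = {"smoke trace":         np.array([0.90, 0.10, 0.30]),
               "combustion trace":    np.array([0.14, 0.90, 0.40]),
               "carbonization trace": np.array([0.50, 0.45, 0.97])}

tree = cKDTree(region_features)                  # preset K-d tree index (step S504)
for scene_info, descriptor in descriptors.items():
    dist, idx = tree.query(descriptor, k=1)      # nearest neighbour search
    print(f"area {idx} matches '{scene_info}' (distance {dist:.3f})")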
And S106, determining a fire point and a fire spreading direction according to the trace type.
In a specific application: when the trace type is a smoke trace, the deepest smoke trace is determined as the fire point, and the direction of the smoke trace from shallow to deep is determined as the fire spreading direction; when the trace type is a combustion trace, the bottom of a V-shaped burn pattern can be determined as the fire point, and the direction of the combustion trace from shallow to deep is determined as the fire spreading direction; when the trace type is a carbonization trace, the deepest carbonization trace is determined as the fire point, and the direction of the carbonization trace from shallow to deep is determined as the fire spreading direction; when the trace type is a deformation trace, the deepest deformation trace is determined as the fire point, and the direction of the deformation trace from shallow to deep is determined as the fire spreading direction; when the trace type is a friction trace, the deepest friction trace is determined as the fire point, and the direction of the friction trace from shallow to deep is determined as the fire spreading direction.
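A minimal sketch of this rule, assuming a per-point trace "depth" score (for example char depth or smoke density, which the application does not specify) is available for the point cloud area of one trace:

import numpy as np

def fire_point_and_direction(points, depth_scores):
    # points: (n, 3) coordinates of one trace area; depth_scores: (n,) trace
    # depth per point. The deepest trace is taken as the fire point, and the
    # shallow-to-deep direction as the fire spreading direction.
    points = np.asarray(points, dtype=float)
    depth_scores = np.asarray(depth_scores, dtype=float)
    fire_point = points[np.argmax(depth_scores)]
    shallowest = points[np.argmin(depth_scores)]
    direction = fire_point - shallowest          # from shallow towards deep
    return fire_point, direction / np.linalg.norm(direction)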
And S108, deciding the cause of the fire according to the fire point and the fire spreading direction.
In a specific application, the fire point, the fire spreading direction and the scene information corresponding to the fire point are input into a preset rule engine as event data, and the cause of the fire is output.
Illustratively, the preset rule engine includes a rule base, a pattern matcher, and an agenda component.
Inputting the fire point, the fire spreading direction and the scene information corresponding to the fire point as event data into the preset rule engine and outputting the cause of the fire includes the following steps:
and step S8-1, calling the rule condition in the rule base.
The rule conditions include a first rule condition, a second rule condition and a third rule condition, the first rule condition corresponds to a first rule action, the second rule condition corresponds to a second rule action, and the third rule condition corresponds to a third rule action.
Specifically, rule condition 1 is: in the three-dimensional model of the fire scene, taking the fire point as the starting point, a window among the scene objects within a preset distance range along the fire spreading direction is in an open state; the corresponding execution action 1 is: confirming that the candidate fire cause is arson. Rule condition 2 is: in the three-dimensional model of the fire scene, taking the fire point as the starting point, natural objects exist among the scene objects within a preset distance range along the fire spreading direction; the corresponding execution action 2 is: confirming that the candidate fire cause is spontaneous combustion. Rule condition 3 is: in the three-dimensional model of the fire scene, taking the fire point as the starting point, a fused power line exists among the scene objects within a preset distance range along the fire spreading direction; the corresponding execution action 3 is: confirming that the candidate fire cause is a short circuit in the electrical wiring.
And step S8-2, matching the event data against the rule conditions according to the pattern matcher, determining candidate fire causes as execution actions according to the matching results, and outputting the matched event data and rule conditions, together with the execution actions corresponding to them, to the agenda component.
And step S8-3, deciding, by the agenda component, the execution order of the matched event data and rule conditions and their corresponding execution actions according to a logic conflict decision strategy, and outputting the cause of the fire.
The logic conflict decision strategy comprises a priority strategy, a complexity strategy, a breadth strategy, a depth strategy, a random strategy and the like.
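By way of illustration, the sketch below encodes rule conditions 1 to 3 as condition-action pairs and uses a simple salience-based (priority) strategy as the agenda's conflict resolution; the event-data flags are assumed to have been precomputed from the three-dimensional model:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]    # pattern over the event data
    action: str                          # candidate fire cause
    salience: int = 0                    # used by the priority strategy

rule_base = [
    Rule("rule-1", lambda e: e.get("open_window", False), "arson", salience=1),
    Rule("rule-2", lambda e: e.get("natural_objects", False), "spontaneous combustion", salience=2),
    Rule("rule-3", lambda e: e.get("fused_power_line", False), "electrical short circuit", salience=3),
]

def decide_fire_cause(event_data: dict) -> Optional[str]:
    # Pattern matcher: collect every rule whose condition the event data satisfies.
    matched = [r for r in rule_base if r.condition(event_data)]
    # Agenda: order conflicting matches (priority strategy) and fire the first.
    agenda = sorted(matched, key=lambda r: r.salience, reverse=True)
    return agenda[0].action if agenda else None

event_data = {"fire_point": (1.2, 0.4, 0.0),
              "spread_direction": (0.0, 1.0, 0.0),
              "fused_power_line": True}
print(decide_fire_cause(event_data))     # -> electrical short circuit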
It can be understood that, in the embodiment of the present application, the fire cause is determined according to rules preset in the rule engine; compared with the prior art, in which a large amount of training data is required to train a neural network model to determine the fire cause, no training data is needed.
Step S110, sending the fire cause to the user terminal to instruct the user terminal to display the fire cause to the user.
In a specific application, the server sends the fire cause to the user terminal of the corresponding fire investigator for review.
In the embodiment of the application, a three-dimensional model of the fire scene is obtained, the trace types in the model are identified, a fire point and a fire spreading direction are determined according to the trace types, the cause of the fire is decided according to the fire point and the fire spreading direction, and the fire cause is sent to a user terminal to instruct the terminal to display it to the user. In this way, after the fire scene is modeled, the three-dimensional model is further analyzed to determine the cause of the fire, which is then pushed to the user for review, achieving automatic determination of the fire cause.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of a fire cause determination device according to an embodiment of the present application, which corresponds to the method described in the above embodiment, and only the relevant portions of the embodiment of the present application are shown for convenience of explanation.
Referring to fig. 6, the apparatus includes:
the acquiring module 61 is used for acquiring a fire scene three-dimensional model;
the identification module 62 is used for identifying the trace type in the three-dimensional model of the fire scene;
a determining module 63, configured to determine a fire point and a fire spreading direction according to the trace type;
a decision module 64, configured to decide the cause of the fire according to the fire point and the fire spreading direction;
a sending module 65, configured to send the fire cause to a user terminal, so as to instruct the user terminal to display the fire cause to the user.
In one possible implementation manner, the obtaining module includes:
the acquisition submodule is used for acquiring a panoramic image to be processed, and the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
the generation submodule is used for generating a point cloud according to the panoramic image to be processed;
and the reconstruction submodule is used for reconstructing to obtain a fire scene three-dimensional model according to the point cloud.
In one possible implementation, the identification module includes:
the instance segmentation submodule is used for inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model to a pre-trained instance segmentation neural network model to obtain point cloud areas corresponding to the same scene object;
and the judgment sub-module is used for determining the trace type in the scene object according to the geometrical characteristics of the point cloud area.
In one possible implementation, the instance segmentation neural network model includes an input conversion module, a feature conversion module, a max-pooling module, and a classification module;
the instance partitioning sub-module includes:
the adjusting unit is used for adjusting the segmentation direction of the three-dimensional model according to the input conversion module based on the three-dimensional coordinates of the point cloud;
the conversion unit is used for extracting and aligning local features of the point cloud of the three-dimensional model with the adjusted segmentation direction according to the feature conversion module;
the pooling unit is used for extracting global features according to the maximum pooling module based on the local features;
and the classification unit is used for outputting the point cloud areas corresponding to the same scene object according to the classification module based on the local features and the global features.
In one possible implementation, the determining sub-module includes:
the extraction unit is used for extracting the geometric features of the point cloud area;
the matching unit is used for storing the geometric features into a preset K-d tree index structure, performing nearest neighbor search on the geometric features and the scene descriptors, and determining the geometric features matched with the scene descriptors;
and the judging unit is used for determining the trace type in the scene object according to the scene information corresponding to the scene descriptor matched with the geometric features.
In one possible implementation, the decision module includes:
and the decision submodule is used for inputting the fire point, the fire spreading direction and the scene information corresponding to the fire point into a preset rule engine as event data and outputting the cause of the fire.
In one possible implementation manner, the preset rule engine includes a rule base, a pattern matcher and an agenda component;
the decision sub-module comprises:
the calling unit is used for calling the rule conditions in the rule base;
the execution unit is used for matching the event data against the rule conditions according to the pattern matcher, determining candidate fire causes as execution actions according to the matching results, and outputting the matched event data and rule conditions, together with the execution actions corresponding to them, to the agenda component;
and the conflict processing unit is used for deciding, by the agenda component, the execution order of the matched event data and rule conditions and their corresponding execution actions according to a logic conflict decision strategy, and outputting the cause of the fire.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 7 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 7, the server 7 of this embodiment includes: at least one processor 70, a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70; the processor 70 implements the steps in any of the various method embodiments described above when executing the computer program 72.
The server 7 may be a computing device such as a cloud server. The server may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will appreciate that fig. 7 is merely an example of the server 7 and does not constitute a limitation on the server 7, which may include more or fewer components than those shown, combine some components, or use different components, such as input and output devices, network access devices, etc.
The processor 70 may be a Central Processing Unit (CPU), or another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 71 may, in some embodiments, be an internal storage unit of the server 7, such as a hard disk or memory of the server 7. The memory 71 may also be an external storage device of the server 7 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the server 7. Further, the memory 71 may include both an internal storage unit of the server 7 and an external storage device. The memory 71 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a readable storage medium, which is preferably a computer readable storage medium, and the computer readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps in the above-mentioned method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a server, recording medium, computer Memory, Read-Only Memory (ROM), Random-Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A fire cause determination method, comprising:
acquiring a fire scene three-dimensional model;
identifying the trace type in the three-dimensional model of the fire scene;
determining a fire point and a fire spreading direction according to the trace type;
deciding the cause of the fire according to the fire point and the fire spreading direction;
and sending the fire cause to a user terminal to instruct the user terminal to display the fire cause to a user.
2. A fire cause determination method according to claim 1, wherein obtaining a three-dimensional model of a fire scene comprises:
acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
generating a point cloud according to the panoramic image to be processed;
and reconstructing according to the point cloud to obtain a fire scene three-dimensional model.
3. A fire cause determination method according to claim 1, wherein identifying a type of trace in the three-dimensional model of the fire scene comprises:
inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model into a pre-trained instance segmentation neural network model to obtain point cloud areas corresponding to the same scene object;
and determining the trace type in the scene object according to the geometrical characteristics of the point cloud area.
4. A fire cause determination method according to claim 3, wherein the instance-segmented neural network model includes an input conversion module, a feature conversion module, a max-pooling module, and a classification module;
inputting the three-dimensional coordinates of the point cloud in the fire scene three-dimensional model into the pre-trained instance segmentation neural network model to obtain the point cloud area corresponding to the same scene object includes:
based on the three-dimensional coordinates of the point cloud, adjusting the segmentation direction of the three-dimensional model according to the input conversion module;
extracting and aligning local features of the point cloud of the three-dimensional model with the adjusted segmentation direction according to the feature conversion module;
extracting global features according to the maximum pooling module based on the local features;
and outputting point cloud areas corresponding to the same scene object according to the classification module based on the local features and the global features.
5. A fire cause determination method according to claim 3, wherein determining a type of a trace in the scene object based on a geometric feature of the point cloud area comprises:
extracting geometric features of the point cloud area;
storing the geometric features into a preset K-d tree index structure, performing nearest neighbor search with a scene descriptor, and determining the geometric features matched with the scene descriptor;
and determining the trace type in the scene object according to the scene information corresponding to the scene descriptor matched with the geometric features.
6. The fire cause determination method according to any one of claims 1 to 5, wherein deciding the cause of the fire according to the fire point and the fire spreading direction includes:
inputting the fire point, the fire spreading direction and the scene information corresponding to the fire point into a preset rule engine as event data, and outputting the cause of the fire.
7. A fire cause determination method according to claim 6, wherein the preset rule engine includes a rule base, a pattern matcher and an agenda component;
inputting the fire point, the fire spreading direction and the scene information corresponding to the fire point as event data into the preset rule engine and outputting the cause of the fire includes:
calling the rule conditions in the rule base;
matching the event data against the rule conditions according to the pattern matcher, determining candidate fire causes as execution actions according to the matching results, and outputting the matched event data and rule conditions, together with the execution actions corresponding to them, to the agenda component;
and deciding, by the agenda component, the execution order of the matched event data and rule conditions and their corresponding execution actions according to a logic conflict decision strategy, and outputting the cause of the fire.
8. A fire cause determination device, comprising:
the acquisition module is used for acquiring a three-dimensional model of a fire scene;
the identification module is used for identifying the trace type in the three-dimensional model of the fire scene;
the determining module is used for determining a fire point and a fire spreading direction according to the trace type;
the decision module is used for deciding the cause of the fire according to the fire point and the fire spreading direction;
and the sending module is used for sending the fire cause to a user terminal so as to instruct the user terminal to display the fire cause to the user.
9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A readable storage medium, storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any of claims 1 to 7.
CN202110995100.2A 2021-08-27 2021-08-27 Fire cause determination method, device, server and readable storage medium Pending CN113674423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110995100.2A CN113674423A (en) 2021-08-27 2021-08-27 Fire cause determination method, device, server and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110995100.2A CN113674423A (en) 2021-08-27 2021-08-27 Fire cause determination method, device, server and readable storage medium

Publications (1)

Publication Number Publication Date
CN113674423A (en) 2021-11-19

Family

ID=78546900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110995100.2A Pending CN113674423A (en) 2021-08-27 2021-08-27 Fire cause determination method, device, server and readable storage medium

Country Status (1)

Country Link
CN (1) CN113674423A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination