CN118097058A - Fire point positioning method based on digital twinning - Google Patents

Fire point positioning method based on digital twinning

Info

Publication number
CN118097058A
CN118097058A
Authority
CN
China
Prior art keywords
scene
fire
flame
model
smoke
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410513076.8A
Other languages
Chinese (zh)
Other versions
CN118097058B (en)
Inventor
郭晨
李奕昕
王玮
张圣朝
马小明
吴迪
石少杰
王连铁
孟庆山
刘术军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Fire Research Institute of MEM
Original Assignee
Shenyang Fire Research Institute of MEM
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Fire Research Institute of MEM
Priority to CN202410513076.8A
Publication of CN118097058A
Application granted
Publication of CN118097058B
Legal status: Active

Landscapes

  • Fire-Detection Mechanisms (AREA)

Abstract

The invention relates to a fire point positioning method based on digital twinning, belonging to the field of fire scene investigation. The method addresses two problems: that staff without professional modeling skills cannot quickly model a fire scene, and that when the fire point was not captured directly on camera and the scene is severely damaged, the fire point cannot be determined through on-site simulation experiments based on surveillance video footage. It greatly improves the ability to assist in locating the fire point.

Description

Fire point positioning method based on digital twinning
Technical Field
The invention belongs to the field of fire scene investigation and specifically relates to a three-dimensional fire scene modeling method, digital twin software and technology for reproducing fire scene camera footage, and a digital twin method for locating the fire point.
Background
Surveillance video analysis is widely used in current fire investigation. The process typically involves a large number of on-site simulation experiments that determine the fire point range from historical experience. This approach depends heavily on what remains of the fire scene: when the surviving walls and debris are insufficient to reconstruct the light and shadow conditions before the fire, the fire point range is difficult to determine. On the other hand, analyzing only the surveillance footage discards a large amount of useful light and shadow information, and because the analysis is mainly experience-based, it is not accurate.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a fire point positioning method based on digital twinning. Basic object models, such as electric bicycles and automobiles, are preset and placed quickly into the scene by entering coordinates within the software; key fire factors, including flame and smoke, are also placed in the scene; and a scene positioning device and a virtual camera are used to reproduce the fire surveillance video footage and calibrate scene positions, yielding the fire point range.
The invention is realized by the following technical scheme. The fire point positioning method based on digital twinning comprises the following steps:
1) Video acquisition and analysis: fire investigators copy the recordings shot by the surveillance equipment around the fire scene to a computer and search for the light and shadow caused by firelight at the moment of ignition, forming an initial picture of the fire scene.
2) Drawing a site plan and recording heights: all buildings or indoor objects within the surveillance camera's field of view are plotted on the plan and their heights recorded.
3) Digital models are made of all the buildings or indoor objects plotted on the plan, their dimensions are entered into the virtual scene, and the digital twin of the fire scene is completed, forming a three-dimensional model scene of the fire scene.
In step 3), the digital models are made in the virtual scene as follows: after the empty scene is loaded, the preset object model switch is opened in the user interface; size, position, and angle parameters consistent with the fire scene are entered; the models are loaded through the virtual engine; the position, angle, and wide-angle parameters of each camera module are loaded; and the view is switched to the camera to be analyzed and saved.
In step 3), the digital twin consists of data such as the sizes and positions of physical models formed from the objects in the fire scene; these data are entered into the software and combined with a simulation of flame flicker, completing the mapping into the virtual engine space and reproducing the periodic process by which the fire point forms flickering light and shadow in the scene.
4) Flame and smoke are added to the three-dimensional model scene and their position coordinates set; the resulting light and shadow are compared with the surveillance video footage as a first check, and flame and smoke positions whose light and shadow visibly differ from the footage are deleted, yielding an initial virtual flame and smoke data set.
5) The virtual flame and smoke obtained in step 4) are moved, their position and intensity adjusted, and differences from the surveillance footage checked again; adjusted positions and intensities that are indistinguishable from the footage are added to the data set, yielding the complete virtual flame and smoke data set, which is entered into the model scene.
6) Using a computer, the abnormal brightness and shadows formed by the flame and smoke in the model scene of step 5) are superimposed onto the corresponding positions in the surveillance video through opacity modification; how well they coincide is observed, and flame and smoke positions that do not coincide with the video or are unreasonable are eliminated, yielding the final virtual flame and smoke coordinate data set.
In step 6), the opacity-based superposition is performed as follows: with the initial ignition frame as the background, the contrast of the abnormally bright areas in the background is increased and the overall transparency reduced; the computer-rendered frame is placed on top, overlapping the frame produced by the camera module; the scene model positions are checked for correctness; and it is then observed whether the abnormal brightness and shadows formed by the flame and smoke coincide.
7) All position ranges in the final virtual flame and smoke data set that can form the abnormal brightness and shadows in the model scene are marked on the plan, the corresponding height at each plan position is recorded, and the fire point range is finally determined.
The beneficial effects of the invention are mainly the following:
1) Using a virtual engine and digital twin technology solves the problem that camera modules cannot be added to a three-dimensionally scanned scene, which therefore cannot be viewed from different angles or from the specific angle of an on-site camera.
2) Geometric modeling simplifies the modeling process, allowing fire investigators to model quickly and analyze the fire point range conveniently.
3) The fire scene, together with the footage shot by its cameras, can be reproduced in a very short time, so that the fire point range can be judged in the large number of fire cases where the fire point cannot be observed directly, making case investigation possible.
4) Construction of the fire scene model is greatly accelerated, and staff without professional modeling skills can model a fire scene quickly.
Drawings
Fig. 1: a picture shot by a fire scene monitoring camera;
Fig. 2: the fire scene comprises a plan view of a camera point bitmap;
fig. 3: 007 in-counter layout in FIG. 2;
Fig. 4: a top view of a scene of a fire scene after digital twinning;
Fig. 5: adding default intensity and a top view light shadow situation diagram formed after the fire light of the first position in the scene;
fig. 6: a top view light shadow situation diagram formed after adding default intensity and fire light at a second position in the scene;
fig. 7: a top view light shadow situation diagram formed after adding default intensity and fire light at a third position in the scene;
fig. 8: a light and shadow situation diagram generated by the fire light at the first position under the visual angle of the virtual camera;
Fig. 9: a light and shadow situation diagram generated by the fire light at the second position under the visual angle of the virtual camera;
fig. 10: a light and shadow situation diagram generated by fire light at a third position under the visual angle of the virtual camera;
Fig. 11: the virtual camera and the scene of the fire scene camera have different transparency superposition graphs;
fig. 12: the red mark position is a position diagram of a range in which a fire point possibly exists in a plan view;
Fig. 13: a digital modeling process flow diagram in a virtual scene;
Fig. 14: the opacity collation and fire point location determination dataset flow chart is modified.
Detailed Description
Example 1: the software used in the invention is the software which is developed and registered by the company, and the software name is a fire checking ignition point auxiliary analysis system, version number V1.0, software registration number 2024SR0169834 and certificate number 12573707.
The fire point positioning method based on digital twinning comprises the following steps:
1) Video acquisition and analysis: fire investigators copy the recordings shot by the surveillance equipment around the fire scene to a computer and search for the light and shadow caused by firelight at the moment of ignition, forming an initial picture of the fire scene.
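The light-and-shadow search in step 1) can be sketched as a scan for a sudden jump in frame brightness. This is an illustrative sketch, not the patent's software: frames are modeled as plain brightness arrays (in practice they would be decoded from the surveillance recording), and the function name and threshold are assumptions.

```python
import numpy as np

def find_ignition_frame(frames, threshold=20.0):
    """Return the index of the first frame whose mean brightness rises by
    more than `threshold` over the previous frame, or None if no such
    jump occurs."""
    means = [float(np.mean(f)) for f in frames]
    for i in range(1, len(means)):
        if means[i] - means[i - 1] > threshold:
            return i
    return None

# Synthetic clip: five dim frames, then firelight brightens the scene.
clip = [np.full((4, 4), 30.0) for _ in range(5)]
clip += [np.full((4, 4), 90.0) for _ in range(3)]
print(find_ignition_frame(clip))  # 5
```

The frame found this way would serve as the "initial ignition frame" used as the background in the later opacity superposition.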
2) Drawing a site plan and recording heights: all buildings or indoor objects within the surveillance camera's field of view are plotted on the plan and their heights recorded.
3) Digital models are made of all the buildings or indoor objects plotted on the plan, their dimensions are entered into the virtual scene, and the digital twin of the fire scene is completed, forming a three-dimensional model scene of the fire scene.
The specific method for making the digital models in the virtual scene is shown in Fig. 13. After the empty scene is loaded, open the preset object model switch in the user interface (in the interface of Fig. 4, click the selector in front of the block 1 control); enter the size, position, and angle parameters consistent with the fire scene (the seven numeric controls behind block 1 in Fig. 4); load the preset model through the virtual engine; load the position, angle, and wide-angle parameters of each camera module (enter the position, angle, and focal length into the three controls on the right side of the interface in Fig. 4), then click the camera control to switch to the camera frame to be analyzed; finally, click "save" in the lower right corner of Fig. 4 to finish.
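The parameter entry described above (size, position, and angle for each preset model; position, angle, and focal length for each camera) amounts to building a structured scene description. A minimal sketch follows; all class and field names are assumptions for illustration, not taken from the patent's software.

```python
from dataclasses import dataclass, field

@dataclass
class BoxModel:
    name: str
    size: tuple       # (length, width, height), mm
    position: tuple   # (x, y, z) scene coordinates, mm
    angle: float      # rotation about the vertical axis, degrees

@dataclass
class VirtualCamera:
    position: tuple   # (x, y, z) scene coordinates, mm
    angle: tuple      # (pitch, yaw, roll), degrees
    focal_length: float

@dataclass
class FireScene:
    models: list = field(default_factory=list)
    cameras: list = field(default_factory=list)

    def add_model(self, model: BoxModel) -> None:
        self.models.append(model)

    def add_camera(self, camera: VirtualCamera) -> None:
        self.cameras.append(camera)

# Usage: two cuboid counters and one camera; all dimensions are invented.
scene = FireScene()
scene.add_model(BoxModel("freezer", (1800, 700, 900), (1200, 300, 0), 0.0))
scene.add_model(BoxModel("meat table", (1500, 800, 850), (2500, 1000, 0), 90.0))
scene.add_camera(VirtualCamera((0, 0, 2600), (30.0, 45.0, 0.0), 4.0))
```

Such a description is what the virtual engine would consume to instantiate the cuboid models and camera views.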
The digital twin consists of data such as the sizes and positions of physical models formed from the objects in the fire scene; these data are entered into the software and combined with a simulation of flame flicker, completing the mapping into the virtual engine space and reproducing the periodic process by which the fire point forms flickering light and shadow in the scene.
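The periodic flicker of firelight mentioned above can be approximated by a simple periodic intensity model. This is a hedged sketch assuming a sinusoidal flicker; the function name and the depth and frequency values are illustrative, not values from the patent.

```python
import math

def flicker_intensity(t: float, base: float = 1.0,
                      depth: float = 0.3, freq: float = 8.0) -> float:
    """Periodic flame-flicker model: light intensity oscillates about
    `base` with relative depth `depth` at `freq` cycles per second."""
    return base * (1.0 + depth * math.sin(2.0 * math.pi * freq * t))

# Intensity over one flicker cycle (freq = 8 Hz, so the period is 0.125 s).
samples = [round(flicker_intensity(k / 32.0), 3) for k in range(4)]
print(samples)  # [1.0, 1.3, 1.0, 0.7]
```

Driving the virtual flame's light source with such a function would reproduce the periodic light-and-shadow jumps described for the fire point.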
4) Flame and smoke are added to the three-dimensional model scene and their position coordinates set; the resulting light and shadow are compared with the surveillance video footage as a first check, and flame and smoke positions whose light and shadow visibly differ from the footage are deleted, yielding an initial virtual flame and smoke data set.
For example, in Fig. 6, a left click on the "flame" item on the left side of the interface loads the flame module, and entering data into the boxes to the right of "flame" adds a flame to the virtual scene. The flame casts light and shadow on different objects in the scene; whether the flame lies within a possible fire point range is judged by whether this light and shadow is consistent with the actual fire video footage. All possible fire point positions form a coordinate data set, from which the fire point range is determined.
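The consistency judgment above, whether the light and shadow cast by a candidate flame match the actual fire video, can be sketched as a comparison of binary "abnormal brightness" masks. The overlap metric (intersection over union) and its threshold are illustrative assumptions, not the patent's criterion, which relies on an investigator's visual check.

```python
import numpy as np

def masks_consistent(rendered: np.ndarray, observed: np.ndarray,
                     min_iou: float = 0.5) -> bool:
    """Compare the abnormally bright region a candidate flame produces in
    the model scene (rendered) with the bright region in the surveillance
    frame (observed), both boolean masks; keep the candidate if the
    overlap (intersection over union) is high enough."""
    inter = np.logical_and(rendered, observed).sum()
    union = np.logical_or(rendered, observed).sum()
    return bool(union > 0 and inter / union >= min_iou)

# Toy masks: candidate A overlaps the observed bright region, B does not.
observed = np.array([[True, True], [True, False]])
cand_a = np.array([[True, True], [False, False]])
cand_b = np.array([[False, False], [False, True]])
print(masks_consistent(cand_a, observed))  # True  (IoU = 2/3)
print(masks_consistent(cand_b, observed))  # False (IoU = 0)
```

Candidates that pass the check would be retained in the coordinate data set; the others are deleted, as in step 4).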
5) The virtual flame and smoke obtained in step 4) are moved, their position and intensity adjusted, and differences from the surveillance footage checked again; adjusted positions and intensities that are indistinguishable from the footage are added to the data set, yielding the complete virtual flame and smoke data set, which is entered into the model scene.
6) Using a computer, the abnormal brightness and shadows formed by the flame and smoke in the model scene of step 5) are superimposed onto the corresponding positions in the surveillance video through opacity modification; how well they coincide is observed, and flame and smoke positions that do not coincide with the video or are unreasonable are eliminated, yielding the final virtual flame and smoke coordinate data set.
The method of superimposing the abnormal brightness and shadows formed by the flame and smoke in the model scene onto the corresponding positions in the surveillance video through opacity modification is shown in Fig. 14: with the initial ignition frame as the background, the contrast of the abnormally bright areas in the background is increased and the overall transparency reduced; the computer-rendered frame is placed on top, overlapping the frame produced by the camera module; the scene model positions are checked for correctness; and it is then observed whether the abnormal brightness and shadows formed by the flame and smoke coincide.
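The opacity superposition of Fig. 14 is essentially contrast boosting followed by alpha blending. A minimal sketch under that reading; the function name and parameter values are assumptions, not taken from the patent's software.

```python
import numpy as np

def overlay(video_frame: np.ndarray, rendered_frame: np.ndarray,
            alpha: float = 0.5, contrast: float = 1.5) -> np.ndarray:
    """Increase the contrast of the surveillance frame (the background),
    then alpha-blend the virtual-camera frame on top, so that mismatched
    bright and shadowed regions stand out in the superposition."""
    bg = np.clip(video_frame.astype(float) * contrast, 0, 255)
    out = (1.0 - alpha) * bg + alpha * rendered_frame.astype(float)
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy 2x2 frames: uniform grey surveillance frame, darker rendered frame.
video = np.full((2, 2), 100, dtype=np.uint8)
rendered = np.full((2, 2), 50, dtype=np.uint8)
print(overlay(video, rendered))  # every pixel: 0.5*150 + 0.5*50 = 100
```

Varying `alpha` reproduces the "different transparencies" comparison of Fig. 11.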
7) All position ranges in the final virtual flame and smoke data set that can form the abnormal brightness and shadows in the model scene are marked on the plan, the corresponding height at each plan position is recorded, and the fire point range is finally determined.
As shown in Fig. 6, clicking the "save" button on the interface writes the coordinate set to the system background; finally, all coordinate points in the background file are read to form a set, marked manually on the plan, and the height data of the possible range recorded.
Specific: according to the above concept, the flow of the present embodiment is described as follows:
a) Aiming at the step 1), intercepting a key frame of a fire scene monitoring video as shown in fig. 1, wherein the red circle marking position is a position for generating abnormal brightness, and the visible abnormal brightness position is positioned in a key frame picture.
B) For step 2), a fire scene plan is drawn, as shown in fig. 2 and 3, wherein fig. 2 is a plan view of the fire scene including a camera point bitmap, and fig. 3 is an in-counter layout of 007 in fig. 2.
C) Aiming at the step 3), the refrigerator, the horizontal refrigerator, the vertical fresh-keeping cabinet and the iron frame meat table in the scene in fig. 3 are manufactured into rectangular cube models, are placed in the virtual scene one by one according to the input coordinates of the positions of the plan views, are placed with the virtual scene cameras, and are input with the coordinates and angles of the cameras, as shown in fig. 4.
D) For step 4), adding a position where a fire is likely to occur in the scene, and adjusting the flame size, as shown in fig. 5 and 6 and 7; the camera of the virtual scene is used for observing different light and shadow effects generated by the camera, as shown in fig. 8-10, the abnormal brightness generated at the position of fig. 10 is basically consistent with the abnormal brightness shown in fig. 1, the abnormal brightness generated at the positions of fig. 8 and 9 is inconsistent with the light and shadow situation shown in fig. 1, and obvious redundant brightness exists at the ground marking position.
E) For step 6), fig. 10 is superimposed with fig. 1 by different transparency, as shown in fig. 11.
F) For steps 5) and 7), the aggregate is all possible to form the fire location of the picture of fig. 1, the final fire range being ABCDE forming a plan view as shown in fig. 12, with a height of between 700mm and 870 mm.
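The final aggregation in step F), collapsing the retained flame coordinates into a plan extent plus a height interval, can be sketched as a bounding-extent computation. The x and y coordinates below are invented for illustration; only the 700-870 mm height interval comes from the embodiment.

```python
def fire_point_range(points):
    """Collapse the retained virtual-flame coordinates (x, y, z in mm)
    into the plan extent plus the height interval of the fire point."""
    xs, ys, zs = zip(*points)
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

# Illustrative retained coordinates read back from the background file.
pts = [(1200, 300, 700), (1500, 450, 870), (1350, 400, 760)]
print(fire_point_range(pts))  # ((1200, 1500), (300, 450), (700, 870))
```

In the embodiment the plan region is the polygon ABCDE of Fig. 12 rather than a rectangle; the bounding extent here is only the simplest possible summary of the coordinate set.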

Claims (4)

1. A fire point positioning method based on digital twinning, characterized by comprising the following steps:
1) Video acquisition and analysis: fire investigators copy the recordings shot by the surveillance equipment around the fire scene to a computer and search for the light and shadow caused by firelight at the moment of ignition, forming an initial picture of the fire scene;
2) Drawing a site plan and recording heights: plotting all buildings or indoor objects within the surveillance camera's field of view on the plan and recording their heights;
3) Making digital models of all the buildings or indoor objects plotted on the plan, entering their dimensions into the virtual scene, and completing the digital twin of the fire scene to form a three-dimensional model scene of the fire scene;
4) Adding flame and smoke to the three-dimensional model scene, setting their position coordinates, comparing the resulting light and shadow with the surveillance video footage as a first check, and deleting flame and smoke positions whose light and shadow visibly differ from the footage, obtaining an initial virtual flame and smoke data set;
5) Moving the virtual flame and smoke obtained in step 4), adjusting their position and intensity, checking again for differences from the surveillance footage, and adding the adjusted positions and intensities that are indistinguishable from the footage to the data set, obtaining the complete virtual flame and smoke data set, which is entered into the model scene;
6) Using a computer to superimpose the abnormal brightness and shadows formed by the flame and smoke in the model scene of step 5) onto the corresponding positions in the surveillance video through opacity modification, observing how well they coincide, and eliminating flame and smoke positions that do not coincide with the video or are unreasonable, obtaining the final virtual flame and smoke coordinate data set;
7) Marking on the plan all position ranges in the final virtual flame and smoke data set that can form the abnormal brightness and shadows in the model scene, recording the corresponding height at each plan position, and finally determining the fire point range.
2. The fire point positioning method based on digital twinning according to claim 1, characterized in that in step 3), the digital models are made in the virtual scene as follows: after the empty scene is loaded, the preset object model switch is opened in the user interface; size, position, and angle parameters consistent with the fire scene are entered; the models are loaded through the virtual engine; the position, angle, and wide-angle parameters of each camera module are loaded; and the view is switched to the camera to be analyzed and saved.
3. The fire point positioning method based on digital twinning according to claim 1, characterized in that in step 3), the digital twin consists of data such as the sizes and positions of physical models formed from the objects in the fire scene; these data are entered into the software and combined with a simulation of flame flicker, completing the mapping into the virtual engine space and reproducing the periodic process by which the fire point forms flickering light and shadow in the scene.
4. The fire point positioning method based on digital twinning according to claim 1, characterized in that in step 6), the superposition of the abnormal brightness and shadows formed by the flame and smoke in the model scene onto the corresponding positions in the surveillance video through opacity modification is performed as follows: with the initial ignition frame as the background, the contrast of the abnormally bright areas in the background is increased and the overall transparency reduced; the computer-rendered frame is placed on top, overlapping the frame produced by the camera module; the scene model positions are checked for correctness; and it is then observed whether the abnormal brightness and shadows formed by the flame and smoke coincide.
CN202410513076.8A 2024-04-26 Fire point positioning method based on digital twinning Active CN118097058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410513076.8A CN118097058B (en) 2024-04-26 Fire point positioning method based on digital twinning

Publications (2)

Publication Number Publication Date
CN118097058A true CN118097058A (en) 2024-05-28
CN118097058B CN118097058B (en) 2024-07-09

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116307740A (en) * 2023-05-16 2023-06-23 苏州和歌信息科技有限公司 Fire point analysis method, system, equipment and medium based on digital twin city
US20240037295A1 (en) * 2022-07-29 2024-02-01 Electronics And Telecommunications Research Institute Authoring system and method for high-precision fire-dynamics-simulation
CN117876874A (en) * 2024-01-15 2024-04-12 西南交通大学 Forest fire detection and positioning method and system based on high-point monitoring video

Non-Patent Citations (3)

Title
洪洋: ""某双层岛式地铁车站火灾烟气蔓延规律研究"", 《中国优秀硕士学位论文全文数据库(工程科技Ⅱ辑)》, 15 March 2019 (2019-03-15), pages 1 - 79 *
王连铁 等: ""视频侦查技术在火灾现场勘查中的应用"", 《2019中国消防协会科学技术年会论文集》, 4 December 2019 (2019-12-04), pages 412 - 414 *
鄂大志: ""火灾现场起火部位辅助识别系统"", pages 1 - 3, Retrieved from the Internet <URL:http://www.doc88.com/p-6951338825079.html> *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant