CN117689802A - Fire escape simulation method, device, server and computer readable storage medium - Google Patents

Fire escape simulation method, device, server and computer readable storage medium

Info

Publication number
CN117689802A
CN117689802A (application number CN202211068814.XA)
Authority
CN
China
Prior art keywords
dimensional model
fire
point
generating
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211068814.XA
Other languages
Chinese (zh)
Inventor
崔岩
钟汉明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Guangdong Siwei Kanan Intelligent Equipment Co ltd
4Dage Co Ltd
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Guangdong Siwei Kanan Intelligent Equipment Co ltd
4Dage Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, Guangdong Siwei Kanan Intelligent Equipment Co ltd, 4Dage Co Ltd filed Critical China Germany Zhuhai Artificial Intelligence Institute Co ltd
Priority to CN202211068814.XA
Publication of CN117689802A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/26: Government or public services
    • G06Q 50/265: Personal security, identity or safety
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/003: Navigation within 3D models or images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Human Resources & Organizations (AREA)
  • Remote Sensing (AREA)
  • Development Economics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a fire escape simulation method, device, server and computer readable storage medium, wherein the method comprises the following steps: acquiring a three-dimensional model of a scene to be simulated; marking basic building information of the three-dimensional model; randomly generating an animated character in the three-dimensional model; taking the virtual position point where the animated character is located as a fire point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spread and the obstacle positions as influencing factors, and taking the exit point as the end point, generating and marking an escape route in the three-dimensional model based on a preset path planning algorithm; and generating video information of the animated character moving along the escape route, and sending the video information to the user terminal so as to instruct the user terminal to play the video information to the user. The application can thus automatically generate flames, demonstrate the escape process of the animated character, and realistically display the fire escape process for the user to watch.

Description

Fire escape simulation method, device, server and computer readable storage medium
Technical Field
The application belongs to the technical field of image processing, and in particular relates to a fire escape simulation method, device, server and computer readable storage medium.
Background
Fire safety training allows trainees to conduct escape drills in a simulated environment, improving their fire-safety awareness and self-rescue capability. However, current fire safety training merely restores scenes manually, so trainees cannot realistically experience the fire escape process.
Disclosure of Invention
The embodiments of the present application provide a fire escape simulation method, device, server and computer readable storage medium, which can solve the problem that fire safety training in the prior art merely restores scenes manually and cannot let trainees realistically experience the fire escape process.
In a first aspect, an embodiment of the present application provides a fire escape simulation method, including:
acquiring a three-dimensional model of a scene to be simulated;
marking basic building information of the three-dimensional model; wherein the basic building information includes exit points and obstacle locations;
generating flame particles on a target object in the three-dimensional model; wherein, the flame particles are correspondingly provided with a flame spreading direction and a smoke spreading direction;
randomly generating an animated character in said three-dimensional model;
taking the virtual position point where the animated character is located as a fire point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spread and the obstacle positions as influencing factors, and taking the exit point as the end point, generating and marking an escape route in the three-dimensional model based on a preset path planning algorithm;
and generating video information of the animated character moving along the escape route, and sending the video information to a user terminal so as to instruct the user terminal to play the video information to a user.
In a possible implementation manner of the first aspect, obtaining a three-dimensional model of a scene to be simulated includes:
acquiring an image to be processed; the image to be processed is obtained by shooting the scene to be simulated with a depth camera;
generating a point cloud according to the image to be processed;
and processing the point cloud to obtain a three-dimensional model.
In a possible implementation manner of the first aspect, the processing the point cloud to obtain a three-dimensional model includes:
smoothing the point cloud;
and (3) carrying out geometric structure recovery on the point cloud after the smoothing treatment to obtain a three-dimensional model.
In a possible implementation manner of the first aspect, before generating the flame particles on the target object in the three-dimensional model, the method further includes:
and determining the target object in the three-dimensional model.
In a second aspect, an embodiment of the present application provides a fire escape simulation device, including:
the acquisition module is used for acquiring a three-dimensional model of a scene to be simulated;
the marking module is used for marking basic building information of the three-dimensional model; wherein the basic building information includes exit points and obstacle locations;
a first generation module for generating flame particles on a target object in the three-dimensional model; wherein, the flame particles are correspondingly provided with a flame spreading direction and a smoke spreading direction;
a second generation module for randomly generating an animated character in the three-dimensional model;
the path planning module is used for taking the virtual position point where the animated character is located as a fire point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spread and the obstacle positions as influencing factors, and taking the exit point as the end point, and generating and marking an escape route in the three-dimensional model based on a preset path planning algorithm;
and the third generation module is used for generating video information of the animated character moving along the escape route and sending the video information to a user terminal so as to instruct the user terminal to play the video information to a user.
In a third aspect, embodiments of the present application provide a server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the first aspect described above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a method as described in the first aspect above.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
the method comprises the steps of obtaining a three-dimensional model of a scene to be simulated; marking basic building information of the three-dimensional model; wherein the basic building information includes exit points and obstacle locations; generating flame particles on a target object in the three-dimensional model; wherein, the flame particles are correspondingly provided with a flame spreading direction and a smoke spreading direction; randomly generating an animated character in the three-dimensional model; taking the virtual position point of the animated character as a fire point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spreading and the obstacle position as influencing factors, taking the outlet point as an end point, and generating and marking an escape route in the three-dimensional model based on a preset path planning algorithm; and generating video information of the motion of the animated character along the escape route, and sending the video information to the user terminal so as to instruct the user terminal to play the video information to the user. Therefore, the method and the device can automatically generate flame, demonstrate the escape process of the animated figure, and truly display the fire escape process for a user to watch.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a fire escape simulation method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a fire escape simulation device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The technical scheme provided by the embodiment of the application will be described through a specific embodiment.
Referring to fig. 1, which is a schematic flow chart of the fire escape simulation method provided by an embodiment of the present application, the method may be applied to a server and may include the following steps:
step S101, a three-dimensional model of a scene to be simulated is obtained.
In a specific application, obtaining a three-dimensional model of a scene to be simulated includes:
step S201, a to-be-processed image is acquired.
The image to be processed is an image obtained by shooting a scene to be simulated by a depth camera.
Step S202, generating a point cloud according to the image to be processed.
A laser collects the depth image information and a camera collects the color image information; the two are calibrated against each other and stored as RGBD panoramic images. Local polynomial regression is performed on the depth images to repair missing and erroneous pixels in the laser-collected depth data. The camera pose is then obtained with an SFM (structure-from-motion) algorithm, and the multiple depth images are fused by re-projection probability fusion to obtain the point cloud.
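The local-polynomial repair step can be sketched as follows. This is a minimal illustration rather than the claimed implementation: it assumes the depth image is a NumPy array in which missing or erroneous pixels are marked as zero, and fills each hole with a local least-squares plane fit (a first-order local polynomial regression) over valid neighbors.

```python
import numpy as np

def repair_depth(depth, window=2):
    """Fill missing (zero) depth pixels by a local plane fit over valid
    neighbors, a simplified stand-in for the local polynomial regression
    repair step described above."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y, x in zip(*np.where(depth == 0)):
        y0, y1 = max(0, y - window), min(h, y + window + 1)
        x0, x1 = max(0, x - window), min(w, x + window + 1)
        patch = depth[y0:y1, x0:x1]
        ys, xs = np.nonzero(patch)
        if len(ys) < 3:
            continue  # too few valid neighbors to fit a plane
        # least-squares fit of z = a*y + b*x + c over valid neighbors
        A = np.column_stack([ys, xs, np.ones(len(ys))])
        coef, *_ = np.linalg.lstsq(A, patch[ys, xs].astype(float), rcond=None)
        out[y, x] = coef @ [y - y0, x - x0, 1.0]
    return out
```

A higher-order polynomial or a robust fit could be substituted where the depth surface is curved or noisy.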
And step S203, processing the point cloud to obtain a three-dimensional model.
Illustratively, processing the point cloud to obtain a three-dimensional model includes:
step S301, performing smoothing processing on the point cloud.
In a specific application, the point cloud is smoothed using non-parametric regression to estimate normal vectors and scale. A non-parametric method is a smoothing technique whose form is not known in advance but is generated from the samples themselves; common methods include the nearest-neighbor method and kernel-function methods.
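One concrete instance of such a kernel-function method is the Gaussian-kernel (Nadaraya-Watson) average sketched below; the bandwidth is an assumed tuning parameter, and the actual pipeline may use a different non-parametric estimator.

```python
import numpy as np

def kernel_smooth(points, bandwidth=0.1):
    """Smooth an (n, 3) point cloud with a Gaussian-kernel estimate:
    each point is replaced by the kernel-weighted mean of all points.
    A minimal sketch of the non-parametric smoothing step."""
    # pairwise squared distances between all points, shape (n, n)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))          # Gaussian kernel weights
    return (w @ points) / w.sum(axis=1, keepdims=True)  # weighted averages
```

The O(n^2) pairwise computation is only suitable for small clouds; a k-d tree neighbor query would replace it at scale.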
And step S302, performing geometric structure recovery on the point cloud after the smoothing treatment to obtain a three-dimensional model.
In a specific application, the MVE (Multi-View Environment) algorithm is used to extract surface triangles from the point cloud and store them as an original mesh model. Clustering and face-reduction operations are then performed on the original mesh model, which greatly reduces the number of triangles and produces a simplified model with the same shape but a smaller data size; finally, parameterization and visibility determination are performed to generate a 3D model with a color map.
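The clustering idea behind the face-reduction step can be illustrated by simple vertex clustering, which snaps vertices to a regular grid and merges those sharing a cell. The actual MVE-based pipeline is considerably more involved, so this is only a sketch with an assumed cell size.

```python
import numpy as np

def cluster_decimate(vertices, cell=0.5):
    """Simplify an (n, 3) vertex set by vertex clustering: vertices falling
    into the same grid cell are merged into their centroid. A rough sketch
    of the clustering/face-reduction idea described above."""
    keys = np.floor(vertices / cell).astype(int)       # grid cell per vertex
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    merged = np.zeros((len(uniq), vertices.shape[1]))
    for dim in range(vertices.shape[1]):
        # centroid of each cluster, one coordinate at a time
        merged[:, dim] = np.bincount(inverse, weights=vertices[:, dim]) / counts
    return merged, inverse  # merged vertices and old-to-new index map
```

The index map would be used to remap triangle indices, after which degenerate triangles (all corners in one cluster) are dropped.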
And step S102, marking basic building information of the three-dimensional model.
Wherein the basic building information includes exit points and obstacle locations.
In a specific application, the three-dimensional model of the current scene is input into a pre-trained semantic segmentation neural network model (for example, a UNet neural network model) to identify basic building information of the three-dimensional model of the current scene. The pre-trained semantic segmentation neural network model can be obtained through training according to an open source data set.
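Downstream of the segmentation network, the predicted labels must be turned into the exit points and obstacle locations used for marking. The sketch below is purely illustrative post-processing with hypothetical class ids; the real label set depends on the network's training data.

```python
import numpy as np

# Hypothetical class ids a trained segmentation network might emit;
# these are assumptions for illustration, not the application's labels.
EXIT_CLASS = 3
OBSTACLE_CLASSES = {1, 2}

def extract_building_info(label_map):
    """Turn a per-cell semantic label map (as a segmentation network would
    output) into exit points and obstacle locations for marking the
    three-dimensional model."""
    exits = [tuple(p) for p in np.argwhere(label_map == EXIT_CLASS)]
    obstacles = [tuple(p) for p in
                 np.argwhere(np.isin(label_map, list(OBSTACLE_CLASSES)))]
    return exits, obstacles
```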
Step S103, generating flame particles on the target object in the three-dimensional model.
Optionally, before generating the flame particles on the target object in the three-dimensional model, the method further comprises:
a target item in the three-dimensional model is determined.
Illustratively, an item corresponding to a click position of the user in the three-dimensional model is taken as a target item.
The flame particles are correspondingly provided with a flame spreading direction and a smoke spreading direction, and the types of flame particles include common flames, burning flames and jet flames. Preferably, the flame particle properties, including the flame multiplier, the flame size and the smoke multiplier, can be set automatically.
In a specific application, particles are emitted within a preset time and a preset area by a particle system to generate the initial flame particles, and the flame particle attributes are initialized. A flame particle has several attributes, mainly color, velocity, size and transparency; a flame is composed of flames, sparks and smoke, different particles show different colors and transparencies, and the stage of the fire affects the color and transparency of the flame particles.
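A minimal particle-system sketch with the attributes listed above (color, velocity, size, transparency); all numeric values are illustrative assumptions rather than the application's actual settings.

```python
import random

class FlameParticle:
    """One particle of a simple flame system; kind is "flame", "spark"
    or "smoke". Values are illustrative placeholders."""
    def __init__(self, kind="flame"):
        self.kind = kind
        # upward-biased initial velocity (x, y, z)
        self.velocity = [random.uniform(-0.1, 0.1),
                         random.uniform(0.5, 1.5),
                         random.uniform(-0.1, 0.1)]
        self.size = {"flame": 0.3, "spark": 0.05, "smoke": 0.5}[kind]
        self.color = {"flame": (255, 120, 0),
                      "spark": (255, 220, 80),
                      "smoke": (80, 80, 80)}[kind]
        self.alpha = {"flame": 0.9, "spark": 1.0, "smoke": 0.4}[kind]
        self.age = 0.0

    def step(self, dt, fire_stage=1.0):
        """Advance the particle; fire_stage scales the transparency fade,
        mirroring the note that the stage of the fire affects color and
        transparency."""
        self.age += dt
        self.alpha = max(0.0, self.alpha - 0.1 * dt * fire_stage)

def emit(n, kinds=("flame", "spark", "smoke")):
    """Emit n particles of random kinds (positions/area handling omitted)."""
    return [FlameParticle(random.choice(kinds)) for _ in range(n)]
```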
Step S104, randomly generating an animation character in the three-dimensional model.
Step S105, taking the virtual position point where the animated character is located as a fire point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spread and the obstacle positions as influencing factors, and taking the exit point as the end point, an escape route is generated and marked in the three-dimensional model based on a preset path planning algorithm.
The preset path planning algorithm includes, but is not limited to, Dijkstra's algorithm, the A* algorithm, the D* algorithm, the LPA* algorithm and the D* Lite algorithm.
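As an illustration of folding these influencing factors into a preset path planning algorithm, the sketch below runs A* over a 2-D occupancy grid and inflates the step cost near the fire point and in smoke-filled cells; the grid abstraction, the weights and the smoke model are all assumptions for the example.

```python
import heapq

def plan_escape(grid, start, exit_pt, fire_pt, smoke,
                fire_weight=4.0, smoke_weight=2.0):
    """A* where, beyond path length, a cell costs more when it is close to
    the fire point or smoke-filled. grid[y][x] == 1 marks an obstacle;
    smoke[y][x] is a 0..1 smoke density; weights are assumed tunables."""
    h = lambda p: abs(p[0] - exit_pt[0]) + abs(p[1] - exit_pt[1])  # Manhattan
    frontier = [(h(start), start)]
    came, cost = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == exit_pt:                    # reconstruct marked route
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        y, x = cur
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                continue
            if grid[ny][nx]:                  # obstacle position
                continue
            fire_d = abs(ny - fire_pt[0]) + abs(nx - fire_pt[1])
            # base move cost plus fire-proximity and smoke penalties
            step = 1.0 + fire_weight / (1.0 + fire_d) + smoke_weight * smoke[ny][nx]
            new = cost[cur] + step
            if (ny, nx) not in cost or new < cost[(ny, nx)]:
                cost[(ny, nx)] = new
                came[(ny, nx)] = cur
                heapq.heappush(frontier, (new + h((ny, nx)), (ny, nx)))
    return None  # no route to the exit point
```

A spreading-direction factor could be added by making the smoke and fire penalties time-dependent along the candidate path.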
Step S106, generating video information of the motion of the animated character along the escape route, and sending the video information to the user terminal so as to instruct the user terminal to play the video information to the user.
In the embodiments of the present application, a three-dimensional model of a scene to be simulated is obtained; basic building information of the three-dimensional model is marked, wherein the basic building information includes exit points and obstacle locations; flame particles are generated on a target object in the three-dimensional model, wherein the flame particles are correspondingly provided with a flame spreading direction and a smoke spreading direction; an animated character is randomly generated in the three-dimensional model; taking the virtual position point where the animated character is located as a fire point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spread and the obstacle positions as influencing factors, and taking the exit point as the end point, an escape route is generated and marked in the three-dimensional model based on a preset path planning algorithm; and video information of the animated character moving along the escape route is generated and sent to the user terminal so as to instruct the user terminal to play the video information to the user. The embodiments of the present application can therefore automatically generate flames, demonstrate the escape process of the animated character, and realistically display the fire escape process for the user to watch.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the fire escape simulation method described in the above embodiments, fig. 2 shows a structural block diagram of the fire escape simulation device provided in the embodiments of the present application; for convenience of explanation, only the parts related to the embodiments of the present application are shown.
Referring to fig. 2, the apparatus includes:
an acquisition module 21, configured to acquire a three-dimensional model of a scene to be simulated;
a marking module 22 for marking basic building information of the three-dimensional model; wherein the basic building information includes exit points and obstacle locations;
a first generation module 23 for generating flame particles on a target object in the three-dimensional model; wherein, the flame particles are correspondingly provided with a flame spreading direction and a smoke spreading direction;
a second generation module 24 for randomly generating an animated character in said three-dimensional model;
the path planning module 25 is configured to take the virtual position point where the animated character is located as a fire point, take the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spread and the obstacle positions as influencing factors, take the exit point as the end point, and generate and mark an escape route in the three-dimensional model based on a preset path planning algorithm;
and a third generating module 26, configured to generate video information of the animated character moving along the escape route, and send the video information to a user terminal, so as to instruct the user terminal to play the video information to the user.
In one possible implementation manner, the acquiring module includes:
the acquisition sub-module is used for acquiring an image to be processed; the image to be processed is obtained by shooting the scene to be simulated with a depth camera;
the generation sub-module is used for generating a point cloud according to the image to be processed;
and the processing sub-module is used for processing the point cloud to obtain a three-dimensional model.
In one possible implementation, the processing sub-module includes:
a smoothing unit, configured to smooth the point cloud;
and the geometric recovery unit is used for carrying out geometric structure recovery on the point cloud after the smoothing treatment to obtain a three-dimensional model.
In one possible implementation, the apparatus further includes:
and the determining module is used for determining the target object in the three-dimensional model.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
Fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 3, the server 3 of this embodiment includes: at least one processor 30, a memory 31 and a computer program 32 stored in the memory 31 and executable on the at least one processor 30, the processor 30 implementing the steps of any of the various method embodiments described above when executing the computer program 32.
The server 3 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The server may include, but is not limited to, a processor 30 and a memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the server 3 and does not constitute a limitation on the server 3; it may include more or fewer components than shown, combine certain components, or use different components, and may, for example, also include input-output devices, network access devices and the like.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), the processor 30 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 31 may in some embodiments be an internal storage unit of the server 3, such as a hard disk or a memory of the server 3. The memory 31 may in other embodiments also be an external storage device of the server 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the server 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the server 3. The memory 31 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs etc., such as program codes of the computer program etc. The memory 31 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the methods of the above embodiments through a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the server, a recording medium, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A fire escape simulation method, comprising:
acquiring a three-dimensional model of a scene to be simulated;
marking basic building information of the three-dimensional model; wherein the basic building information includes exit points and obstacle locations;
generating flame particles on a target object in the three-dimensional model; wherein the flame particles are provided with a corresponding flame spreading direction and smoke spreading direction;
randomly generating an animated character in said three-dimensional model;
taking the virtual position point of the animated character as a starting point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spreading direction and the obstacle position as influencing factors, taking the exit point as an end point, and generating and marking an escape route in the three-dimensional model based on a preset path planning algorithm;
and generating video information of the animated character moving along the escape route, and sending the video information to a user terminal so as to instruct the user terminal to play the video information to a user.
2. The fire escape simulation method according to claim 1, wherein acquiring a three-dimensional model of a scene to be simulated comprises:
acquiring an image to be processed; wherein the image to be processed is obtained by photographing the scene to be simulated with a depth camera;
generating a point cloud according to the image to be processed;
and processing the point cloud to obtain a three-dimensional model.
3. The fire escape simulation method according to claim 2, wherein the processing of the point cloud to obtain a three-dimensional model includes:
smoothing the point cloud;
and performing geometric structure recovery on the smoothed point cloud to obtain the three-dimensional model.
4. The fire escape simulation method according to claim 1, further comprising, before generating flame particles on the target object in the three-dimensional model:
and determining the target object in the three-dimensional model.
5. A fire escape simulation device, comprising:
the acquisition module is used for acquiring a three-dimensional model of a scene to be simulated;
the marking module is used for marking basic building information of the three-dimensional model; wherein the basic building information includes exit points and obstacle locations;
a first generation module for generating flame particles on a target object in the three-dimensional model; wherein the flame particles are provided with a corresponding flame spreading direction and smoke spreading direction;
a second generation module for randomly generating an animated character in the three-dimensional model;
the path planning module is used for taking the virtual position point of the animated character as a starting point, taking the preset distance difference between the fire point and the virtual position, the fire spreading direction, the smoke spreading direction and the obstacle position as influencing factors, taking the exit point as an end point, and generating and marking an escape route in the three-dimensional model based on a preset path planning algorithm;
and the third generation module is used for generating video information of the animated character moving along the escape route and sending the video information to a user terminal so as to instruct the user terminal to play the video information to a user.
6. The fire escape simulation device according to claim 5, wherein the acquisition module includes:
the acquisition sub-module is used for acquiring an image to be processed; wherein the image to be processed is obtained by photographing the scene to be simulated with a depth camera;
the generation sub-module is used for generating a point cloud according to the image to be processed;
and the processing sub-module is used for processing the point cloud to obtain a three-dimensional model.
7. The fire escape simulation device of claim 6, wherein the processing sub-module comprises:
a smoothing unit, configured to smooth the point cloud;
and a geometric recovery unit, configured to perform geometric structure recovery on the smoothed point cloud to obtain the three-dimensional model.
8. The fire escape simulation device according to claim 5, wherein the device further comprises:
and the determining module is used for determining the target object in the three-dimensional model.
9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 4 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 4.
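The claims leave the "preset path planning algorithm" of claim 1 unspecified. As an illustration only, the sketch below assumes a weighted A* search over a 2-D occupancy grid, where the cost of entering a cell grows with proximity to the fire point and with alignment to the flame and smoke spreading directions; all function names, weights, and the grid representation are hypothetical, not part of the claimed method.

```python
import heapq

def plan_escape_route(grid, start, exit_point, fire_point,
                      flame_dir, smoke_dir, fire_weight=5.0, spread_weight=2.0):
    """A*-style search over a 2-D occupancy grid.

    grid[y][x] == 1 marks an obstacle. The cost of entering a cell rises
    the closer it lies to the fire point and the more it lies along the
    flame/smoke spreading directions (weights are illustrative).
    """
    rows, cols = len(grid), len(grid[0])

    def cell_cost(cell):
        dy, dx = cell[0] - fire_point[0], cell[1] - fire_point[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-6
        penalty = fire_weight / dist                 # nearer the fire -> costlier
        # extra penalty if the cell lies downwind of flame or smoke spread
        for sy, sx in (flame_dir, smoke_dir):
            if dy * sy + dx * sx > 0:
                penalty += spread_weight / dist
        return 1.0 + penalty

    def h(cell):  # heuristic: straight-line distance to the exit point
        return ((cell[0] - exit_point[0]) ** 2 + (cell[1] - exit_point[1]) ** 2) ** 0.5

    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == exit_point:
            return path
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = cur[0] + dy, cur[1] + dx
            nxt = (ny, nx)
            if not (0 <= ny < rows and 0 <= nx < cols) or grid[ny][nx]:
                continue  # off-grid or blocked by an obstacle
            ng = g + cell_cost(nxt)
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # no escape route exists
```

The returned cell sequence could then be marked in the three-dimensional model and drive the animated character's movement when generating the video information.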
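Claims 2–3 call for smoothing the point cloud before geometric structure recovery but do not name a smoothing method. As one possible reading, the sketch below applies a naive Laplacian-style smoothing pass (brute-force nearest neighbours, illustrative only); geometric structure recovery would then typically follow with a surface-reconstruction technique such as Poisson reconstruction, which is not shown here.

```python
import numpy as np

def smooth_point_cloud(points, k=8, iterations=1):
    """Laplacian-style smoothing: move each point halfway toward the
    centroid of its k nearest neighbours (brute force, O(n^2) per pass)."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        # pairwise squared distances between all points
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)          # a point is not its own neighbour
        nbr = np.argsort(d2, axis=1)[:, :k]   # indices of k nearest neighbours
        pts = 0.5 * pts + 0.5 * pts[nbr].mean(axis=1)
    return pts
```

The brute-force distance matrix is only workable for small clouds; a production pipeline would use a spatial index (e.g. a k-d tree) for neighbour queries.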
CN202211068814.XA 2022-09-02 2022-09-02 Fire escape simulation method, device, server and computer readable storage medium Pending CN117689802A (en)

Publications (1)

Publication Number Publication Date
CN117689802A true CN117689802A (en) 2024-03-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination