CN115359568B - Simulation method for pedestrian agent movement and emergency evacuation and computer equipment - Google Patents
- Publication number
- CN115359568B CN115359568B CN202211020475.8A CN202211020475A CN115359568B CN 115359568 B CN115359568 B CN 115359568B CN 202211020475 A CN202211020475 A CN 202211020475A CN 115359568 B CN115359568 B CN 115359568B
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- motion
- agent
- parameters
- individual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A10/00—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
- Y02A10/40—Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Computer Graphics (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Geometry (AREA)
- Psychiatry (AREA)
- Remote Sensing (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Alarm Systems (AREA)
Abstract
The application provides a simulation method, a computer-readable storage medium and computer equipment for pedestrian agent movement and emergency evacuation, comprising the following steps: identifying the pedestrian motion state in the building scene according to monitoring videos acquired from a plurality of monitoring video points in the current building scene, and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene; constructing a space model and pedestrian agents according to the current building scene, and synchronizing the position coordinates of the pedestrian agents in the space model; and updating the real-time position coordinates of the pedestrian agents to obtain predicted pedestrian agent motion trajectories or evacuation trajectories. The method enables early warning of excessive crowd aggregation and prediction of emergency evacuation.
Description
Technical Field
The application belongs to the field of pedestrian evacuation simulation, and particularly relates to a simulation method, a computer-readable storage medium and computer equipment for pedestrian agent movement and emergency evacuation.
Background
Urban crowd behavior monitoring and analysis, together with movement and evacuation simulation, can predict crowd flow lines in smart buildings and crowd migration across urban areas. They help avoid abnormal large-scale gatherings, pedestrian-flow confusion at traffic hubs, and crowd congestion under extreme conditions; they support urban pedestrian traffic-flow design and personnel safety management, and benefit the daily management of intelligent traffic and intelligent buildings as well as the safety of major events. Pedestrian behavior plays a decisive role in crowd movement and evacuation simulation results. However, existing domestic pedestrian-flow and emergency-evacuation simulation technology cannot couple a model with diverse pedestrian behaviors, cannot simulate the influence of individual behaviors within a crowd on crowd movement, and cannot accurately simulate complex crowd-flow scenes.
Disclosure of Invention
The invention aims to provide a simulation method, a computer-readable storage medium and computer equipment for pedestrian agent movement and emergency evacuation, so as to solve the problems that existing emergency evacuation simulation cannot couple a model with diverse pedestrian behaviors, cannot simulate the influence of individual behaviors within a crowd on crowd movement, and cannot accurately simulate complex crowd flow.
In a first aspect, the present application provides a simulation method for pedestrian agent movement and emergency evacuation, including:
presetting a building scene type, and setting initialization data of pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors according to the preset building scene;
identifying the pedestrian motion state in the building scene according to monitoring videos acquired from a plurality of monitoring video points in the current building scene, and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene;
constructing a space model and pedestrian agents according to the current building scene, and synchronizing the position coordinates of the pedestrian agents in the space model; the space model is a three-dimensional space model for crowd simulation created based on the building BIM model;
inputting the updated initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene into a pre-constructed agent model, outputting the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the pedestrian agents updated in real time, and updating the real-time position coordinates of the pedestrian agents to obtain the predicted pedestrian agent motion trajectory or evacuation trajectory.
Further, the identifying of the pedestrian motion state in the building scene according to the acquired monitoring videos of the plurality of monitoring video points in the current building scene, and the updating of the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene, specifically includes:
identifying individual pedestrians in the monitoring video according to the monitoring videos of the plurality of monitoring video points in the current building scene, and marking the individual pedestrians;
associating and matching the pedestrian individuals in the current building scene by means of the marks across different times, different monitoring points and different videos, obtaining the position coordinates of each pedestrian individual at different time points, and obtaining the known motion trajectory and dynamic parameters of each pedestrian individual;
calculating the pedestrian motion parameters of each pedestrian individual according to the known motion trail of each pedestrian individual;
according to the obtained known motion trajectory of each pedestrian individual, estimating each individual's motion trajectory outside the coverage of the monitoring videos, and obtaining pedestrian distribution and motion trajectory data at different times in the building scene;
analyzing individual pedestrians in the monitoring video to obtain static parameters of the pedestrians;
calculating the pedestrian movement behaviors of the different pedestrian individuals with a recurrent neural network algorithm, according to the pedestrian static parameters and the movement track data;
and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene, wherein the pedestrian individual parameters comprise static parameters and dynamic parameters.
Further, the static parameters include: pedestrian body size, gender, age, historical average moving speed, and pedestrian identity; the pedestrian identity is determined by the pedestrian's clothing, average moving speed and historical movement track. The dynamic parameters include: the pedestrian's panic index and politeness index under scene changes, and whether the pedestrian is injured.
Further, the spatial model is a three-dimensional spatial model for crowd simulation created based on the building BIM model.
Further, the pedestrian motion parameters include pedestrian coordinates, pedestrian motion speed, and motion direction.
Further, the agent model is constructed as follows: using the obtained monitoring videos of a plurality of monitoring video points in the current building scene and the space model of the current building scene as samples, extracting the dynamic parameters of each pedestrian agent with a computer vision algorithm, and calculating in real time the positions of each pedestrian agent relative to the other pedestrian agents; predicting the target position of each pedestrian agent according to its known motion trajectory and pedestrian identity, identifying obstacles within the route range from the pedestrian agent to the target position from the video images, predicting in real time the motion path and motion speed of the pedestrian agent within a preset time according to the preset influence weights of the model in the building scene, and updating the identification result of the pedestrian movement behavior.
Further, the step of constructing a space model and a pedestrian agent according to the current building scene, and synchronizing the position coordinates of the pedestrian agent in the space model further comprises:
environmental events and time events in the current building scene are preset.
Further, the step of inputting the updated initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene into a pre-constructed agent model, outputting the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the pedestrian agents updated in real time, and updating the real-time position coordinates of the pedestrian agents to obtain a predicted pedestrian agent motion trajectory or evacuation trajectory specifically includes:
presetting the motion trajectory of each pedestrian agent according to the final target position coordinates and the updated pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors;
acquiring the preference values with which the other pedestrian agents, the environmental events and the time events influence the motion direction of the pedestrian agent, and calculating the preference value of the motion direction of the pedestrian agent;
and calculating and updating in real time the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the pedestrian agent according to the preference value of its motion direction, obtaining the real-time position coordinates of the pedestrian agent, and correcting the motion trajectory or evacuation trajectory of the pedestrian agent.
In a second aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the pedestrian agent movement and emergency evacuation simulation method.
In a third aspect, the present application provides a computer device comprising: one or more processors, a memory, and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, the processor implementing the steps of the simulation method of pedestrian agent movement and emergency evacuation when the computer programs are executed.
In the present application, the crowd distribution trend of urban buildings is predicted based on video monitoring images. The simulation method can predict the movement direction and density trend of crowds based on the crowd distribution state and observed behaviors, enabling early warning of excessive crowd aggregation and prediction of emergency evacuation.
Drawings
Fig. 1 is a flowchart of a simulation method for pedestrian agent movement and emergency evacuation according to an embodiment of the present application.
Fig. 2 is a relationship diagram of conceptual elements of a pedestrian agent according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a motion path and a motion direction of a pedestrian agent according to an embodiment of the present application.
Fig. 4 is a schematic diagram of interactions between pedestrian agents according to an embodiment of the present application.
Fig. 5 is a specific structural block diagram of a computer device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a simulation system for pedestrian agent movement and emergency evacuation according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Referring to fig. 1, a simulation method for pedestrian agent movement and emergency evacuation according to an embodiment of the present application includes the following steps. It should be noted that, provided substantially the same results are obtained, the simulation method of pedestrian agent movement and emergency evacuation in the present application is not limited to the flow sequence shown in fig. 1.
S101, presetting a building scene type, and setting initialization data of pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors according to the preset building scene;
S102, identifying the pedestrian motion state in the building scene according to monitoring videos acquired from a plurality of monitoring video points in the current building scene, and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene;
S103, constructing a space model and pedestrian agents according to the current building scene, and synchronizing the position coordinates of the pedestrian agents in the space model; the space model is a three-dimensional space model for crowd simulation created based on the building BIM model;
S104, inputting the updated initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene into a pre-constructed agent model, outputting the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the pedestrian agents updated in real time, and updating the real-time position coordinates of the pedestrian agents to obtain a predicted pedestrian agent motion trajectory or evacuation trajectory.
In an embodiment of the present application, the identifying a pedestrian motion state in a building scene according to the obtained monitoring videos of a plurality of monitoring video points in the current building scene, updating initialization data of individual pedestrian parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene specifically includes:
identifying individual pedestrians in the monitoring video according to the monitoring videos of the plurality of monitoring video points in the current building scene, and marking the individual pedestrians;
associating and matching the pedestrian individuals in the current building scene by means of the marks across different times, different monitoring points and different videos, obtaining the position coordinates of each pedestrian individual at different time points, and obtaining the known motion trajectory and dynamic parameters of each pedestrian individual;
calculating the pedestrian motion parameters of each pedestrian individual according to the known motion trail of each pedestrian individual;
according to the obtained known motion trajectory of each pedestrian individual, estimating each individual's motion trajectory outside the coverage of the monitoring videos, and obtaining pedestrian distribution and motion trajectory data at different times in the building scene;
analyzing individual pedestrians in the monitoring video to obtain static parameters of the pedestrians;
calculating the pedestrian movement behaviors of the different pedestrian individuals with a recurrent neural network algorithm, according to the pedestrian static parameters and the movement track data;
and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene, wherein the pedestrian individual parameters comprise static parameters and dynamic parameters.
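The off-camera trajectory-estimation step above is not spelled out in the text. A minimal constant-velocity extrapolation in Python, assuming each track is sampled as (time, position) pairs, might look like this; the function name and sampling format are illustrative assumptions, not the patent's method:

```python
import numpy as np

def extrapolate_track(times, points, t_query):
    """Estimate a pedestrian's position at t_query by constant-velocity
    extrapolation from the last two observed (time, position) samples.
    Illustrative stand-in for off-camera trajectory estimation."""
    times = np.asarray(times, dtype=float)
    points = np.asarray(points, dtype=float)
    # velocity estimated from the last two observations
    dt = times[-1] - times[-2]
    v = (points[-1] - points[-2]) / dt
    # advance from the last observed position
    return points[-1] + v * (t_query - times[-1])
```

In practice a real system would blend such predictions with map constraints (walls, doors) from the space model rather than extrapolate freely.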
In an embodiment of the present application, the static parameters include: pedestrian body size, gender, age, historical average moving speed, and pedestrian identity; the pedestrian identity is determined by the pedestrian's clothing, average moving speed and historical movement track. The dynamic parameters include: the pedestrian's panic index and politeness index under scene changes, and whether the pedestrian is injured.
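The static and dynamic parameters just listed can be grouped into a simple data structure. This is only an illustrative sketch; the field names, types and defaults are assumptions, not the patent's data model:

```python
from dataclasses import dataclass, field

@dataclass
class StaticParams:
    body_size: float   # e.g. shoulder width in metres
    gender: str
    age: int
    avg_speed: float   # historical average moving speed, m/s
    identity: str      # inferred from clothing, speed, track history

@dataclass
class DynamicParams:
    panic_index: float = 0.0    # rises under scene changes / emergencies
    polite_index: float = 1.0   # willingness to yield to other pedestrians
    injured: bool = False

@dataclass
class PedestrianAgent:
    static: StaticParams
    dynamic: DynamicParams = field(default_factory=DynamicParams)
```

Splitting the per-individual state this way mirrors the text's distinction: static parameters are estimated once from video, while dynamic parameters are re-evaluated every simulation step.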
In an embodiment of the present application, analyzing the pedestrian individuals in the surveillance video to obtain the pedestrian static parameters specifically includes: identifying the clothing, average moving speed and historical movement track of each pedestrian individual through a neural-network image classification algorithm, and judging the pedestrian's body size, gender and age.
In an embodiment of the present application, the spatial model is a three-dimensional spatial model for crowd simulation created based on a building BIM model.
In an embodiment of the present application, the pedestrian motion parameters include pedestrian coordinates, a pedestrian motion speed, and a motion direction.
In one embodiment of the present application, the agent model is constructed as follows: using the obtained monitoring videos of a plurality of monitoring video points in the current building scene and the space model of the current building scene as samples, extracting the dynamic parameters of each pedestrian agent with a computer vision algorithm, and calculating in real time the positions of each pedestrian agent relative to the other pedestrian agents; predicting the target position of each pedestrian agent according to its known motion trajectory and pedestrian identity, identifying obstacles within the route range from the pedestrian agent to the target position from the video images, predicting in real time the motion path and motion speed of the pedestrian agent within a preset time according to the preset influence weights of the model in the building scene, and updating the identification result of the pedestrian movement behavior.
In an embodiment of the present application, the pedestrian movement behavior refers to a pedestrian's road-searching, following, waiting, wandering, and turning-back behaviors under the influence of other pedestrians, the environment, events, and time.
Pedestrian movement behaviors include road searching, following, waiting, wandering, turning back, and the like, and can be influenced by other pedestrians, the environment, emergencies, events, and similar factors. Pedestrians react to the building environment and events and also interact with other pedestrian agents, so that the resulting movement behaviors influence the movement decisions of the pedestrians.
When a pedestrian's motion route is independent, the pedestrian is judged to exhibit active road-searching behavior; when a pedestrian's moving route is similar to that of surrounding people, a small group is determined to exist in the crowd scene, and the pedestrian is judged to exhibit following behavior; when a pedestrian's position does not change over a period of time, the pedestrian is judged to exhibit waiting behavior; when a pedestrian's track moves within a small range, the pedestrian is judged to exhibit wandering behavior; and when a pedestrian's motion path is simple over a period of time and goes back and forth to a fixed point, the pedestrian is judged to exhibit turning-back behavior. The analysis of pedestrian movement behavior applies a recurrent neural network method: the movement behavior is finally classified by identifying and classifying key features of the pedestrian track curve.
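The rule-of-thumb criteria above (which the patent ultimately refines with a recurrent neural network over track features) can be sketched as a toy rule-based classifier; the thresholds, the sampling assumptions, and the subset of behaviors covered are all illustrative:

```python
import numpy as np

def classify_behavior(track, wait_eps=0.2, wander_radius=2.0):
    """Toy rule-based version of the behaviour criteria in the text.
    track: (N, 2) array of positions, uniformly sampled in time.
    wait_eps and wander_radius (metres) are illustrative thresholds."""
    track = np.asarray(track, dtype=float)
    disp = np.linalg.norm(track[-1] - track[0])               # net displacement
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    path_len = steps.sum()                                    # total distance walked
    if path_len < wait_eps:
        return "waiting"          # position barely changes over the window
    if disp < wander_radius and path_len > 2 * wander_radius:
        return "wandering"        # walks a lot but stays in a small range
    return "road-searching"       # independent, purposeful route
```

A real classifier would also need the tracks of surrounding pedestrians to detect following, and a longer window to detect turning back; here the fall-through case simply defaults to road searching.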
In an embodiment of the present application, a calculation formula of a preference value of the pedestrian agent for a movement direction is:
where f_i(θ) represents the total preference value of pedestrian agent i for the direction θ, with θ ranging over 0–360°; the larger f_i(θ), the higher the probability that the pedestrian agent moves in direction θ; the smaller f_i(θ), the more the pedestrian agent is repelled from moving in direction θ. f_i(θ) is a piecewise linear function, and its maximum value and the corresponding direction θ_fmax give the most probable forward direction of the current pedestrian agent i.
In an embodiment of the present application, a calculation formula of a preference value of a motion direction of the pedestrian agent affected by the pedestrian agent, the environmental event and the time event is:
where f_type,i(θ) represents the influence of an object of the given type on the direction-θ preference value of pedestrian agent i; agent refers to pedestrian agents, obstacle refers to environmental events and time events, and target refers to the final target position coordinates; the different f_type,i(θ) terms can be superimposed. f_type,i(θ) is a piecewise linear function over the direction interval (θ_i,j − Δθ_i,j, θ_i,j + Δθ_i,j), where θ_i,j is the direction of object j in the field of view of pedestrian agent i; the influence at the interval center takes the maximum value F_type,i(d), varies linearly within the interval, and takes the minimum value 0 outside it. F_type,i(d) and Δθ_j are adjusted according to the different environmental and time events and the different building scenes. Since the direction θ is periodic with period 360°, direction differences range from −180° to 180°.
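The piecewise-linear direction-preference model described above can be sketched as follows: each object contributes a triangular influence centered on its direction θ_i,j, the contributions are superimposed, and the direction with the largest total f_i(θ) is taken as the forward direction. The 1° sampling step, the triangular shape of the falloff, and the function names are assumptions for illustration:

```python
import numpy as np

def wrap_deg(a):
    """Wrap an angle difference into (-180, 180], since direction is 360-degree periodic."""
    return (a + 180.0) % 360.0 - 180.0

def influence(theta, theta_ij, d_theta, F_max):
    """Piecewise-linear contribution f_type,i(theta) of one object: maximum
    F_max at the object's direction theta_ij, linear falloff to 0 at
    +/- d_theta, zero outside the interval. F_max < 0 models repulsion
    (obstacles, other agents), F_max > 0 attraction (the target)."""
    d = abs(wrap_deg(theta - theta_ij))
    return F_max * max(0.0, 1.0 - d / d_theta) if d_theta > 0 else 0.0

def preferred_direction(objects, step=1.0):
    """Superimpose all contributions and return the direction theta_fmax
    with the largest total preference value f_i(theta).
    objects: list of (theta_ij, d_theta, F_max) tuples, angles in degrees."""
    thetas = np.arange(0.0, 360.0, step)
    total = np.zeros_like(thetas)
    for theta_ij, d_theta, F_max in objects:
        total += np.array([influence(t, theta_ij, d_theta, F_max) for t in thetas])
    return thetas[int(np.argmax(total))]
```

For example, a target at 0° with a strong obstacle at 90° leaves 0° as the preferred direction, because the obstacle's negative triangle does not overlap the target's positive one.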
In an embodiment of the present application, the building of the spatial model and the pedestrian agent according to the current building scene, synchronizing the position coordinates of the pedestrian agent in the spatial model, further includes:
environmental events and time events in the current building scene are preset.
In an embodiment of the present application, the step of inputting the updated initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene into the pre-constructed agent model, outputting the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the pedestrian agents updated in real time, and updating the real-time position coordinates of the pedestrian agents to obtain the predicted pedestrian agent motion trajectory or evacuation trajectory specifically includes:
presetting the motion trajectory of each pedestrian agent according to the final target position coordinates and the updated pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors;
acquiring the preference values with which the other pedestrian agents, the environmental events and the time events influence the motion direction of the pedestrian agent, and calculating the preference value of the motion direction of the pedestrian agent;
and calculating and updating in real time the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the pedestrian agent according to the preference value of its motion direction, obtaining the real-time position coordinates of the pedestrian agent, and correcting the motion trajectory or evacuation trajectory of the pedestrian agent.
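The real-time update loop above can be illustrated with a deliberately minimal version that replaces the full direction-preference computation with the pure target direction (no obstacles, panic, politeness, or behaviour switching); everything here is a simplifying assumption for illustration:

```python
import numpy as np

def step_agent(pos, target, speed, dt=0.1):
    """One position update: move at the agent's current speed toward its
    final target for one time step dt. In the full model this direction
    would instead maximise the preference value f_i(theta)."""
    pos = np.asarray(pos, dtype=float)
    target = np.asarray(target, dtype=float)
    d = target - pos
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return pos                    # already at the target
    step = min(speed * dt, dist)      # don't overshoot the target
    return pos + d / dist * step

def simulate(pos, target, speed=1.4, dt=0.1, max_steps=1000):
    """Iterate updates until the agent reaches its target, recording the
    predicted motion (or evacuation) trajectory."""
    traj = [np.asarray(pos, dtype=float)]
    for _ in range(max_steps):
        nxt = step_agent(traj[-1], target, speed, dt)
        traj.append(nxt)
        if np.linalg.norm(nxt - np.asarray(target, dtype=float)) < 1e-9:
            break
    return np.array(traj)
```

The recorded trajectory plays the role of the predicted pedestrian agent motion or evacuation track, which the method then corrects against the real-time positions recovered from the monitoring videos.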
An embodiment of the present application provides a computer readable storage medium storing a computer program, which when executed by a processor, implements the steps of a simulation method for pedestrian agent movement and emergency evacuation as provided by an embodiment of the present application.
Fig. 5 shows a specific block diagram of a computer device according to an embodiment of the present application, where a computer device 100 includes: one or more processors 101, a memory 102, and one or more computer programs, wherein the processors 101 and the memory 102 are connected by a bus, the one or more computer programs being stored in the memory 102 and configured to be executed by the one or more processors 101, the processor 101 implementing the steps of a simulation method of pedestrian agent movement and emergency evacuation as provided by an embodiment of the present application when the computer programs are executed.
The computer device includes a server, a terminal, and the like. The computer device may be a desktop computer, a mobile terminal or a vehicle-mounted device, the mobile terminal including at least one of a cell phone, a tablet computer, a personal digital assistant or a wearable device, etc.
According to the embodiments of the present application, the crowd distribution trend of urban buildings is predicted based on video monitoring images. The simulation method can predict the movement direction and density trend of crowds based on the crowd distribution state and observed behaviors, enabling early warning of excessive crowd aggregation and prediction of emergency evacuation.
Referring to fig. 6, another embodiment of the present application provides a simulation system for pedestrian agent movement and emergency evacuation, including: the system comprises a video monitoring gateway 20, a video monitoring camera 10, a video data storage device 30 and a computer graphic workstation 40 which are connected with the video monitoring gateway 20, and further comprises a crowd model data storage device 50 and an evacuation indication system 60 which are connected with the computer graphic workstation 40;
video monitoring camera 10: the method is used for acquiring crowd image information in the building space and comprehensively covering the area in the monitoring space range of the building scene.
Video monitoring gateway 20: the method is used for connecting a plurality of monitoring cameras and acquiring and analyzing the picture information of the monitoring cameras.
Video data storage device 30: the system is used for storing historical pictures shot by the monitoring camera and recording real-time pedestrian moving picture information.
Computer graphics workstation 40: the method is used for image recognition processing of monitoring equipment, classification network training updating and model calculation, crowd track recognition and prediction and other calculation works.
Crowd model data storage device 50: the method is used for storing model data of crowd classification recognition, track recognition and prediction.
Evacuation indication system 60: based on the result of crowd movement track prediction and evacuation simulation, under the condition of forming evacuation congestion, reasonable and targeted crowd evacuation route guidance is performed by controlling the guiding direction of evacuation lamps.
The method and system can monitor the crowd distribution trend in an urban building through the video monitoring gateway and video monitoring cameras, and accurately predict crowd evacuation in real time in an emergency, effectively remedying the current separation of building evacuation design and evaluation from the real-time crowd distribution.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk, optical disk, and the like.
The foregoing description of the preferred embodiments is not intended to limit the present application; any modifications, equivalents, and improvements made within the spirit and principles of the present application fall within its scope of protection.
Claims (9)
1. A simulation method for pedestrian agent movement and emergency evacuation, characterized by comprising the following steps:
presetting a building scene type, and setting initialization data of pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors according to the preset building scene;
identifying the pedestrian motion state in the building scene according to acquired surveillance videos of a plurality of monitoring video points in the current building scene, and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene;
constructing a space model and pedestrian agents according to the current building scene, and synchronizing the position coordinates of the pedestrian agents in the space model;
inputting the updated initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene into a pre-constructed agent model, outputting the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of each pedestrian agent updated in real time, and updating the real-time position coordinates of the pedestrian agent;
presetting the motion track of the pedestrian agent according to the position coordinates of the pedestrian agent and the updated pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors;
acquiring the preference values by which pedestrian agents, environmental events and time events influence the motion direction of the pedestrian agent, and calculating the preference value of the pedestrian agent for each motion direction;
according to the preference value of the pedestrian agent for the movement direction, calculating and updating the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the pedestrian agent in real time to obtain its real-time position coordinates, and correcting the motion track or evacuation track of the pedestrian agent;
the preference value of the pedestrian agent for a movement direction is calculated by the formula:

f_i(θ) = Σ_type f_type,i(θ)

wherein f_i(θ) represents the total preference value of pedestrian agent i for the direction θ, the direction θ ranging from 0° to 360°; the larger f_i(θ) is, the higher the probability that the pedestrian agent moves in the direction θ; the smaller f_i(θ) is, the more the pedestrian agent is repelled from moving in the direction θ; f_i(θ) is a piecewise linear function, and the maximum value of f_i(θ) and its corresponding direction θ_fmax give the forward direction with the highest probability for the current pedestrian agent i;
the preference-value contribution of pedestrian agents, environmental events and time events to the motion direction of the pedestrian agent is calculated by the formula:

f_type,i(θ) = F_type,i(d) · (1 − |θ − θ_i,j| / Δθ_i,j) for |θ − θ_i,j| < Δθ_i,j, and 0 otherwise  (type = agent, obstacle, target);

wherein f_type,i(θ) represents the influence of an object of the given type on the preference value of pedestrian agent i for the direction θ; agent refers to another pedestrian agent, obstacle refers to an environmental event or time event, and target refers to the final target position coordinates; different f_type,i(θ) can be superimposed; f_type,i(θ) is a piecewise linear function over the direction interval (θ_i,j − Δθ_i,j, θ_i,j + Δθ_i,j), where θ_i,j is the direction of object j in the field of view of pedestrian agent i; the influence takes its maximum value F_type,i(d) at the centre of the interval, varies linearly within the interval, and is 0 outside it; F_type,i(d) and Δθ_i,j are set according to the different environmental events, time events and building scenes; and since the direction θ is periodic with period 360°, the direction difference is taken within the range −180° to 180°.
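As a minimal sketch (not taken from the patent text), the piecewise-linear superposition described in the claim can be written in Python; the helper names, the object tuples (θ_i,j, Δθ_i,j, peak) and the 1° sampling step are illustrative assumptions:

```python
import math

def wrap_deg(d):
    # Wrap an angular difference into [-180, 180), since direction is 360°-periodic.
    return (d + 180.0) % 360.0 - 180.0

def influence(theta, theta_ij, delta_ij, peak):
    """Piecewise-linear influence of one object j on direction theta:
    maximal (= peak) at the interval centre theta_ij, falling linearly
    to 0 at the edges theta_ij +/- delta_ij, and 0 outside the interval."""
    d = abs(wrap_deg(theta - theta_ij))
    if d >= delta_ij:
        return 0.0
    return peak * (1.0 - d / delta_ij)

def total_preference(theta, objects):
    # objects: list of (theta_ij, delta_ij, peak); a negative peak models
    # repulsion (other agents, obstacles), a positive one attraction (target).
    return sum(influence(theta, t, dt, p) for (t, dt, p) in objects)

def best_direction(objects, step=1.0):
    # theta_fmax: the direction with the highest total preference value.
    thetas = [i * step for i in range(int(360 / step))]
    return max(thetas, key=lambda th: total_preference(th, objects))
```

With a single attractive target at 90° and a 30° half-width, the maximum of the summed profile falls at 90°, matching the claim's θ_fmax interpretation.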
2. The method of claim 1, wherein identifying the pedestrian motion state in the building scene according to the acquired surveillance videos of the plurality of monitoring video points in the current building scene, and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene, specifically comprises:
identifying pedestrian individuals in the surveillance videos of the plurality of monitoring video points in the current building scene, and marking each pedestrian individual;
associating and matching, by means of the marks, the pedestrian individuals in the current building scene across different times, different monitoring points and different videos, so as to obtain the position information of each pedestrian individual at different time points and thereby the known motion trail and dynamic parameters of each pedestrian individual;
calculating the pedestrian motion parameters of each pedestrian individual according to its known motion trail;
estimating, according to the obtained known motion trails, the motion trail of each pedestrian individual outside the coverage of the surveillance videos, and obtaining the pedestrian distribution and motion trails at different times in the building scene;
analyzing the pedestrian individuals in the surveillance videos to obtain static parameters;
calculating the pedestrian motion behaviors of the different pedestrian individuals by applying a recurrent neural network algorithm to the static parameters and the motion trail data;
and updating the initialization data of the pedestrian individual parameters, pedestrian motion parameters and pedestrian motion behaviors of the current building scene, wherein the pedestrian individual parameters comprise the static parameters and the dynamic parameters.
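A hedged illustration of the trajectory step in the claim above: given one pedestrian's positions matched across frames, the motion parameters (speed, direction) can be derived as follows. The sampling interval `dt` and the (x, y) coordinate list are assumptions for illustration, not values from the patent:

```python
import math

def motion_parameters(track, dt=1.0):
    """Derive pedestrian motion parameters (speed, direction) from a known
    motion trail: a time-ordered list of (x, y) positions sampled every dt
    seconds, e.g. as matched across surveillance cameras."""
    params = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0
        speed = math.hypot(dx, dy) / dt           # metres per second
        direction = math.degrees(math.atan2(dy, dx)) % 360.0  # 0°-360°
        params.append({"speed": speed, "direction": direction})
    return params
```

A track of three points yields two (speed, direction) samples, one per consecutive pair; these per-step values are what the claim's "dynamic parameters" would be computed from.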
3. The method of claim 2, wherein the static parameters comprise: pedestrian body size, gender, age, historical average moving speed, and pedestrian identity, the pedestrian identity being determined by the pedestrian's clothing, average moving speed and historical movement track; and the dynamic parameters comprise: the pedestrian's panic index and politeness index under scene changes, and whether the pedestrian is injured.
4. The method of claim 1, wherein the spatial model is a three-dimensional spatial model for crowd simulation created based on a building BIM model.
5. The method of claim 1, wherein the pedestrian motion parameters include pedestrian coordinates, pedestrian motion speed, and motion direction.
6. The method of claim 1, wherein the agent model is constructed by: taking the acquired surveillance videos of the plurality of monitoring video points in the current building scene and the space model of the current building scene as samples, extracting the dynamic parameters of each pedestrian agent by a computer vision algorithm, and calculating in real time the relative positions of each pedestrian agent to the other pedestrian agents; predicting the target position of the pedestrian agent according to its known motion trail and pedestrian identity; identifying obstacles within the route range from the pedestrian agent to the target position according to the surveillance videos; and presetting, for the current building scene, the influence weights of the dynamic parameters, the relative positions to other pedestrian agents, the target position, the obstacles and other factors on the pedestrian agent, so as to predict in real time the motion trail and motion speed of the pedestrian agent within a preset time and update the recognition result of the pedestrian motion behaviors.
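One way to read the weighted-influence prediction in this claim can be sketched as follows, under stated assumptions: the weight keys, the unit-vector attraction/repulsion model and the time step are illustrative choices, not the patent's actual agent model:

```python
import math

def unit(vx, vy):
    # Normalize a vector; the zero vector maps to (0, 0).
    n = math.hypot(vx, vy)
    return (vx / n, vy / n) if n > 1e-9 else (0.0, 0.0)

def predict_step(pos, target, obstacles, neighbors, weights, speed, dt=0.5):
    """One prediction step: the displacement direction is a weighted sum of
    attraction toward the target position and repulsion from obstacles and
    from other pedestrian agents, with per-scene preset weights."""
    ax, ay = unit(target[0] - pos[0], target[1] - pos[1])
    vx, vy = weights["target"] * ax, weights["target"] * ay
    for ox, oy in obstacles:
        rx, ry = unit(pos[0] - ox, pos[1] - oy)   # push away from obstacle
        vx += weights["obstacle"] * rx
        vy += weights["obstacle"] * ry
    for nx, ny in neighbors:
        rx, ry = unit(pos[0] - nx, pos[1] - ny)   # push away from other agents
        vx += weights["agent"] * rx
        vy += weights["agent"] * ry
    dx, dy = unit(vx, vy)
    return (pos[0] + speed * dt * dx, pos[1] + speed * dt * dy)
```

With no obstacles or neighbors the agent simply steps straight toward the target at its current speed; adding repulsive terms bends the step away from crowding, which is the qualitative behavior the claim's weighted factors describe.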
7. The method of claim 1, wherein constructing the space model and the pedestrian agent according to the current building scene and synchronizing the position coordinates of the pedestrian agent in the space model further comprises:
environmental events and time events in the current building scene are preset.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the simulation method of pedestrian agent movement and emergency evacuation according to any one of claims 1 to 7.
9. A computer device, comprising: one or more processors, a memory, and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and wherein the steps of the simulation method of pedestrian agent movement and emergency evacuation of any one of claims 1 to 7 are implemented when the processors execute the computer programs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211020475.8A CN115359568B (en) | 2022-08-24 | 2022-08-24 | Simulation method for pedestrian intelligent body movement and emergency evacuation and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115359568A (en) | 2022-11-18
CN115359568B (en) | 2023-06-02
Family
ID=84005297
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115796588B (en) * | 2022-12-05 | 2023-06-20 | 西安交通大学 | On-site emergency evacuation system and method based on radiation monitoring |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013082779A1 (en) * | 2011-12-08 | 2013-06-13 | Thomson Licensing | System and method for crowd simulation |
CN107256307A (en) * | 2017-06-09 | 2017-10-17 | 山东师范大学 | The crowd evacuation emulation method and system of knowledge based navigation |
CN113935109A (en) * | 2021-10-13 | 2022-01-14 | 大连海事大学 | Multi-agent parking system simulation system and construction method thereof |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2995866A1 (en) * | 2015-09-03 | 2017-03-09 | Miovision Technologies Incorporated | System and method for detecting and tracking objects |
CN107464021B (en) * | 2017-08-07 | 2019-07-23 | 山东师范大学 | A kind of crowd evacuation emulation method based on intensified learning, device |
CN110807345A (en) * | 2018-08-06 | 2020-02-18 | 开利公司 | Building evacuation method and building evacuation system |
CN109886196A (en) * | 2019-02-21 | 2019-06-14 | 中水北方勘测设计研究有限责任公司 | Personnel track traceability system and method based on BIM plus GIS video monitoring |
CN110737989B (en) * | 2019-10-18 | 2024-05-24 | 中国科学院深圳先进技术研究院 | Parallel intelligent emergency collaboration method, system and electronic equipment |
CN111401161A (en) * | 2020-03-04 | 2020-07-10 | 青岛海信网络科技股份有限公司 | Intelligent building management and control system for realizing behavior recognition based on intelligent video analysis algorithm |
CN111639809B (en) * | 2020-05-29 | 2023-07-07 | 华中科技大学 | Multi-agent evacuation simulation method and system based on leaders and panic emotion |
CN112862192A (en) * | 2021-02-08 | 2021-05-28 | 青岛理工大学 | Crowd evacuation auxiliary decision-making system based on ant colony algorithm and improved social model |
CN113538520B (en) * | 2021-08-02 | 2022-03-18 | 北京易航远智科技有限公司 | Pedestrian trajectory prediction method and device, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Jinhuan Wang et al., "Modeling and simulating for congestion pedestrian evacuation with panic", Physica A: Statistical Mechanics and its Applications, vol. 428, pp. 369-409 |
Shan Qingchao et al., "A review of applications of the social force model in pedestrian movement modeling", Urban Transport of China, no. 6, pp. 77-83 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||