CN117224951B - Pedestrian behavior prediction method and device based on perception and electronic equipment - Google Patents
- Publication number
- CN117224951B CN117224951B CN202311452876.5A CN202311452876A CN117224951B CN 117224951 B CN117224951 B CN 117224951B CN 202311452876 A CN202311452876 A CN 202311452876A CN 117224951 B CN117224951 B CN 117224951B
- Authority
- CN
- China
- Prior art keywords
- scene
- pedestrian
- grid map
- virtual
- virtual pedestrian
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a perception-based pedestrian behavior prediction method and device, and an electronic device, in the technical field of virtual games. The method comprises the following steps: periodically updating, in units of frames, a hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene; obtaining, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene, and planning the movement of each virtual pedestrian according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian; invoking a UE5 engine to render the scene and to render and draw each virtual pedestrian in the scene; and controlling each virtual pedestrian to move in the scene according to the planned path, and invoking a blueprint animation in the UE5 engine during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian.
Description
Technical Field
The disclosure relates to the technical field of virtual games, and in particular to a perception-based pedestrian behavior prediction method and device, and an electronic device.
Background
In the technical field of games, NPCs and virtual pedestrians in a game scene need to be modeled, and the data structure describing the whole crowd must be updated based on the latest position of each pedestrian so that the behavior of virtual pedestrians can be predicted. In the prior art, the assumption about the intelligence of individual virtual pedestrians is over-simplified, making virtual pedestrians overly mechanical and simple; individual virtual pedestrians lack self-cognition and decision-making, only obey the scheduling of a global algorithm, and cannot achieve more complex and realistic individual intelligence.
Disclosure of Invention
The invention aims to provide a perception-based pedestrian behavior prediction method and device, and an electronic device, so as to solve the technical problem that in related scenes the assumption about the intelligence of individual virtual pedestrians is over-simplified and their behavior is overly mechanical and simple, and that individual virtual pedestrians lack self-cognition and decision-making, only obey the scheduling of a global algorithm, and cannot achieve more complex and realistic individual intelligence.
To achieve the above object, a first aspect of the embodiments of the present disclosure provides a perception-based pedestrian behavior prediction method, the method comprising:
periodically updating, in units of frames, a hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene;
obtaining, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene, and planning the movement of each virtual pedestrian according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian;
invoking a UE5 engine, rendering the scene, and rendering and drawing each virtual pedestrian in the scene;
and controlling each virtual pedestrian to move in the scene according to the planned path, and invoking a blueprint animation in the UE5 engine during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian.
In one possible implementation, the periodically updating, in units of frames, the hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene comprises:
periodically constructing, in units of frames, an object two-dimensional grid map describing static objects in the scene based on the position of each virtual pedestrian in the scene;
constructing a quadtree grid map for path planning;
constructing, according to the object two-dimensional grid map, a pedestrian two-dimensional grid map for querying nearby pedestrians;
and adding attribute descriptions for the static objects in the scene, and updating, according to the attribute descriptions, the object two-dimensional grid map, the quadtree grid map and the pedestrian two-dimensional grid map, the hierarchical grid map describing the crowd and the scene, wherein an attribute description comprises at least one of position, size, shape, material, interaction attribute and color.
In one possible implementation, the periodically constructing, in units of frames, the object two-dimensional grid map describing the static objects in the scene based on the position of each virtual pedestrian in the scene comprises:
periodically mapping, in units of frames, the 3-dimensional city model in the scene onto a 2-dimensional horizontal plane;
overlaying a uniformly divided two-dimensional grid on the 2-dimensional horizontal plane to generate a two-dimensional grid map, wherein each grid cell in the two-dimensional grid corresponds to a sub-region;
storing, in the two-dimensional grid, identifiers of the static objects occupying each sub-region;
and determining, according to the position of each virtual pedestrian in the scene, a description of the static objects within each virtual pedestrian's orientation perception range, and storing the description in the corresponding sub-region, to obtain the object two-dimensional grid map.
In one possible implementation, the constructing the quadtree grid map for path planning comprises:
periodically dividing, in units of frames, grid space regions according to the 3-dimensional city model in the scene, wherein each node of the quadtree represents a space region, and when a static object exists in a space region, the corresponding node of the quadtree is further divided into a plurality of child nodes;
and determining and saving information on the static objects occupying each grid space region, as well as the reachability and trafficability of the grid space regions, to generate a quadtree grid map for path planning.
In one possible implementation, the constructing, according to the object two-dimensional grid map, the pedestrian two-dimensional grid map for querying nearby pedestrians comprises:
overlaying a uniformly distributed cell grid on the two-dimensional horizontal plane of the scene, wherein each grid cell stores pedestrian information of the virtual pedestrians within it;
and, as each virtual pedestrian moves, periodically recording, in units of frames and according to the static objects in the object two-dimensional grid map, the position of the virtual pedestrian into the corresponding grid cell, to construct and generate a pedestrian two-dimensional grid map for querying nearby pedestrians.
In one possible implementation, the obtaining, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene comprises:
querying and determining, according to the hierarchical grid map, whether an obstacle object exists between the current position and the query position of each virtual pedestrian;
determining, as the perception range, a fan-shaped region within a preset radius in the direction each virtual pedestrian is facing;
and determining, within the perception range and according to whether an obstacle object exists, the perception data of each virtual pedestrian perceiving the scene.
In one possible implementation, the querying and determining, according to the hierarchical grid map, whether an obstacle object exists between the current position and the query position of each virtual pedestrian comprises:
generating, while each virtual pedestrian moves, a detection ray from the current position of the virtual pedestrian toward a query position;
rasterizing the detection ray into the object two-dimensional grid map;
for each target grid cell of the object two-dimensional grid map through which the detection ray passes, querying whether an obstacle object exists in the sub-region corresponding to that grid cell;
and determining, according to the results of whether an obstacle object exists in the sub-region corresponding to each target grid cell, whether an obstacle object exists between the current position and the query position of each virtual pedestrian.
In one possible implementation, the perception data comprises at least one of a ground height, the virtual pedestrians within the perception range, boundaries of static objects, and interactable static objects, wherein the virtual pedestrians within the perception range are at most a preset number of nearby pedestrians.
In a second aspect of the embodiments of the present disclosure, there is provided a perception-based pedestrian behavior prediction apparatus, the apparatus comprising:
an updating module configured to periodically update, in units of frames, a hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene;
a path planning module configured to obtain, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene, and to plan the movement of each virtual pedestrian according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian;
a rendering module configured to invoke a UE5 engine, render the scene, and render and draw each virtual pedestrian in the scene;
and a behavior prediction module configured to control each virtual pedestrian to move in the scene according to the planned path, and to invoke a blueprint animation in the UE5 engine during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian.
In a third aspect of the disclosed embodiments, there is provided an electronic device, including:
A processor;
A memory for storing processor-executable instructions;
Wherein the processor is configured to execute executable instructions stored in the memory to perform the method of any one of the first aspects.
The invention provides a perception-based pedestrian behavior prediction method and device, and an electronic device. Compared with the prior art, the invention has the following beneficial effects:
A hierarchical grid map describing the crowd and the scene is periodically updated, in units of frames, based on the position of each virtual pedestrian in the scene; according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene is obtained, and the movement of each virtual pedestrian is planned according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian; a UE5 engine is invoked to render the scene and to render and draw each virtual pedestrian in the scene; and each virtual pedestrian is controlled to move in the scene according to the planned path, with a blueprint animation in the UE5 engine invoked during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian. In this way, the behavior of intelligent virtual pedestrian individuals can be accurately predicted, which enriches the game scene; moreover, individual virtual pedestrians can make decisions based on cognition, relatively complex game scenes can be constructed, and the user experience is improved.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a method of perceived-based pedestrian behavior prediction, according to an embodiment of the present disclosure.
Fig. 2 is a flowchart for implementing step S11 in fig. 1, according to an embodiment of the present disclosure.
Fig. 3 is a flowchart for implementing step S111 in fig. 2, according to an embodiment of the present disclosure.
Fig. 4 is a flowchart for implementing step S112 in fig. 2, according to an embodiment of the present disclosure.
Fig. 5 is a flowchart for implementing step S113 in fig. 2, according to an embodiment of the present disclosure.
Fig. 6 is a flowchart for implementing step S12 in fig. 1, according to an embodiment of the present disclosure.
Fig. 7 is a flowchart for implementing step S121 in fig. 6, according to an embodiment of the present disclosure.
Fig. 8 is a block diagram of a perception-based pedestrian behavior prediction device, shown in accordance with an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
To achieve the above object, the present disclosure provides a pedestrian behavior prediction method based on perception, and fig. 1 is a flowchart illustrating a pedestrian behavior prediction method based on perception according to an embodiment. The method comprises the following steps:
in step S11, the hierarchical grid map describing the crowd and the scene is periodically updated in units of frames based on the position of each virtual pedestrian in the scene.
Here, an overall environment data structure module is updated, comprising the data structures that describe the crowd and the scene, namely: (1) a two-dimensional grid map describing static obstacles; (2) a quadtree grid map for path planning; (3) a two-dimensional grid map for querying nearby pedestrians; and (4) descriptions of specific environment objects.
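As a rough, engine-agnostic C++ sketch of how these four components could be grouped into one per-frame data structure (all type and field names are illustrative assumptions, not taken from the patent):

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>
#include <unordered_map>
#include <vector>

// (1) uniform 2-D grid of static-obstacle identifiers
struct ObjectGridCell { std::vector<uint32_t> staticObjectIds; };
// (2) adaptive quadtree node used for path planning
struct QuadTreeNode {
    float minX = 0, minY = 0, size = 0;
    bool passable = true;
    std::unique_ptr<QuadTreeNode> children[4];
};
// (3) uniform 2-D grid of pedestrian identifiers for neighbour queries
struct PedestrianGridCell { std::vector<uint32_t> pedestrianIds; };
// (4) per-object attribute description (see the fuller sketch in step S114 below)
struct ObjectDescription { std::string shape, material, color, interaction; };

struct HierarchicalGridMap {
    float cellSize = 0.25f;                         // ~0.2-0.3 m, see step S111
    int width = 0, height = 0;                      // grid dimensions in cells
    std::vector<ObjectGridCell> objectGrid;
    std::vector<PedestrianGridCell> pedestrianGrid;
    std::unique_ptr<QuadTreeNode> navQuadTree;
    std::unordered_map<uint32_t, ObjectDescription> objectDescriptions;

    // Called once per frame with the latest pedestrian positions (id, x, y).
    void updatePedestrians(const std::vector<std::tuple<uint32_t, float, float>>& peds) {
        for (auto& cell : pedestrianGrid) cell.pedestrianIds.clear();
        for (const auto& [id, x, y] : peds) {
            int cx = static_cast<int>(x / cellSize), cy = static_cast<int>(y / cellSize);
            if (cx >= 0 && cy >= 0 && cx < width && cy < height)
                pedestrianGrid[cy * width + cx].pedestrianIds.push_back(id);
        }
    }
};
```

Keeping the four structures in one container makes the per-frame update a single call site, which matches the idea of updating the whole module once per frame.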
In step S12, according to the hierarchical grid map, the perception data of each virtual pedestrian perceiving the scene is obtained, and the movement of each virtual pedestrian is planned according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, so as to obtain a planned path for each virtual pedestrian.
Here, the virtual pedestrians acquire the data needed to perceive the surrounding environment by querying the environment data structure module, including but not limited to the ground height, other pedestrians within the visual range, the boundaries of surrounding obstacles, interactable environment objects, and the like.
In step S13, a UE5 engine is invoked, the scene is rendered, and each virtual pedestrian is rendered and drawn in the scene.
The method is implemented on the UE5 engine framework. Character animation sequences are invoked mainly by using events to trigger the animation blueprint of the UE5 engine so as to control the character animation, and the real-time renderer built into the UE5 engine is used to render the scene and the characters. The use of the UE5 engine is not a critical aspect of the invention; the technical scheme of the invention can be conveniently migrated to other game engines (such as Unity) or other visual-effect modeling software (such as Houdini). These commercial or open-source engines can serve as alternatives for the character animation and the scene and character rendering portions of the invention.
The virtual pedestrians generate decisions and corresponding action plans from the environment perception data queried in real time and the individual's internal psychological state, including strongly goal-directed long-term plans and temporarily created short-term plans. A memory system is responsible for remembering and switching among the various plans so as to accommodate a highly dynamic urban environment.
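A minimal sketch of such a plan memory, assuming a simple stack-like arrangement in which short-term plans temporarily override the long-term plan (class and method names are hypothetical):

```cpp
#include <deque>
#include <string>
#include <utility>

struct Plan { std::string goal; bool longTerm = false; };

// Short-term plans (e.g. emergency avoidance) are pushed on top of the
// long-term plan and popped when finished, so the agent resumes automatically.
class PlanMemory {
public:
    void setLongTermPlan(Plan p) { longTerm_ = std::move(p); }
    void pushShortTermPlan(Plan p) { shortTerm_.push_front(std::move(p)); }
    void finishShortTermPlan() { if (!shortTerm_.empty()) shortTerm_.pop_front(); }

    // The active plan is the newest unfinished short-term plan, if any,
    // otherwise the long-term plan.
    const Plan& activePlan() const {
        return shortTerm_.empty() ? longTerm_ : shortTerm_.front();
    }

private:
    Plan longTerm_{"reach destination", true};
    std::deque<Plan> shortTerm_;
};
```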
In step S14, each virtual pedestrian is controlled to move in the scene according to the planned path, and in the moving process, a blueprint animation in the UE5 engine is invoked to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result of each virtual pedestrian.
In the embodiment of the disclosure, a behavior module that updates the pedestrian based on the decision module is responsible for executing the specific plan generated by the decision module, including planning how to reach the destination specified by the plan, performing static or dynamic obstacle avoidance against pedestrians and obstacles during movement, executing specific actions for interacting with the environment, and the like.
The present disclosure fully simulates the complexity of an individual virtual pedestrian. Unlike crowd animation that focuses only on visual fluency, the invention develops a comprehensive virtual pedestrian agent model with autonomous perception, decision planning and behavior control, which can simulate the behavior of a single pedestrian in an urban environment. The invention introduces a new cognition and decision model that supports intelligent characters in establishing efficient perception of the surrounding environment, makes short-term decisions (such as emergency obstacle avoidance) and long-term decisions (such as destination path planning and behavior selection) based on the perceived information, maintains a memory system for those decisions, and drives the various behaviors of the virtual pedestrian agent based on the decisions, so as to achieve highly realistic interaction among virtual pedestrians, the environment and other pedestrians.
The invention uses an efficient environment representation and query. By using a hierarchical data structure that efficiently models and processes complex urban virtual environments, the invention can efficiently support pedestrians' perception queries, thereby driving their behavioral responses and supporting efficient action planning in both local and global contexts.
According to the above technical scheme, a hierarchical grid map describing the crowd and the scene is periodically updated, in units of frames, based on the position of each virtual pedestrian in the scene; according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene is obtained, and the movement of each virtual pedestrian is planned according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian; a UE5 engine is invoked to render the scene and to render and draw each virtual pedestrian in the scene; and each virtual pedestrian is controlled to move in the scene according to the planned path, with a blueprint animation in the UE5 engine invoked during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian. The behavior of intelligent virtual pedestrian individuals can thus be accurately predicted, enriching the game scene; moreover, individual virtual pedestrians can make decisions based on cognition, relatively complex game scenes can be constructed, and the user experience is improved.
In one possible implementation manner, referring to fig. 2, in step S11, based on the location of each virtual pedestrian in the scene, a hierarchical grid map describing the crowd and the scene is periodically updated in units of frames, including:
In step S111, an object two-dimensional grid map describing a static object in a scene is periodically constructed in units of frames based on the position of each virtual pedestrian in the scene.
In the embodiment of the disclosure, the 3-dimensional city model can be efficiently mapped onto a 2-dimensional horizontal plane. A uniformly divided two-dimensional grid is overlaid on this plane; each grid cell corresponds to a sub-area and stores the identifiers of all environment objects occupying that small area. To ensure high precision of the two-dimensional grid map, the grid cell size is usually small, typically 0.2 to 0.3 meters.
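For illustration, a sketch of how a projected object footprint (approximated here by its axis-aligned bounding box on the ground plane) could be written into such a uniform grid; the cell size, type names and the bounding-box simplification are assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct ObjectGrid {
    float cellSize = 0.25f;                    // ~0.2-0.3 m per cell
    int width, height;
    std::vector<std::vector<uint32_t>> cells;  // per-cell list of object identifiers

    ObjectGrid(int w, int h)
        : width(w), height(h), cells(static_cast<std::size_t>(w) * h) {}

    // Mark every cell overlapped by the object's 2-D bounding box as occupied by it.
    void insertObject(uint32_t objectId, float minX, float minY, float maxX, float maxY) {
        int x0 = std::max(0, (int)std::floor(minX / cellSize));
        int y0 = std::max(0, (int)std::floor(minY / cellSize));
        int x1 = std::min(width - 1,  (int)std::floor(maxX / cellSize));
        int y1 = std::min(height - 1, (int)std::floor(maxY / cellSize));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                cells[static_cast<std::size_t>(y) * width + x].push_back(objectId);
    }
};
```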
A virtual pedestrian's perception of static environmental obstacles can be described as emitting a series of query rays over the sector-shaped visible area in front of it; the ray length reflects the desired perception range and the ray density reflects the desired perception acuity. The algorithm examines the grid cells covered by each query ray and queries the environment object information associated with them.
The time spent on an obstacle perception query grows linearly with the density of grid cells and is independent of the number of environment objects. This avoids long environment perception query times in large, complex virtual environments containing a large number of pedestrians.
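A sketch of how the query rays for the forward sector might be generated; the grid traversal of each individual ray is sketched later, after step S1214. The angle convention and names are assumptions:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Ray { float ox, oy, dx, dy, length; };

// Spread `rayCount` rays evenly across a sector of `fovRad` radians centred on
// the pedestrian's heading; `range` is the desired perception distance.
std::vector<Ray> makeSectorRays(float px, float py, float headingRad,
                                float fovRad, float range, int rayCount) {
    std::vector<Ray> rays;
    rays.reserve(static_cast<std::size_t>(rayCount));
    for (int i = 0; i < rayCount; ++i) {
        float t = (rayCount == 1) ? 0.5f : (float)i / (float)(rayCount - 1);
        float a = headingRad - fovRad * 0.5f + fovRad * t;
        rays.push_back({px, py, std::cos(a), std::sin(a), range});
    }
    return rays;
}
```

Each ray is then rasterized into the object grid and the objects stored in the visited cells are collected, so the cost depends on the number of cells crossed rather than on the total number of objects.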
In step S112, a quadtree grid map for path planning is constructed;
A quadtree is a special tree data structure in which each branch node has four child nodes, making it suitable for partitioning two-dimensional space. Using a quadtree to describe a city or environment grid map provides an efficient, dynamic method for adaptively partitioning space, and thus an efficient tool for virtual pedestrian path planning.
In a specific implementation of the invention, each node of the quadtree represents a particular spatial region. When an obstacle or environment object exists in a region, the node is further divided into four child nodes representing the four sub-regions of that region. This dynamic partitioning and merging ensures that the quadtree is subdivided only where needed, saving storage and computing resources.
To increase the efficiency of the path planning algorithm for virtual pedestrians, not only the obstacle information of each region but also the reachability and trafficability data of the region can be stored. When a virtual pedestrian needs to plan a path, the algorithm first queries the quadtree according to the pedestrian's start and goal positions and quickly finds an efficient, safe path. Furthermore, through the adaptive partitioning of the quadtree, the algorithm can switch flexibly between large and small regions, ensuring fast movement across large regions (typically wide roads) and precise obstacle avoidance in small regions (typically narrow spaces).
In this way, the method can dynamically adapt to changes in the environment. Especially in large urban scenes, the openness of different areas varies greatly; if a uniformly divided two-dimensional grid were used throughout for path finding, the storage space and the computational complexity of the path planning algorithm would increase greatly, whereas the adaptability of the quadtree data structure provides good support for fast path finding in open areas.
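A compact sketch of the adaptive subdivision and of the point query a planner would use to locate the regions containing the start and goal positions; the field names and the subdivision policy are assumptions:

```cpp
#include <array>
#include <memory>

struct QuadNode {
    float x, y, size;            // lower-left corner and edge length of the region
    bool occupied = false;       // a static object overlaps this region
    bool reachable = true;       // reachability / trafficability data for planning
    std::array<std::unique_ptr<QuadNode>, 4> children;

    QuadNode(float x_, float y_, float size_) : x(x_), y(y_), size(size_) {}
    bool isLeaf() const { return children[0] == nullptr; }

    // Split into four equal sub-regions; called only when the region is occupied
    // and still larger than the minimum cell size.
    void subdivide() {
        float h = size * 0.5f;
        children[0] = std::make_unique<QuadNode>(x,     y,     h);
        children[1] = std::make_unique<QuadNode>(x + h, y,     h);
        children[2] = std::make_unique<QuadNode>(x,     y + h, h);
        children[3] = std::make_unique<QuadNode>(x + h, y + h, h);
    }
};

// Descend to the leaf region containing point (px, py); a path planner queries
// the start and goal positions this way before searching neighbouring regions.
const QuadNode* locate(const QuadNode* node, float px, float py) {
    while (node && !node->isLeaf()) {
        float h = node->size * 0.5f;
        int idx = (px >= node->x + h ? 1 : 0) + (py >= node->y + h ? 2 : 0);
        node = node->children[idx].get();
    }
    return node;
}
```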
In step S113, a two-dimensional grid map for querying pedestrians adjacent to the pedestrian is constructed according to the object two-dimensional grid map;
In a complex virtual environment, the behavior and decisions of one pedestrian are often related to the other pedestrians around it. To query and process such relationships quickly and efficiently, a two-dimensional grid map system is specifically designed for querying nearby pedestrians. Its core idea is to overlay another evenly distributed grid on the two-dimensional plane of the virtual environment, where each grid cell can store and update the pedestrian information within it. As each pedestrian moves, its position is recorded in real time into the corresponding grid cell. When the neighbors of a certain pedestrian need to be queried, only the grid cell containing that pedestrian and its adjacent cells are examined, so the position and state information of surrounding pedestrians is obtained quickly.
This greatly reduces the time complexity of the query operation, since only a limited number of grid cells need to be examined, making the approach particularly suitable for high-density pedestrian scenes. The grid cell size can be adjusted according to the precision requirement of the query: larger cells are suitable for fast queries, while smaller cells provide more precise results.
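A sketch of the neighbour lookup: only the cell containing the query position and its eight surrounding cells are scanned. The cell size and names are assumptions:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct PedestrianGrid {
    float cellSize = 2.0f;                     // coarser than the object grid; tunable
    int width = 0, height = 0;
    std::vector<std::vector<uint32_t>> cells;  // pedestrian ids stored per cell

    // Gather pedestrians in the 3x3 block of cells around (px, py).
    std::vector<uint32_t> queryNearby(float px, float py) const {
        std::vector<uint32_t> result;
        int cx = (int)std::floor(px / cellSize);
        int cy = (int)std::floor(py / cellSize);
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int x = cx + dx, y = cy + dy;
                if (x < 0 || y < 0 || x >= width || y >= height) continue;
                const auto& cell = cells[static_cast<std::size_t>(y) * width + x];
                result.insert(result.end(), cell.begin(), cell.end());
            }
        return result;
    }
};
```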
In step S114, attribute descriptions are added for the static objects in the scene, and the hierarchical grid map describing the crowd and the scene is updated according to the attribute descriptions, the object two-dimensional grid map, the quadtree grid map and the pedestrian two-dimensional grid map.
Wherein the attribute description includes at least one of location, size, shape, material, interaction attribute, and color.
In the embodiment of the disclosure, to achieve rich interaction between pedestrians and the environment in a virtual environment, an accurate description of environment objects is particularly important. Together with the data structures above, a comprehensive environment object description system provides a specific description of each object, ensuring that virtual pedestrians can accurately recognize environment objects, interact with them and make corresponding decisions. Each object has a set of basic attributes including its position, size, shape, material and color; these attributes determine the appearance and physical characteristics of the object in the virtual environment and are the basis for pedestrian perception and interaction. In addition to the basic attributes, each object has its specific interaction attributes, e.g. a door can be opened and closed and a chair can be sat on. These functional descriptions make pedestrian-object interaction possible.
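A minimal sketch of such an object-description record, combining basic attributes with interaction attributes; the specific fields and action strings are illustrative assumptions:

```cpp
#include <string>
#include <vector>

struct EnvironmentObject {
    // Basic attributes: appearance and physical characteristics, the basis of perception.
    float posX = 0, posY = 0, posZ = 0;
    float sizeX = 0, sizeY = 0, sizeZ = 0;
    std::string shape;     // e.g. "box", "cylinder"
    std::string material;  // e.g. "wood", "metal"
    std::string color;

    // Interaction attributes: what a pedestrian can do with the object,
    // e.g. {"open", "close"} for a door, {"sit"} for a chair.
    std::vector<std::string> interactions;

    bool supports(const std::string& action) const {
        for (const auto& a : interactions)
            if (a == action) return true;
        return false;
    }
};
```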
According to the above technical scheme, the different maps have different levels and purposes, making the overall approach more efficient when processing complex virtual environments.
The hierarchical map is composed of a two-dimensional grid map describing static obstacles, a quadtree grid map used for path planning, and a two-dimensional grid map used for querying nearby pedestrians. This allows the overall environment data structure module to efficiently model and process complex urban virtual environments, and makes the static obstacle perception, nearby pedestrian perception and global navigation sub-modules more efficient, which is a great advantage when simulating hundreds or thousands of pedestrians in real time.
In one possible implementation manner, referring to fig. 3, in step S111, based on the location of each virtual pedestrian in the scene, an object two-dimensional grid map describing the static object in the scene is periodically constructed in units of frames, including:
periodically mapping the urban 3-dimensional model in the scene onto a 2-dimensional horizontal plane in units of frames in step S1111;
In step S1112, a two-dimensional grid map is generated by covering the 2-dimensional horizontal plane with uniformly divided two-dimensional grids, wherein each grid unit in the two-dimensional grid corresponds to a sub-region;
In step S1113, an identifier of a static object occupying the sub-region is stored in the two-dimensional grid;
In step S1114, a description of the static objects within each virtual pedestrian's orientation perception range is determined according to the position of each virtual pedestrian in the scene, and the description is stored in the corresponding sub-region, so as to obtain the object two-dimensional grid map.
In one possible implementation, referring to fig. 4, in step S112, the building a quadtree grid map for path planning includes:
In step S1121, grid space regions are periodically divided, in units of frames, according to the 3-dimensional city model in the scene, wherein each node of the quadtree represents a space region, and when a static object exists in a space region, the corresponding node of the quadtree is further divided into a plurality of child nodes;
In step S1122, information on the static objects occupying each grid space region, as well as the reachability and trafficability of the grid space regions, is determined and saved, and a quadtree grid map for path planning is generated.
In one possible implementation, referring to fig. 5, in step S113, the constructing, according to the object two-dimensional grid map, the pedestrian two-dimensional grid map for querying nearby pedestrians comprises:
In step S1131, a uniformly distributed cell grid is overlaid on the two-dimensional horizontal plane of the scene, wherein each grid cell stores pedestrian information of the virtual pedestrians within it;
In step S1132, as each virtual pedestrian moves, its position is periodically recorded, in units of frames and according to the static objects in the object two-dimensional grid map, into the corresponding grid cell, so as to construct and generate the pedestrian two-dimensional grid map for querying nearby pedestrians.
In one possible implementation, referring to fig. 6, in step S12, the obtaining, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene comprises:
In step S121, whether an obstacle object exists between the current position and the query position of each virtual pedestrian is queried and determined according to the hierarchical grid map;
In step S122, a fan-shaped region within a preset radius in the direction the virtual pedestrian is facing is determined as the perception range;
In step S123, the perception data of each virtual pedestrian perceiving the scene is determined within the perception range according to whether an obstacle object exists.
In one possible implementation manner, referring to fig. 7, in step S121, the querying and determining, according to the hierarchical grid map, whether an obstacle object exists between the current location and the queried location of each virtual pedestrian includes:
In step S1211, while each of the virtual pedestrians is moving, a detection ray is generated toward a query location based on the current location of the virtual pedestrian;
Rasterizing the detection ray into the object two-dimensional grid map in step S1212;
In step S1213, for each target grid cell of the object two-dimensional grid map through which the detection ray passes, whether an obstacle object exists in the sub-region corresponding to that grid cell is queried;
The obstacle object may be, for example, a static object in the game scene, a stationary NPC, or a stationary virtual pedestrian.
In step S1214, it is determined whether an obstacle exists between the current position and the query position of each virtual pedestrian according to the result of whether an obstacle exists in the sub-region corresponding to each target two-dimensional grid.
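A sketch of steps S1211-S1214, using simple sub-cell sampling along the detection ray rather than an exact grid-traversal algorithm (a conservative approximation; names and the sampling step are assumptions):

```cpp
#include <algorithm>
#include <climits>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct ObjectGridView {
    float cellSize = 0.25f;
    int width = 0, height = 0;
    std::vector<std::vector<uint32_t>> cells;   // static-object ids per cell

    bool cellBlocked(int x, int y) const {
        if (x < 0 || y < 0 || x >= width || y >= height) return false;
        return !cells[static_cast<std::size_t>(y) * width + x].empty();
    }
};

// Sample the segment from (x0, y0) to (x1, y1) at half-cell intervals and query
// the grid cell under each sample; report whether any visited cell holds an obstacle.
bool obstacleBetween(const ObjectGridView& g, float x0, float y0, float x1, float y1) {
    float dx = x1 - x0, dy = y1 - y0;
    float dist = std::sqrt(dx * dx + dy * dy);
    int steps = std::max(1, (int)std::ceil(dist / (g.cellSize * 0.5f)));
    int lastX = INT_MIN, lastY = INT_MIN;
    for (int i = 0; i <= steps; ++i) {
        float t = (float)i / (float)steps;
        int cx = (int)std::floor((x0 + dx * t) / g.cellSize);
        int cy = (int)std::floor((y0 + dy * t) / g.cellSize);
        if (cx == lastX && cy == lastY) continue;   // same cell as the previous sample
        lastX = cx; lastY = cy;
        if (g.cellBlocked(cx, cy)) return true;     // obstacle between the two positions
    }
    return false;
}
```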
In one possible implementation, the perception data comprises at least one of a ground height, the virtual pedestrians within the perception range, boundaries of static objects, and interactable static objects, wherein the virtual pedestrians within the perception range are at most a preset number of nearby pedestrians.
In the embodiment of the disclosure, the other pedestrians currently located in a pedestrian's own perception area are queried. The perception area of a virtual pedestrian is defined as a sector in its facing direction, with a sector angle of 120 degrees and a radius equal to the predefined maximum perception range of the pedestrian. The return value is an array containing pointers to all eligible nearby pedestrians. Once a preset number (currently set to 16) of nearby pedestrians has been perceived, the query terminates early, because at any particular time the attention of a virtual pedestrian is limited: people usually pay attention only to a limited number of others, typically those closest to them. The nearby pedestrian query sub-module is used for subsequent cognitive control and behavior control.
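A sketch of this query under the stated parameters (120-degree sector, predefined radius, early termination at 16 neighbours); the candidate list is assumed to come from the nearby-pedestrian grid query, ideally ordered by distance, and the names are hypothetical:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Agent { uint32_t id; float x, y; };

std::vector<const Agent*> perceiveNeighbours(const Agent& self, float headingRad,
                                             float radius,
                                             const std::vector<Agent>& candidates,
                                             std::size_t maxCount = 16) {
    const float kPi = 3.14159265358979f;
    const float halfFov = 60.0f * kPi / 180.0f;       // half of the 120-degree sector
    std::vector<const Agent*> seen;
    for (const Agent& other : candidates) {
        if (seen.size() >= maxCount) break;           // limited attention: stop early
        if (other.id == self.id) continue;
        float dx = other.x - self.x, dy = other.y - self.y;
        float dist = std::sqrt(dx * dx + dy * dy);
        if (dist < 1e-6f || dist > radius) continue;  // outside the perception radius
        // Signed angle between the facing direction and the direction to `other`.
        float diff = std::remainder(std::atan2(dy, dx) - headingRad, 2.0f * kPi);
        if (std::fabs(diff) <= halfFov) seen.push_back(&other);   // inside the sector
    }
    return seen;
}
```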
According to the above technical scheme, the environment can be perceived efficiently, and the surrounding information of each virtual pedestrian can be acquired at low time cost. Even when the urban scene is highly complex and contains a large number of pedestrians, the invention can still process the perception of virtual pedestrians in batches at a high running rate and provide input for the subsequent cognition control and decision modules.
The embodiment of the disclosure further provides a pedestrian behavior prediction device based on perception, referring to fig. 8, the device includes:
an updating module 810 configured to periodically update, in units of frames, a hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene;
a path planning module 820 configured to obtain, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene, and to plan the movement of each virtual pedestrian according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian;
a rendering module 830 configured to invoke a UE5 engine, render the scene, and render and draw each virtual pedestrian in the scene;
and a behavior prediction module 840 configured to control each virtual pedestrian to move in the scene according to the planned path, and to invoke a blueprint animation in the UE5 engine during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian.
In one possible implementation, the updating module 810 is configured to:
periodically construct, in units of frames, an object two-dimensional grid map describing static objects in the scene based on the position of each virtual pedestrian in the scene;
construct a quadtree grid map for path planning;
construct, according to the object two-dimensional grid map, a pedestrian two-dimensional grid map for querying nearby pedestrians;
and add attribute descriptions for the static objects in the scene, and update, according to the attribute descriptions, the object two-dimensional grid map, the quadtree grid map and the pedestrian two-dimensional grid map, the hierarchical grid map describing the crowd and the scene, wherein an attribute description comprises at least one of position, size, shape, material, interaction attribute and color.
In one possible implementation, the updating module 810 is configured to:
periodically map, in units of frames, the 3-dimensional city model in the scene onto a 2-dimensional horizontal plane;
overlay a uniformly divided two-dimensional grid on the 2-dimensional horizontal plane to generate a two-dimensional grid map, wherein each grid cell in the two-dimensional grid corresponds to a sub-region;
store, in the two-dimensional grid, identifiers of the static objects occupying each sub-region;
and determine, according to the position of each virtual pedestrian in the scene, a description of the static objects within each virtual pedestrian's orientation perception range, and store the description in the corresponding sub-region, to obtain the object two-dimensional grid map.
In one possible implementation, the updating module 810 is configured to:
periodically divide, in units of frames, grid space regions according to the 3-dimensional city model in the scene, wherein each node of the quadtree represents a space region, and when a static object exists in a space region, the corresponding node of the quadtree is further divided into a plurality of child nodes;
and determine and save information on the static objects occupying each grid space region, as well as the reachability and trafficability of the grid space regions, to generate a quadtree grid map for path planning.
In one possible implementation, the updating module 810 is configured to:
overlay a uniformly distributed cell grid on the two-dimensional horizontal plane of the scene, wherein each grid cell stores pedestrian information of the virtual pedestrians within it;
and, as each virtual pedestrian moves, periodically record, in units of frames and according to the static objects in the object two-dimensional grid map, the position of the virtual pedestrian into the corresponding grid cell, to construct and generate a pedestrian two-dimensional grid map for querying nearby pedestrians.
In one possible implementation, the path planning module 820 is configured to:
query and determine, according to the hierarchical grid map, whether an obstacle object exists between the current position and the query position of each virtual pedestrian;
determine, as the perception range, a fan-shaped region within a preset radius in the direction each virtual pedestrian is facing;
and determine, within the perception range and according to whether an obstacle object exists, the perception data of each virtual pedestrian perceiving the scene.
In one possible implementation, the path planning module 820 is configured to:
generate, while each virtual pedestrian moves, a detection ray from the current position of the virtual pedestrian toward a query position;
rasterize the detection ray into the object two-dimensional grid map;
for each target grid cell of the object two-dimensional grid map through which the detection ray passes, query whether an obstacle object exists in the sub-region corresponding to that grid cell;
and determine, according to the results of whether an obstacle object exists in the sub-region corresponding to each target grid cell, whether an obstacle object exists between the current position and the query position of each virtual pedestrian.
In one possible implementation, the perception data comprises at least one of a ground height, the virtual pedestrians within the perception range, boundaries of static objects, and interactable static objects, wherein the virtual pedestrians within the perception range are at most a preset number of nearby pedestrians.
The embodiment of the disclosure also provides an electronic device, including:
A processor;
A memory for storing processor-executable instructions;
Wherein the processor is configured to execute executable instructions stored in the memory to perform the method of any one of the preceding embodiments.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of the above embodiments, and various changes, modifications, substitutions and alterations can be made to these embodiments within the scope of the technical idea of the present disclosure, which all fall within the scope of protection of the present disclosure.
It should be further noted that, where specific features described in the foregoing embodiments are combined in any suitable manner, they should also be regarded as disclosure of the present disclosure, and various possible combinations are not separately described in order to avoid unnecessary repetition. The technical scope of the present application is not limited to the contents of the specification, and must be determined according to the scope of claims.
Claims (8)
1. A method of pedestrian behavior prediction based on perception, the method comprising:
periodically updating, in units of frames, a hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene;
obtaining, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene, and planning the movement of each virtual pedestrian according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian;
invoking a UE5 engine, rendering the scene, and rendering and drawing each virtual pedestrian in the scene;
and controlling each virtual pedestrian to move in the scene according to the planned path, and invoking a blueprint animation in the UE5 engine during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian;
wherein the periodically updating, in units of frames, the hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene comprises:
periodically constructing, in units of frames, an object two-dimensional grid map describing static objects in the scene based on the position of each virtual pedestrian in the scene;
constructing a quadtree grid map for path planning;
constructing, according to the object two-dimensional grid map, a pedestrian two-dimensional grid map for querying nearby pedestrians;
and adding attribute descriptions for the static objects in the scene, and updating, according to the attribute descriptions, the object two-dimensional grid map, the quadtree grid map and the pedestrian two-dimensional grid map, the hierarchical grid map describing the crowd and the scene, wherein an attribute description comprises at least one of position, size, shape, material, interaction attribute and color;
wherein the periodically constructing, in units of frames, the object two-dimensional grid map describing the static objects in the scene based on the position of each virtual pedestrian in the scene comprises:
periodically mapping, in units of frames, the 3-dimensional city model in the scene onto a 2-dimensional horizontal plane;
overlaying a uniformly divided two-dimensional grid on the 2-dimensional horizontal plane to generate a two-dimensional grid map, wherein each grid cell in the two-dimensional grid corresponds to a sub-region;
storing, in the two-dimensional grid, identifiers of the static objects occupying each sub-region;
and determining, according to the position of each virtual pedestrian in the scene, a description of the static objects within each virtual pedestrian's orientation perception range, and storing the description in the corresponding sub-region, to obtain the object two-dimensional grid map.
2. The method of claim 1, wherein constructing a quadtree grid map for path planning comprises:
periodically dividing, in units of frames, grid space regions according to the 3-dimensional city model in the scene, wherein each node of the quadtree represents a space region, and when a static object exists in a space region, the corresponding node of the quadtree is further divided into a plurality of child nodes;
and determining and saving information on the static objects occupying each grid space region, as well as the reachability and trafficability of the grid space regions, to generate a quadtree grid map for path planning.
3. The method of claim 1, wherein the constructing, according to the object two-dimensional grid map, the pedestrian two-dimensional grid map for querying nearby pedestrians comprises:
overlaying a uniformly distributed cell grid on the two-dimensional horizontal plane of the scene, wherein each grid cell stores pedestrian information of the virtual pedestrians within it;
and, as each virtual pedestrian moves, periodically recording, in units of frames and according to the static objects in the object two-dimensional grid map, the position of the virtual pedestrian into the corresponding grid cell, to construct and generate a pedestrian two-dimensional grid map for querying nearby pedestrians.
4. The method according to any one of claims 1-3, wherein the obtaining, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene comprises:
querying and determining, according to the hierarchical grid map, whether an obstacle object exists between the current position and the query position of each virtual pedestrian;
determining, as the perception range, a fan-shaped region within a preset radius in the direction each virtual pedestrian is facing;
and determining, within the perception range and according to whether an obstacle object exists, the perception data of each virtual pedestrian perceiving the scene.
5. The method of claim 4, wherein the querying and determining, according to the hierarchical grid map, whether an obstacle object exists between the current position and the query position of each virtual pedestrian comprises:
generating, while each virtual pedestrian moves, a detection ray from the current position of the virtual pedestrian toward a query position;
rasterizing the detection ray into the object two-dimensional grid map;
for each target grid cell of the object two-dimensional grid map through which the detection ray passes, querying whether an obstacle object exists in the sub-region corresponding to that grid cell;
and determining, according to the results of whether an obstacle object exists in the sub-region corresponding to each target grid cell, whether an obstacle object exists between the current position and the query position of each virtual pedestrian.
6. The method of claim 4, wherein the perception data comprises at least one of a ground height, the virtual pedestrians within the perception range, boundaries of static objects, and interactable static objects, wherein the virtual pedestrians within the perception range are at most a preset number of nearby pedestrians.
7. A pedestrian behavior prediction device based on perception, the device comprising:
an updating module configured to periodically update, in units of frames, a hierarchical grid map describing the crowd and the scene based on the position of each virtual pedestrian in the scene;
a path planning module configured to obtain, according to the hierarchical grid map, perception data of each virtual pedestrian perceiving the scene, and to plan the movement of each virtual pedestrian according to the perception data with the goal of reducing crowd collisions and maintaining crowd flow speed, to obtain a planned path for each virtual pedestrian;
a rendering module configured to invoke a UE5 engine, render the scene, and render and draw each virtual pedestrian in the scene;
and a behavior prediction module configured to control each virtual pedestrian to move in the scene according to the planned path, and to invoke a blueprint animation in the UE5 engine during movement to control the animation of each virtual pedestrian, so as to obtain a behavior prediction result for each virtual pedestrian;
wherein the update module is configured to:
periodically construct, in units of frames, an object two-dimensional grid map describing static objects in the scene based on the position of each virtual pedestrian in the scene, including: periodically mapping, in units of frames, the 3-dimensional city model in the scene onto a 2-dimensional horizontal plane; overlaying a uniformly divided two-dimensional grid on the 2-dimensional horizontal plane to generate a two-dimensional grid map, wherein each grid cell in the two-dimensional grid corresponds to a sub-region; storing, in the two-dimensional grid, identifiers of the static objects occupying each sub-region; and determining, according to the position of each virtual pedestrian in the scene, a description of the static objects within each virtual pedestrian's orientation perception range, and storing the description in the corresponding sub-region, to obtain the object two-dimensional grid map;
construct a quadtree grid map for path planning;
construct, according to the object two-dimensional grid map, a pedestrian two-dimensional grid map for querying nearby pedestrians;
and add attribute descriptions for the static objects in the scene, and update, according to the attribute descriptions, the object two-dimensional grid map, the quadtree grid map and the pedestrian two-dimensional grid map, the hierarchical grid map describing the crowd and the scene, wherein an attribute description comprises at least one of position, size, shape, material, interaction attribute and color.
8. An electronic device, comprising:
A processor;
A memory for storing processor-executable instructions;
wherein the processor is configured to execute executable instructions stored in the memory to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311452876.5A CN117224951B (en) | 2023-11-02 | 2023-11-02 | Pedestrian behavior prediction method and device based on perception and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311452876.5A CN117224951B (en) | 2023-11-02 | 2023-11-02 | Pedestrian behavior prediction method and device based on perception and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117224951A CN117224951A (en) | 2023-12-15 |
CN117224951B (en) | 2024-05-28
Family
ID=89091484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311452876.5A Active CN117224951B (en) | 2023-11-02 | 2023-11-02 | Pedestrian behavior prediction method and device based on perception and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117224951B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708581A (en) * | 2011-03-28 | 2012-10-03 | 上海日浦信息技术有限公司 | Virtual crowd motion simulation framework |
CN110772791A (en) * | 2019-11-05 | 2020-02-11 | 网易(杭州)网络有限公司 | Route generation method and device for three-dimensional game scene and storage medium |
CN111773724A (en) * | 2020-07-31 | 2020-10-16 | 网易(杭州)网络有限公司 | Method and device for crossing virtual obstacle |
CN115115773A (en) * | 2022-04-29 | 2022-09-27 | 腾讯科技(深圳)有限公司 | Collision detection method, device, equipment and storage medium |
CN115562474A (en) * | 2022-02-25 | 2023-01-03 | 上海惠存展览展示有限公司 | Virtual environment and real scene fusion display system |
CN116036604A (en) * | 2023-01-28 | 2023-05-02 | 腾讯科技(深圳)有限公司 | Data processing method, device, computer and readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114384920B (en) * | 2022-03-23 | 2022-06-10 | Anhui University | Dynamic obstacle avoidance method based on real-time construction of local grid map |
- 2023-11-02: application CN202311452876.5A filed in CN; granted as patent CN117224951B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708581A (en) * | 2011-03-28 | 2012-10-03 | 上海日浦信息技术有限公司 | Virtual crowd motion simulation framework |
CN110772791A (en) * | 2019-11-05 | 2020-02-11 | 网易(杭州)网络有限公司 | Route generation method and device for three-dimensional game scene and storage medium |
CN111773724A (en) * | 2020-07-31 | 2020-10-16 | 网易(杭州)网络有限公司 | Method and device for crossing virtual obstacle |
CN115562474A (en) * | 2022-02-25 | 2023-01-03 | 上海惠存展览展示有限公司 | Virtual environment and real scene fusion display system |
CN115115773A (en) * | 2022-04-29 | 2022-09-27 | 腾讯科技(深圳)有限公司 | Collision detection method, device, equipment and storage medium |
CN116036604A (en) * | 2023-01-28 | 2023-05-02 | 腾讯科技(深圳)有限公司 | Data processing method, device, computer and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN117224951A (en) | 2023-12-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||