CN113457122A - User image drawing method based on VR emergency environment - Google Patents

User image drawing method based on VR emergency environment

Info

Publication number
CN113457122A
Authority
CN
China
Prior art keywords: user, game, player, data, escape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110716830.4A
Other languages
Chinese (zh)
Inventor
何高奇
王长波
张嘉文
毛羽霞
周黎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202110716830.4A
Publication of CN113457122A
Legal status: Pending


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20: Input arrangements for video game devices
    • A63F 13/24: Constructional details thereof, e.g. game controllers with detachable joystick handles
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/57: Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F 13/70: Game security or game management aspects
    • A63F 13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F 13/798: Game security or game management aspects involving player-related data for assessing skills or for ranking players, e.g. for generating a hall of fame
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80: Features of such games specially adapted for executing a specific type of game
    • A63F 2300/8082: Virtual reality

Abstract

The invention discloses a user portrait method based on a VR (virtual reality) emergency environment. A VR game system obtains a user's behavior data and Big Five (five-factor) personality information in an emergency environment; the user's game behavior labels are determined from the user's in-game behavior and a preset label lexicon; the Big Five information supplies the labels for training a preset machine learning model; and a user portrait of the emergency scene, expressed as Big Five scores, is generated. Compared with the prior art, the method collects the user's behavior data in the VR game to generate the corresponding preset labels and determine the user's final portrait. It reflects the user's behavior in an emergency well; the system is simple, convenient to use, time- and labor-saving, and low in cost; and it provides a way to measure a user's Big Five personality and generate a user portrait without questionnaire scales.

Description

User image drawing method based on VR emergency environment
Technical Field
The invention relates to the technical field of computer human-computer interaction, and in particular to a user portrait method based on a VR emergency environment.
Background
Personality is described by statements about behavior patterns that are stable over time and across situations. Because personality is expressed in behavior, knowing a person's personality makes it possible to predict, to some extent, their work preferences, habits, and style. In an emergency evacuation setting, a user who learns their personality traits before a disaster can combine those traits with the situation at hand to choose a better way to escape when a disaster actually strikes. In psychology, personality is usually measured with questionnaires, which are time-consuming and do not reflect well how a user behaves in an emergency.
In the computer field, personality is generally measured from big data or from video. Big-data approaches collect personal information from a user's social network accounts, such as avatars, liked pictures, shopping history, and browsing history, and model and predict the user's personality traits from the collected information; this technique is commonly applied in recommendation systems. Video-based approaches capture facial features from recorded video and predict personality from them; this technique is often used in talent recruitment to give an enterprise more personal information about a candidate.
The prior art models the collected information to predict a user's personality traits, but none of these applications measures personality from the behavioral angle in an emergency evacuation scene.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to provide a user portrait method based on a VR emergency environment. Immersive VR equipment creates an emergency scene; behavior data collected in that scene generate corresponding preset labels and indices; and a machine learning model built by the system predicts personality to form a user portrait of the emergency scene. The method reflects the user's behavior in an emergency well; the system is simple, convenient to use, time- and labor-saving, and low in cost. It overcomes the problems that questionnaire-based personality measurement is time-consuming and labor-intensive and poorly reflects behavior in emergencies, and it provides a way to measure a user's Big Five personality and generate a user portrait without questionnaire scales.
The purpose of the invention is realized as follows: a user portrait method based on a VR emergency environment uses a VR game system to obtain the user's behavior data and Big Five personality information in an emergency environment, determines the user's game behavior labels from the user's in-game behavior and a preset label lexicon, obtains the model's training labels from the Big Five information, trains a preset machine learning model, and generates a user portrait of the emergency scene expressed as Big Five scores. The VR game system consists of a game terminal, an interaction module, a data collection module, and a prediction module. The game terminal is the display module that renders the user's interactive behavior and the scene and presents the user portrait to the user visually. The interaction module provides the interaction between the user and game objects, including the player's walking control and the player's manipulation of objects in the game. The data collection module captures walking data, time data, manipulated-object data, and escape modes. The prediction module generates user features from the collected game behavior data, uses them as training data to predict the user portrait, and sends the portrait to the game terminal for display.
The behavior data comprise the user's trajectory in the VR game scene, the user's interactions with objects in the scene, and the escape mode the user selects.
The user behavior data characterize the user's attributes; the user's attribute features are determined from the behavior data.
The prediction model's training labels are obtained from the user's personality information, and the preset machine learning model is trained on the user's attributes and these labels.
The emergency scene is a fire in a teaching building; the user must operate the VR handle to escape.
Determining the user's attributes from the user's behavior data specifically includes: collecting, in the emergency evacuation scene, the user's escape time, walking path, interactive operations with other objects, and selected escape mode.
The escape time is specifically divided into slow, medium, and fast escape according to the actual time taken to escape.
The walking path is specifically divided into strongly purposeful escape and weakly purposeful escape according to repeated points on the escape route.
The interactive operation with other objects is specifically divided into using and not using the fire extinguisher, according to the user's interaction with the game objects.
The selected escape mode is specifically divided, according to how the user escapes, into jumping out of a window, extinguishing the fire with a fire extinguisher, taking the elevator, leaving through the safety exit, and leaving through the common corridor.
Obtaining the prediction model's training labels from the user's personality information specifically includes: administering the Big Five questionnaire to the subjects and taking the score on each dimension as the label of the data set.
The preset machine learning model is a decision tree model; the features determined by the user's attributes serve as the training-set input, and the user's Big Five scores serve as the training-set labels.
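As an illustration, the four behavioral attributes described above can be encoded as an integer feature vector for the model. This is a minimal sketch; the category names and their orderings are assumptions, since the patent fixes only the attribute set:

```python
# Hypothetical encoding of the four behavioral attributes into integer
# feature codes. Category names/orderings are illustrative assumptions.

ESCAPE_TIME = ["slow", "medium", "fast"]             # 3 categories
WALKING_PATH = ["weak_purpose", "strong_purpose"]    # 2 categories
EXTINGUISHER = ["not_used", "used"]                  # 2 categories
ESCAPE_MODE = ["jump_window", "extinguish_fire",     # 5 categories
               "elevator", "safety_exit", "common_channel"]

def encode_features(time_cat, path_cat, ext_cat, mode_cat):
    """Map one user's four categorical attributes to integer codes."""
    return [ESCAPE_TIME.index(time_cat),
            WALKING_PATH.index(path_cat),
            EXTINGUISHER.index(ext_cat),
            ESCAPE_MODE.index(mode_cat)]
```

Each user session then yields one such four-integer row, paired with the questionnaire scores as labels.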
Compared with the prior art, the method collects the user's behavior data in the VR game to generate the corresponding preset labels and determine the user's final portrait. It reflects the user's behavior in an emergency well; the system is simple, convenient to use, time- and labor-saving, and low in cost; and it provides a way to measure a user's Big Five personality and generate a user portrait without questionnaire scales.
Drawings
Fig. 1 is a system framework diagram for acquiring a user image in an emergency evacuation scene according to embodiment 1;
fig. 2 is a system framework diagram for obtaining a user image in an emergency evacuation scene according to embodiment 2.
Detailed Description
The invention is described in further detail through the following specific embodiments:
example 1
Referring to fig. 1, the VR-based system for acquiring a user portrait in an emergency evacuation scene comprises a game function system and a game terminal. The game function system includes an interaction module for the user's interaction with game objects, a data collection module for the game data generated while the user plays, and a prediction module that finally generates the user portrait. The emergency evacuation scene is developed with Unity 2019.3.15f1, and the interaction, data collection, and prediction modules are written as C# scripts. The interaction module controls the player's walking direction, walking speed, and manipulation of objects. The scripts controlling walking direction and speed are mounted on the player object: the walking direction is obtained from the Axis value of the VR handle's touch pad, and the walking-speed script checks whether the player presses the A key. Pressing A makes the player walk fast; otherwise the player walks slowly, with the slow and fast walking speeds set to 2 m/s and 5 m/s respectively.
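The walking-speed rule is a simple two-way selection; a toy sketch in Python (the actual script is a Unity C# component mounted on the player object):

```python
# Toy sketch (Python, not the Unity C# script) of the stated rule:
# holding the A key selects fast walking, otherwise slow walking.

SLOW_SPEED = 2.0  # m/s, slow walk
FAST_SPEED = 5.0  # m/s, fast walk

def walking_speed(a_key_pressed):
    """Return the walking speed implied by the A-key state."""
    return FAST_SPEED if a_key_pressed else SLOW_SPEED
```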
The object-operation scripts are mounted on the corresponding objects; the interactable objects include doors, fire extinguishers, windows, and the like. The script mounted on a door implements opening and closing the door; the script mounted on a window implements opening and closing the window and jumping out of it to escape; the script mounted on the fire extinguisher lets the player use it to extinguish the fire.
Specifically, an empty object is mounted on the door body, positioned on the door's hinge axis, with a box-shaped bounding box attached. When the player enters the bounding box, a collision is detected and the system checks whether the player's handle presses the Trigger key; when the Trigger key is pressed, the empty object rotates 90 degrees, driving the door to rotate and thereby opening or closing it.
An empty object is likewise mounted on the window, positioned on the window's axis, with a box-shaped bounding box attached. When the player enters the bounding box, a collision is detected and the system checks whether the player's handle presses the Trigger key; pressing Trigger rotates the empty object 90 degrees to open or close the window. When the player is detected pressing the B key, the jump-window escape is triggered.
Two empty objects are mounted on the fire extinguisher, one at its hand-held position and one at its nozzle, with a box-shaped bounding box attached at the hand-held position. When the player's handle enters the bounding box, a collision is detected and the system checks whether the right-hand handle presses the Grip key. When Grip is pressed, the extinguisher's hand-held position is bound to the player's hand; pressing Grip again puts the extinguisher down, implementing pick-up and put-down. While the extinguisher is held, the system checks whether the right-hand handle presses the Trigger key; when Trigger is pressed, the nozzle emits white particles representing the extinguisher's spray.
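The pick-up/put-down and spray behavior described here amounts to a small state machine. A minimal sketch in Python (the actual implementation is a Unity C# script; the class and method names are hypothetical): Grip toggles the held state, and Trigger sprays only while the extinguisher is held.

```python
# Hypothetical state-machine sketch of the extinguisher interaction.
# Grip toggles picked-up/put-down; Trigger emits spray only while held.

class ExtinguisherState:
    def __init__(self):
        self.held = False  # not picked up initially

    def press_grip(self):
        # Grip toggles between held and put down.
        self.held = not self.held

    def press_trigger(self):
        # White spray particles are emitted only while held.
        return "spray" if self.held else None
```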
Further, when the extinguisher's spray touches the flame, the flame's collision is detected; if the colliding object is the white particles, the flame's particle count is reduced, producing the effect of the fire gradually going out.
The data collection module captures the user's walking data, escape-time data, manipulated-object data, and escape mode. Specifically, the walking-data capture is mounted on observation nodes placed at every corridor corner and at the corridor center of each floor. When the user triggers an observation node, the script mounted on that node writes the triggered track point to a local csv file.
The escape-time script is mounted on the game manager, an empty object that controls the running of the whole game and records the span from the user entering the game to the end of the escape. When the game-over event is triggered, the end time is recorded and written to the locally stored csv file.
The script that obtains manipulated-object data is mounted on each interactable object. Its value is a Boolean: when the user uses the object, the flag is set to true, and when the game ends the value is written to the csv file and stored locally.
The interactable objects include doors, windows, elevator buttons, and fire extinguishers. The escape modes include jumping out of a window, extinguishing the fire with a fire extinguisher, taking the elevator, leaving through the safety exit, leaving through the common corridor, and the like. When the user triggers the corresponding escape mode, the script or observation node on the corresponding interactive object records it and writes the csv file locally.
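All of this collection reduces to appending timestamped event rows to local csv files. A minimal Python sketch; the file name and column layout are assumptions, since the patent specifies only that records are written to locally stored csv files:

```python
# Illustrative sketch of the data-collection idea: each triggered
# observation node, object interaction, or escape event appends one
# timestamped row to a local csv file. Column layout is an assumption.
import csv
import time

def log_event(csv_path, event_type, detail):
    """Append one event row: [unix time, event type, detail]."""
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([time.time(), event_type, detail])
```

A session file might then contain rows such as `("node", "corridor_corner_2")` for trajectory points and `("escape", "safety_exit")` for the chosen escape mode.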
The prediction module generates user features and training data. The user features are read from the local csv files and processed into the features of escape time, walking path, interactive operations with other objects, and selected escape mode. Specifically, the escape time is divided into slow, medium, and fast: escaping within 100 s counts as fast, within 150 s as medium, and over 150 s as slow.
The walking path is divided into strongly purposeful and weakly purposeful escape; specifically, when the same observation node is triggered more than 3 times, the escape is defined as weakly purposeful, and otherwise as strongly purposeful.
The interactive operation with other objects is divided into using and not using the fire extinguisher. The selected escape modes include jumping out of a window, extinguishing the fire with a fire extinguisher, taking the elevator, leaving through the safety exit, leaving through the common corridor, and the like; the escape mode is determined from the observation nodes placed at the escape exits and from the game objects the player interacts with.
Further, escape through the safety exit and through the common corridor are distinguished by observation nodes: touching the node nearer the safety exit counts as safety-exit escape, and touching the node nearer the common corridor counts as common-corridor escape. When the white particles emitted by the fire extinguisher touch the flame, the attempt counts as extinguishing the fire and is written to the local csv file; when the player touches the elevator button, elevator escape begins and is written to the local csv file; when the player opens the window and presses the B key, jump-window escape begins and is written to the local csv file.
After all features are parsed from the csv files, they are encoded in order with the numbers 0 to 4 as feature values. The training-data script takes the processed feature values as the training input and the locally stored scores on the five Big Five dimensions as the ground-truth values.
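The feature processing above can be sketched directly from the stated thresholds (100 s and 150 s for escape time; more than 3 triggers of one observation node for a weakly purposeful path). The integer codes below are assumptions consistent with the patent's 0-based numbering:

```python
# Sketch of the feature processing. Thresholds come from the text;
# the specific integer codes are illustrative assumptions.

def escape_time_code(seconds):
    """0 = slow, 1 = medium, 2 = fast."""
    if seconds < 100:
        return 2
    if seconds <= 150:
        return 1
    return 0

def path_code(node_trigger_counts):
    """1 = strongly purposeful, 0 = weakly purposeful escape."""
    weak = max(node_trigger_counts.values()) > 3
    return 0 if weak else 1
```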
Specifically, the trained model consists of five decision tree models; the feature values are input to the five decision trees, which fit the specific score on each Big Five dimension, with scores ranging from 1 to 10.
The training data have four attributes: escape time, walking path, interactive operation with other objects, and selected escape mode, divided into 3, 2, 2, and 5 categories respectively. The data obtained from the game scene are filled in as attribute-category pairs and serve as the input of the five decision tree models; the subject's scores of 1 to 10 on the five Big Five dimensions serve as the label of each respective decision tree.
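Under these definitions, the training step can be sketched with scikit-learn's `DecisionTreeRegressor` standing in for the patent's decision tree model (the patent names no library, and the hyperparameters here are assumptions): one tree per Big Five dimension, four encoded behavior features as input, that dimension's 1-10 questionnaire score as the label.

```python
# Sketch of the per-dimension training setup; scikit-learn and the
# hyperparameters are assumptions, not specified by the patent.
from sklearn.tree import DecisionTreeRegressor

DIMENSIONS = ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]

def train_models(X, y_per_dim):
    """X: feature rows (4 ints each); y_per_dim: dimension -> score list."""
    models = {}
    for dim in DIMENSIONS:
        tree = DecisionTreeRegressor(max_depth=3, random_state=0)
        tree.fit(X, y_per_dim[dim])  # fit this dimension's scores
        models[dim] = tree
    return models
```

At prediction time, each of the five trees is queried with the same feature row, yielding the five scores that make up the portrait.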
The game terminal includes the game scene display module responsible for displaying the game scene. The scene is built with Unity 2019.3.15f1; the emergency evacuation scene is designed as a square, 3-story teaching building, and the emergency is a fire. The building contains fire extinguishers, a safe escape passage, a common corridor, and an elevator. The application runs on a VR terminal device, an HTC VIVE COSMOS; wearing the head-mounted display and holding the handles, the user performs the interactive operations in the game.
Example 2
Referring to fig. 2, this embodiment differs from embodiment 1 in that the game function system extends the prediction module and the game terminal adds a user portrait display module; the rest is essentially the same as embodiment 1 and is not described again. Portrait generation is implemented on top of the five decision tree models trained in embodiment 1, and the generated portrait is represented as the 1-10 prediction scores output by the five decision trees. The user portrait display module finally displays the user's scores on the five Big Five dimensions: when the game ends, it obtains the five dimension scores from the prediction module in the game function system and displays them on the user's VR device.
The above embodiments are intended only to further illustrate the invention, not to limit it; all equivalent implementations of the invention fall within the scope of its claims.

Claims (7)

1. A user portrait method based on a VR (virtual reality) emergency environment, characterized in that a VR game system is used to obtain a user's behavior data and Big Five personality information in the emergency environment; the user's game behavior labels are determined from the user's in-game behavior and a preset label lexicon; training labels for the model are obtained from the Big Five information and a preset machine learning model is trained; and a user portrait of the emergency scene, expressed as Big Five scores, is generated, wherein the VR game system consists of a game terminal, an interaction module, a data collection module and a prediction module; the game terminal is the display module that renders the user's interactive behavior and the scene and presents the user portrait visually; the interaction module provides the interaction between the user and game objects, including the player's walking control and the player's manipulation of objects in the game; the data collection module captures walking data, time data, manipulated-object data and escape modes; and the prediction module generates user features from the collected game behavior data, uses them as training data to predict the user portrait, and sends the portrait to the game terminal for display.
2. The VR-based emergency environment user portrait method of claim 1, wherein the preset machine learning model consists of five decision tree models; after the features determined by the user's attributes are parsed from the csv files, they are numbered in order from 0 to 4 as feature values; and the feature values are input to the five decision tree models as training data, fitting the score on each of the five Big Five dimensions as the ground-truth value, with scores ranging from 1 to 10.
3. The VR-based emergency environment user portrait method of claim 1, wherein the walking data are the game character's walking path in the game scene, composed of a series of three-dimensional space points; the time data capture the time of the game character's current position in the scene and the time of each interactive operation; the manipulated-object data are the names of the objects captured when the game character interacts with interactable objects in the scene; and the escape mode is the mode the player selects when escaping from the fire scene in the game.
4. The method of claim 1, wherein the user game behavior data include the user's walking trajectory and interactions with game objects.
5. The method of claim 1, wherein the user portrait in the display module consists of the score values on the five Big Five dimensions.
6. The VR-based emergency environment user portrait method of claim 1, wherein the interactive operations of the interaction module comprise: opening/closing a door, using a fire extinguisher, opening/closing a window, taking an elevator, and detecting whether the player has selected an escape mode, upon which the game ends.
7. The VR-based emergency environment user portrait method of claim 1, wherein the player's walking control means the player controls the character's walking in the game scene by operating the VR handle controller, and the player's object-manipulation interaction means the player operates interactable game objects in the game scene.
CN202110716830.4A 2021-06-28 2021-06-28 User image drawing method based on VR emergency environment Pending CN113457122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110716830.4A CN113457122A (en) 2021-06-28 2021-06-28 User image drawing method based on VR emergency environment


Publications (1)

Publication Number Publication Date
CN113457122A true CN113457122A (en) 2021-10-01

Family

ID=77873179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110716830.4A Pending CN113457122A (en) 2021-06-28 2021-06-28 User image drawing method based on VR emergency environment

Country Status (1)

Country Link
CN (1) CN113457122A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180001198A1 (en) * 2016-06-30 2018-01-04 Sony Interactive Entertainment America Llc Using HMD Camera Touch Button to Render Images of a User Captured During Game Play
CN107670272A (en) * 2017-09-29 2018-02-09 广州云友网络科技有限公司 Intelligent body-sensing based on VR technologies, sense of touch interactive scene analogy method
CN108345874A (en) * 2018-04-03 2018-07-31 苏州欧孚网络科技股份有限公司 A method of according to video image identification personality characteristics
CN108399575A (en) * 2018-01-24 2018-08-14 大连理工大学 A kind of five-factor model personality prediction technique based on social media text
CN108452521A (en) * 2018-01-18 2018-08-28 安徽三弟电子科技有限责任公司 A kind of games system based on VR virtual realities
CN109766452A (en) * 2019-01-18 2019-05-17 北京工业大学 A kind of character personality analysis method based on social data
CN110096575A (en) * 2019-03-25 2019-08-06 国家计算机网络与信息安全管理中心 Psychological profiling method towards microblog users
CN111309936A (en) * 2019-12-27 2020-06-19 上海大学 Method for constructing portrait of movie user
CN111450534A (en) * 2020-03-31 2020-07-28 腾讯科技(深圳)有限公司 Training method of label prediction model, and label prediction method and device
CN113688624A (en) * 2021-07-26 2021-11-23 北京邮电大学 Personality prediction method and device based on language style


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
何高奇 (He Gaoqi) et al.: "Emergency behavior evaluation system based on a VR fire-escape game" *

Similar Documents

Publication Publication Date Title
Zhu et al. Virtual and augmented reality technologies for emergency management in the built environments: A state-of-the-art review
Cerekovic et al. Rapport with virtual agents: What do human social cues and personality explain?
US8721341B2 (en) Simulated training environments based upon foveated object events
CN102184020B (en) Gestures and gesture modifiers for manipulating a user-interface
US9652032B2 (en) Simulated training environments based upon fixated objects in specified regions
US10503964B1 (en) Method and system for measuring and visualizing user behavior in virtual reality and augmented reality
US20090112538A1 (en) Virtual reality simulations for health care customer management
CN110069707A (en) A kind of artificial intelligence self-adaption interactive tutoring system
Jacobsen et al. Active personalized construction safety training using run-time data collection in physical and virtual reality work environments
Perugia et al. I can see it in your eyes: Gaze as an implicit cue of uncanniness and task performance in repeated interactions with robots
CN105007525A (en) Interactive situation event correlation smart perception method based on application of smart television
Dugdale et al. Emergency fire incident training in a virtual world
Miller et al. Synchrony within triads using virtual reality
JP3948202B2 (en) Evacuation virtual experience system
CN110176044B (en) Information processing method, information processing device, storage medium and computer equipment
CN113457122A (en) User image drawing method based on VR emergency environment
Puel An authoring system for VR-based firefighting commanders training
Sassi et al. Simulation-based virtual reality training for firefighters
Liang et al. Design virtual reality simulation system for epidemic (COVID-19) education to public
Stachoň et al. The possibilities of using virtual environments in research on wayfinding
Datcu et al. Affective computing and augmented reality for car driving simulators
Small et al. Multi-modal annotation of quest games in Second Life
Fu et al. How individuals sense environments during indoor emergency wayfinding: an eye-tracking investigation
Patel et al. Gesture Recognition Using MediaPipe for Online Realtime Gameplay
Tesfazgi Survey on behavioral observation methods in virtual environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211001