CN114578957B - Reinforcement learning-based redirected walking passive haptic technology - Google Patents

Reinforcement learning-based redirected walking passive haptic technology

Info

Publication number
CN114578957B
CN114578957B CN202111003529.5A
Authority
CN
China
Prior art keywords
scene
walking
training user
virtual
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111003529.5A
Other languages
Chinese (zh)
Other versions
CN114578957A (en)
Inventor
汪淼
陈泽寅
李奕君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202111003529.5A priority Critical patent/CN114578957B/en
Publication of CN114578957A publication Critical patent/CN114578957A/en
Application granted granted Critical
Publication of CN114578957B publication Critical patent/CN114578957B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention discloses a redirected walking passive haptic technique based on reinforcement learning, which comprises the following steps: reversing walking paths to generate training data; training a reinforcement learning network by extracting the position and direction information of a training user over a period of time from the real scene and the virtual scene and encoding this information into an N-dimensional vector that serves as the input of the reinforcement learning network, where N is a positive integer; defining three rewards and penalties according to the relation between the training user and the boundary in the real scene and the relation between the training user and the target object in the virtual scene; and having the reinforcement learning network apply three transformations (translation, rotation and curvature) to the virtual scene according to the input information about the training user and the target object and the three rewards and penalties, so that the relative position relationship between the training user and the target object remains consistent across the virtual and real scenes while the training user is kept away from the boundary of the real space. The result can be applied to fields such as virtual reality games and virtual roaming.

Description

Reinforcement learning-based redirected walking passive haptic technology
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a passive haptic method for redirected walking.
Background
With the development of virtual reality (VR), applications such as VR games and virtual roaming have gradually come into public view. Free movement in the virtual world is a significant challenge for room-scale users, because the limited real space tends to prevent the training user from exploring a large virtual space. Introducing a reset operation can alleviate this problem, but such a strategy tends to interrupt the training user's immersive VR experience and can cause dizziness, nausea and other symptoms of cybersickness. In 2005, Razzaque et al. proposed the redirected walking technique, which changes the trajectory of the training user in real space by applying three transformations (translation, rotation, curvature) to the virtual space, so that the real-space and virtual-space trajectories no longer coincide. With this technique, the number of resets can be reduced without the training user perceiving the manipulation.
Meanwhile, Insko et al. showed in 2001 that passive haptics can greatly enhance the training user's experience during virtual exploration: when the training user touches an object in the virtual scene, the real scene should also provide physical feedback to the training user. In 2008, Steinicke et al. proposed dividing the exploration process into obstacle avoidance and passive haptics, designing walking trajectories by enumerating the relative relations between the training user and the target in the virtual and real scenes. In 2020, Thomas et al. proposed a gradient-field-based passive haptic technique for redirected walking that guides the training user by simulating the repulsive force of obstacles and the attractive force of the target. However, none of the above techniques achieves good passive haptics for redirected walking without being perceived by the training user.
Aiming at the shortcomings of existing methods in this field, the invention discloses a redirected walking passive haptic method that guides the training user away from the boundary in the real scene, and thereby reduces resets, through a dedicated dataset generation method and adaptive dynamic penalties. At the same time, the relative relation between the training user and the target object in the virtual and real scenes is controlled to realize passive haptics. Compared with previous passive haptic methods for redirected walking, this method achieves better experimental results on the distance deviation metric for the relative relation between the training user and the target.
Disclosure of Invention
(I) Technical problem to be solved
The technical problem to be solved by the invention is: how to reduce resets in real space while the training user explores a virtual scene, and how to provide real feedback in the real scene when the training user touches an object in the virtual scene.
(II) technical scheme
To solve the above technical problems, the invention provides a redirected walking passive haptic method, characterized by comprising the following steps:
s1: and placing the training user at the position of the target object in the virtual scene, and randomly initializing the walking direction of the training user. And recording a walking path for training the user to reach the boundary from the position of the target object, and obtaining walking path information. And reversing the walking path information to obtain the walking path information from the boundary to the position of the target object as training data. The training user is a virtual training user generated by the computing equipment, and the walking direction and the walking radius of the training user are randomly changed in the walking process that the training user reaches the boundary from the position of the target object.
S2: for each current frame in the walking path information, coordinates of a training user from the current frame to the previous ten frames in the virtual scene, a walking direction in the virtual scene, coordinates in the real scene and a walking direction in the real scene are obtained. The coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene, and the walking direction in the real scene are input to the reinforcement learning network. The reinforcement learning network designs three rewards and punishments according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene, and trains the reinforcement learning network through the three rewards and punishments.
S3: and controlling the reinforcement learning network to perform translation, rotation and curvature transformation on the virtual scene according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene. The three transformations are transformations for keeping the relative position relationship between the training user and the target object in the virtual scene and the real scene while keeping the training user away from the boundary in the real scene.
(III) beneficial effects
The technical scheme has the following advantages: the reinforcement-learning-based redirected walking passive haptic method provided by the invention uses the idea of reinforcement learning and generalizes well. The method is relatively robust across different scenes and dynamically balances obstacle avoidance and passive haptics. It reduces the number of resets in the real scene without being perceived by the training user, while keeping the relative relation between the training user and the target consistent in the virtual and real scenes.
Drawings
FIG. 1 is a flow chart of some embodiments of a redirect walking passive haptic method according to the present disclosure;
FIG. 2 is a schematic diagram of the relative angular deviation penalty of the present disclosure;
FIG. 3 is a comparison of results before and after the scene transformation of the present disclosure.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific embodiments.
As shown in fig. 1, the method comprises the following steps:
s1: and placing the training user at the position of the target object in the virtual scene, and randomly initializing the walking direction of the training user. And recording a walking path for training the user to reach the boundary from the position of the target object, and obtaining walking path information. And reversing the walking path information to obtain the walking path information from the boundary to the position of the target object as training data. The training user is a virtual training user generated by the computing equipment, and the walking direction and the walking radius of the training user are randomly changed in the walking process that the training user reaches the boundary from the position of the target object.
Step S1 is the training data generation method. The training user is initially placed at the position of the target object in the virtual scene, and the walking direction of the training user is randomly initialized within [-180 degrees, 180 degrees] by sampling from a Gaussian distribution with mean 0 and standard deviation 45 degrees; the training user is a virtual training user generated by the computing device. The random numbers are generated with a normal-distribution random-number function from the numpy module, where numpy is an open-source numerical computing extension library for the Python programming language. The training user walks in the virtual scene at a speed of 1.4 m/s and changes walking direction after every random distance of 0.5 m to 3.5 m. The amount of change in walking direction is sampled from a Gaussian distribution with mean 0 and standard deviation 45 degrees, again using a numpy random-number function. The walking radius when the walking direction changes is a random number between 2 meters and 4 meters, also generated with a numpy random-number function. The walking radius is the radius along which the walking direction changes: the direction does not change instantaneously but gradually along an arc, whose radius is the walking radius. For example, when the direction changes by 180 degrees with a walking radius of 1 m, the training user needs to walk 2π × 1 × (180°/360°) = π meters (about 3.14 m) to complete the change of walking direction, where π denotes the circle constant and 180° is the angle subtended by the arc. Walking stops when the training user reaches any boundary, and the walking path of every frame from the position of the target object to the boundary is recorded to obtain the walking path information. The walking path information includes the coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene. The frame rate is 60 frames per second, i.e., each frame lasts 1/60 second and 60 path samples are recorded per second. The walking path information is then reversed to obtain path information from the boundary to the position of the target object as training data. Reversal may be implemented, for example, by pushing the walking path information onto a stack and popping it back out; a stack is a special linear table that only allows insertion and deletion at one end and therefore has first-in, last-out behaviour. For example, if items are pushed in the order [1, 2, 3], they are popped in the order [3, 2, 1].
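As an illustration of this data-generation procedure, the following Python sketch simulates one such random walk under the parameters stated above (1.4 m/s walking speed, direction changes after every 0.5 m to 3.5 m by a Gaussian amount with 45-degree standard deviation, walking radius of 2 m to 4 m, 60 frames per second). The square scene layout, the function and variable names, and the per-frame arc interpolation are assumptions made for illustration, not the patent's own implementation.

```python
import numpy as np

FRAME_RATE = 60    # frames per second
SPEED = 1.4        # walking speed in m/s

def generate_walk(target_pos, half_size, rng=None):
    """Simulate one random walk from the target position to a scene boundary.

    The scene is assumed to be a square of half-width `half_size` centred at
    the origin. Returns one (position, heading) sample per frame.
    """
    rng = rng or np.random.default_rng()
    pos = np.array(target_pos, dtype=float)
    # initial direction sampled from N(0, 45 deg), clipped to [-180, 180]
    heading = np.deg2rad(np.clip(rng.normal(0.0, 45.0), -180.0, 180.0))
    path = [(pos.copy(), heading)]

    def step_forward(p, h):
        return p + SPEED / FRAME_RATE * np.array([np.cos(h), np.sin(h)])

    while np.all(np.abs(pos) < half_size):
        # walk a straight segment of random length 0.5 m - 3.5 m
        for _ in range(int(rng.uniform(0.5, 3.5) / SPEED * FRAME_RATE)):
            pos = step_forward(pos, heading)
            path.append((pos.copy(), heading))
            if np.any(np.abs(pos) >= half_size):
                return path
        # then turn by a Gaussian(0, 45 deg) amount along an arc of radius 2-4 m
        delta = np.deg2rad(rng.normal(0.0, 45.0))
        radius = rng.uniform(2.0, 4.0)
        arc_frames = max(int(abs(delta) * radius / SPEED * FRAME_RATE), 1)
        for _ in range(arc_frames):
            heading += delta / arc_frames
            pos = step_forward(pos, heading)
            path.append((pos.copy(), heading))
            if np.any(np.abs(pos) >= half_size):
                return path
    return path
```

Reversing the resulting path, as described next, then yields the boundary-to-target training samples.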
In some optional implementations of some embodiments, the executing entity may generate the walking path information from the boundary to the position of the target object by using a reverse method. The reverse method stores the path entries in reverse order, from last to first, and adds 180 degrees to the walking direction information in each entry. For example, if the walking path information is { [ (virtual X1, virtual Y1), 45 degrees, (real X1, real Y1), 45 degrees ], [ (virtual X2, virtual Y2), 25 degrees, (real X2, real Y2), 25 degrees ], [ (virtual X3, virtual Y3), 30 degrees, (real X3, real Y3), 30 degrees ] }, the reverse method generates the path information from the boundary to the target object as { [ (virtual X3, virtual Y3), 210 degrees, (real X3, real Y3), 210 degrees ], [ (virtual X2, virtual Y2), 205 degrees, (real X2, real Y2), 205 degrees ], [ (virtual X1, virtual Y1), 225 degrees, (real X1, real Y1), 225 degrees ] }.
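A minimal sketch of this reverse method follows, assuming each path entry is stored as (virtual position, virtual direction, real position, real direction) with directions in degrees, as in the example above; the tuple layout and the modulo-360 wrap are assumptions.

```python
def reverse_path(path):
    """Reverse a recorded walking path: entries are emitted last-to-first and
    180 degrees is added to every walking direction, wrapped into [0, 360)."""
    reversed_entries = []
    for virtual_pos, virtual_dir, real_pos, real_dir in reversed(path):
        reversed_entries.append((virtual_pos, (virtual_dir + 180.0) % 360.0,
                                 real_pos, (real_dir + 180.0) % 360.0))
    return reversed_entries
```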
S2: for each current frame in the walking path information, coordinates of a training user from the current frame to the previous ten frames in the virtual scene, a walking direction in the virtual scene, coordinates in the real scene and a walking direction in the real scene are obtained. The coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene, and the walking direction in the real scene are input to the reinforcement learning network. The reinforcement learning network designs three rewards and punishments according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene, and trains the reinforcement learning network through the three rewards and punishments.
Step S2 is the adaptive dynamic reward-and-penalty design. For each current frame in the walking path information, the training user's coordinates in the virtual scene, walking direction in the virtual scene, coordinates in the real scene and walking direction in the real scene are obtained for the ten frames from the current frame backwards, and these are input into the reinforcement learning network. The walking path information input into the reinforcement learning network each time therefore contains ten frames. For example, let frame t be the current frame, where t is a frame index, S denotes the walking path information of the training user, and S_t denotes the walking path information of the training user at frame t. The walking path information input into the reinforcement learning network is then [S_{t-9}, S_{t-8}, S_{t-7}, ..., S_{t-1}, S_t]. The reinforcement learning network defines three rewards and penalties from the input coordinates and walking directions in the virtual and real scenes, and the network is trained through these rewards and penalties. For each training path in the virtual scene, the training user is initialized at a boundary in the real scene and guided to walk along the virtual scene path. Meanwhile, three rewards and penalties are evaluated at every frame: an obstacle avoidance reward between the training user and the boundary in the real scene, and penalties on the deviation of the relative position between the training user and the target across the virtual and real scenes (detailed below as a distance deviation penalty and a relative angle deviation penalty). These three rewards and penalties dynamically adjust the importance of the decisions made at different positions in the virtual and real scenes, so that the training user achieves the passive haptic effect while staying away from the boundary.
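The exact layout of the N-dimensional input vector is not spelled out above; the sketch below assumes a straightforward flattening of the ten frames [S_{t-9}, ..., S_t], each contributing virtual coordinates, virtual direction, real coordinates and real direction, which gives N = 60. The function name and feature order are assumptions.

```python
import numpy as np

def encode_state(history):
    """Flatten the last ten frames of walking-path information into the input
    vector of the reinforcement learning network.

    `history` holds [S_{t-9}, ..., S_t]; each entry is assumed to be
    (virtual_xy, virtual_dir, real_xy, real_dir).
    """
    assert len(history) == 10
    features = []
    for virtual_xy, virtual_dir, real_xy, real_dir in history:
        features.extend([virtual_xy[0], virtual_xy[1], virtual_dir,
                         real_xy[0], real_xy[1], real_dir])
    return np.asarray(features, dtype=np.float32)  # N = 10 * 6 = 60
```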
In some optional implementations of some embodiments, the three rewards and penalties are three adaptively adjusted rewards and penalties used to train the reinforcement learning network: the obstacle avoidance reward between the training user and the boundary in the real scene, the distance deviation penalty on the relative position of the training user and the target in the virtual and real scenes, and the relative angle deviation penalty. The three rewards and penalties are introduced below.
s21: training obstacle avoidance rewards between users and boundaries in a real scene. Given the position of a user in a real scene and the direction of travel phi p . Determining the value of the obstacle avoidance reward according to an obstacle avoidance reward formula, wherein the obstacle avoidance reward formula is as follows:
wherein R is 1 Indicating obstacle avoidance rewards.Representing a custom function, assuming a square region, emitting a ray from point a in a direction, and the square intersecting point b, the distance from a to b being the value of the function +.>Phi represents an angle scalar. p represents a real scene. Phi (phi) p Representing the walking direction of the training user in the real scene. />Representing the starting edge phi from the current position p Proceeding to the distance between the boundaries. min (,) represents the minimum of the two values in brackets. Pi represents a constant of 3.14.
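The obstacle avoidance reward formula itself is not reproduced in the text above; the sketch below implements only what is described, namely the ray-to-boundary distance function f on a square region, together with one plausible and clearly assumed way in which R_1 could combine the distances ahead of and behind the user using min(·,·) and π.

```python
import numpy as np

def boundary_distance(p, phi, half_size):
    """f(p, phi): distance from point p (inside a square of half-width
    `half_size` centred at the origin) along direction phi to the boundary."""
    p = np.asarray(p, dtype=float)
    d = np.array([np.cos(phi), np.sin(phi)])
    hits = []
    for axis in (0, 1):
        if abs(d[axis]) > 1e-9:
            for edge in (-half_size, half_size):
                t = (edge - p[axis]) / d[axis]
                if t > 0:
                    other = 1 - axis
                    if abs(p[other] + t * d[other]) <= half_size + 1e-9:
                        hits.append(t)
    return min(hits) if hits else 0.0

def obstacle_avoidance_reward(p, phi_p, half_size):
    """R_1, under the assumption that it is the smaller of the boundary
    distances ahead of and behind the user; the patent's exact formula is
    not reproduced here."""
    return min(boundary_distance(p, phi_p, half_size),
               boundary_distance(p, phi_p + np.pi, half_size))
```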
S22: training the distance deviation penalty of the relative position of the user and the target in the virtual scene and the real scene. When the training user reaches the target object in the virtual scene, the training user can also reach the target object in the real scene, and the penalty can reduce the difference between the distances between the user and the target object in two spaces. And determining the value of the distance deviation penalty according to the distance deviation penalty formula. Wherein, the distance deviation punishment formula is as follows:
R_2 = -|d_1 - d_2|

Here R_2 denotes the distance deviation penalty scalar, d_1 denotes the absolute distance between the training user and the target object in the virtual scene, d_2 denotes the absolute distance between the training user and the target object in the real scene, and -|·| denotes the negative of the absolute value.
S23: the relative angular deviation penalty. The relative angular deviation penalty considers the angle between the training user's walking direction and the training user's direction relative to the target object in both virtual and real scenarios. As shown in fig. 2. And determining the value of the angle deviation penalty according to the relative angle deviation penalty formula. Wherein, the above-mentioned relative angle deviation punishment formula is as follows:
R_3 = -|θ_1 - θ_2|

Here R_3 denotes the angle deviation penalty scalar, θ_1 denotes the angle between the training user's walking direction in the virtual scene and the direction of the line connecting the training user's position with the target object, θ_2 denotes the corresponding angle in the real scene, and -|·| denotes the negative of the absolute value.
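A small sketch of the two penalties R_2 and R_3 as defined above; the 2D vector representation, the function names and the angle-wrapping convention are assumptions.

```python
import numpy as np

def distance_deviation_penalty(user_virtual, target_virtual, user_real, target_real):
    """R_2 = -|d_1 - d_2|, with d_1 and d_2 the user-to-target distances in
    the virtual and real scenes."""
    d1 = np.linalg.norm(np.asarray(target_virtual) - np.asarray(user_virtual))
    d2 = np.linalg.norm(np.asarray(target_real) - np.asarray(user_real))
    return -abs(d1 - d2)

def relative_angle_deviation_penalty(user_virtual, dir_virtual, target_virtual,
                                     user_real, dir_real, target_real):
    """R_3 = -|theta_1 - theta_2|, with theta the angle between the walking
    direction and the user-to-target direction (wrapped to [0, pi])."""
    def angle(user, walk_dir, target):
        to_target = np.arctan2(target[1] - user[1], target[0] - user[0])
        return abs((walk_dir - to_target + np.pi) % (2 * np.pi) - np.pi)
    theta1 = angle(user_virtual, dir_virtual, target_virtual)
    theta2 = angle(user_real, dir_real, target_real)
    return -abs(theta1 - theta2)
```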
The three rewards and penalties come into play at different positions of the training user. The relative angle deviation penalty is more important than the obstacle avoidance reward between the training user and the boundary in the real scene and than the distance deviation penalty on the relative position of the training user and the target in the virtual and real scenes, because if the relative relation in angle cannot be guaranteed, the distances along the future path cannot be guaranteed either. For example, when the training user approaches a boundary in the real scene, keeping the user away from the boundary is the main consideration; when the user is not near a boundary, the passive haptic task is the main consideration. Therefore, the method combines the three rewards and penalties into an adaptive dynamic reward-and-penalty adjustment scheme. The values of the translation, rotation and curvature applied to the virtual scene are determined according to the adaptive dynamic reward-and-penalty adjustment formula:
R = λ_1 R_1 + λ_2 R_2 + λ_3 R_3 + R_4
Here R denotes the final reward-and-penalty scalar. λ_1 denotes a coefficient term defined in terms of the constant e ≈ 2.72 and d_3, where d_3 denotes the distance from the training user to the nearest boundary in the real scene. λ_2 denotes a coefficient term defined in terms of d_4, where d_4 denotes the distance from the training user to the target object in the virtual scene. λ_3 denotes the constant 1, and R_4 denotes the constant 0.5.
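The exact expressions for λ_1 and λ_2 are not reproduced in the text above; the sketch below combines the three terms as stated and uses assumed exponential weightings in d_3 and d_4 purely to illustrate the adaptive behaviour (obstacle avoidance dominating near a real-scene boundary, the haptic penalties dominating near the virtual target).

```python
import numpy as np

def combined_reward(r1, r2, r3, d3, d4):
    """R = lam1*R_1 + lam2*R_2 + lam3*R_3 + R_4, with lam3 = 1 and R_4 = 0.5
    as stated above; the forms of lam1 and lam2 below are assumptions."""
    lam1 = np.exp(-d3)   # assumed: obstacle avoidance weighted up near the boundary
    lam2 = np.exp(-d4)   # assumed: haptic penalty weighted up near the virtual target
    lam3 = 1.0
    r4 = 0.5
    return lam1 * r1 + lam2 * r2 + lam3 * r3 + r4
```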
S3: and controlling the reinforcement learning network to perform translation, rotation and curvature transformation on the virtual scene according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene. The three transformations are transformations for keeping the relative position relationship between the training user and the target object in the virtual scene and the real scene while keeping the training user away from the boundary in the real scene.
Step S3 uses the trained reinforcement learning network to transform the virtual scene. The coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene for the ten frames from the current frame backwards are input into the trained reinforcement learning network, which is controlled to apply translation, rotation and curvature transformations to the virtual scene according to these inputs. These three transformations keep the relative position relationship between the training user and the target object consistent across the virtual and real scenes while keeping the training user away from the boundary in the real scene. The transformation result is shown in FIG. 3: the left side of the figure shows a path in the virtual space, and the four figures on the right show the paths obtained from different starting positions in the real space. The gray paths are produced by the present method, and the black paths are obtained without any technique.
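For completeness, the following sketch shows one conventional way the three redirected-walking transformations can be applied to the virtual viewpoint each frame once the network has output its gains; the gain parameterisation (multiplicative translation and rotation gains, curvature expressed through a radius) follows common redirected-walking practice and is an assumption, since the text above does not specify it.

```python
import numpy as np

def apply_redirection(virtual_pos, virtual_dir, delta_pos_real, delta_yaw_real,
                      translation_gain, rotation_gain, curvature_radius):
    """Apply translation, rotation and curvature transformations to the
    virtual viewpoint for one frame, given the user's real-world motion."""
    step = np.linalg.norm(delta_pos_real)
    # translation gain: scale the distance walked in the virtual scene
    virtual_step = translation_gain * step
    # rotation gain: scale the user's real head rotation in the virtual scene
    virtual_yaw = rotation_gain * delta_yaw_real
    # curvature gain: inject extra rotation proportional to distance walked
    virtual_yaw += virtual_step / curvature_radius
    virtual_dir = virtual_dir + virtual_yaw
    virtual_pos = virtual_pos + virtual_step * np.array(
        [np.cos(virtual_dir), np.sin(virtual_dir)])
    return virtual_pos, virtual_dir
```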
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example technical solutions in which the above features are interchanged with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (3)

1. A redirected walking passive haptic method, comprising:
S1: placing a training user at the position of a target object in a virtual scene, randomly initializing the walking direction of the training user, recording the walking path along which the training user reaches the boundary from the position of the target object to obtain walking path information, and reversing the walking path information to obtain walking path information from the boundary to the position of the target object as training data, wherein the training user is a virtual training user generated by a computing device, and the walking direction and the walking radius change randomly while the training user walks from the position of the target object to the boundary;
S2: for each current frame in the walking path information, acquiring the training user's coordinates in the virtual scene, walking direction in the virtual scene, coordinates in the real scene and walking direction in the real scene for the ten frames from the current frame backwards, and inputting the coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene into a reinforcement learning network, wherein the reinforcement learning network defines three rewards and penalties according to these inputs and is trained through the three rewards and penalties;
S3: controlling the reinforcement learning network to apply translation, rotation and curvature transformations to the virtual scene according to the input coordinates in the virtual scene, walking direction in the virtual scene, coordinates in the real scene and walking direction in the real scene, wherein the three transformations keep the relative position relationship between the training user and the target object consistent across the virtual and real scenes while keeping the training user away from the boundary in the real scene.
2. The method of claim 1, wherein reversing the walking path information to obtain the walking path information from the boundary to the position of the target object comprises:
generating the walking path information from the boundary to the position of the target object by a reverse method.
3. The method of claim 2, wherein the three rewards and penalties are three adaptively adjusted rewards and penalties used to train the reinforcement learning network, the three rewards and penalties being respectively an obstacle avoidance reward between the training user and the boundary in the real scene, a distance deviation penalty on the relative position of the training user and the target in the virtual and real scenes, and a relative angle deviation penalty.
CN202111003529.5A 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology Active CN114578957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111003529.5A CN114578957B (en) 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111003529.5A CN114578957B (en) 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology

Publications (2)

Publication Number Publication Date
CN114578957A CN114578957A (en) 2022-06-03
CN114578957B true CN114578957B (en) 2023-10-27

Family

ID=81768082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111003529.5A Active CN114578957B (en) 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology

Country Status (1)

Country Link
CN (1) CN114578957B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117348733B (en) * 2023-12-06 2024-03-26 山东大学 Dynamic curvature manipulation mapping-based redirection method, system, medium and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018098710A1 (en) * 2016-11-30 2018-06-07 深圳市大疆创新科技有限公司 Control method, control apparatus, and electronic apparatus
CN112044068A (en) * 2020-09-10 2020-12-08 网易(杭州)网络有限公司 Man-machine interaction method and device, storage medium and computer equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10403043B2 (en) * 2016-04-14 2019-09-03 The Research Foundation For The State University Of New York System and method for generating a progressive representation associated with surjectively mapped virtual and physical reality image data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018098710A1 (en) * 2016-11-30 2018-06-07 深圳市大疆创新科技有限公司 Control method, control apparatus, and electronic apparatus
CN112044068A (en) * 2020-09-10 2020-12-08 网易(杭州)网络有限公司 Man-machine interaction method and device, storage medium and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep Q-network based stability control method for biped robot walking on uneven ground; 赵玉婷; 韩宝玲; 罗庆生; 计算机应用 (09); full text *
Research on deep reinforcement learning for intelligent obstacle avoidance scenarios; 刘庆杰; 林友勇; 李少利; 智能物联技术 (02); full text *

Also Published As

Publication number Publication date
CN114578957A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN113138671B (en) Method and system for dynamic haptic retargeting
Stipanović et al. Guaranteed strategies for nonlinear multi-player pursuit-evasion games
Aliprantis et al. Natural Interaction in Augmented Reality Context.
Galvane et al. Camera-on-rails: automated computation of constrained camera paths
EP3398045A1 (en) Hand tracking for interaction feedback
Thomas et al. Towards physically interactive virtual environments: Reactive alignment with redirected walking
CN114578957B (en) Reinforcement learning-based redirected walking passive haptic technology
CN110853150A (en) Method and system for mapping actual space and virtual space suitable for virtual roaming system
Dvorožňák et al. Example-based expressive animation of 2d rigid bodies
JP2023089947A (en) Feature tracking system and method
Thomas et al. Reactive alignment of virtual and physical environments using redirected walking
Sousa et al. Humanized robot dancing: humanoid motion retargeting based in a metrical representation of human dance styles
Pascher et al. AdaptiX-A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics
Tran et al. Easy-to-use virtual brick manipulation techniques using hand gestures
CN107678828A (en) A kind of wave volume control method realized based on picture charge pattern technology
Wang et al. Flock morphing animation
Ullal et al. A multi-objective optimization framework for redirecting pointing gestures in remote-local mixed/augmented reality
WO2022062570A1 (en) Image processing method and device
CN112435316B (en) Method and device for preventing mold penetration in game, electronic equipment and storage medium
Papadogiorgaki et al. Gesture synthesis from sign language notation using MPEG-4 humanoid animation parameters and inverse kinematics
Yi et al. AR system for mold design teaching
CN117348733B (en) Dynamic curvature manipulation mapping-based redirection method, system, medium and equipment
JP2009247555A (en) Image generating system, program, and information storage medium
Liu et al. A Redirected Walking Toolkit for Exploring Large-Scale Virtual Environments
CN111273773B (en) Man-machine interaction method and system for head-mounted VR environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant