CN114578957A - Redirected walking passive touch technology based on reinforcement learning - Google Patents

Redirected walking passive touch technology based on reinforcement learning

Info

Publication number
CN114578957A
Authority
CN
China
Prior art keywords
walking
scene
training user
virtual
real scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111003529.5A
Other languages
Chinese (zh)
Other versions
CN114578957B (en)
Inventor
汪淼
陈泽寅
李奕君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202111003529.5A priority Critical patent/CN114578957B/en
Publication of CN114578957A publication Critical patent/CN114578957A/en
Application granted granted Critical
Publication of CN114578957B publication Critical patent/CN114578957B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The invention discloses a redirected walking passive haptic method based on reinforcement learning, which comprises the following steps: reversing walking paths to generate training data; training a reinforcement learning network by extracting the position and direction information of a training user over a period of time in the real scene and the virtual scene and encoding it into an N-dimensional vector used as the input of the network, where N is a positive integer; defining three types of reward and punishment according to the relation between the training user and the boundary in the real scene and the relative relation between the training user and the target object in the virtual and real scenes; and having the reinforcement learning network apply translation, rotation and curvature transformations to the virtual scene according to the input information and the three types of reward and punishment, so that the training user stays away from the real-space boundary while the relative position relationship between the training user and the target object remains consistent across the virtual and real scenes. The result can be applied to fields such as virtual reality games and virtual roaming.

Description

Redirected walking passive touch technology based on reinforcement learning
Technical Field
The embodiments of the disclosure relate to the field of computer technology, and in particular to a redirected walking passive haptic method.
Background
With the development of virtual reality, applications such as VR (Virtual Reality) games and virtual roaming have gradually come into public view. Free movement in the virtual world is a great challenge for room-scale training users, because the limited real space tends to prevent training users from exploring large virtual spaces. Introducing a reset operation can alleviate this problem, but such a strategy tends to interrupt the training user's immersive VR experience and causes cybersickness symptoms such as dizziness and nausea. Razzaque et al. proposed the redirected walking technique in 2005, which applies three transformations (translation, rotation, curvature) to the virtual space so that the trajectories in real space and virtual space differ, thereby changing the training user's trajectory in real space. With this technique, the number of resets can be reduced without the training user perceiving the manipulation.
Meanwhile, Insko et al. showed in 2001 that passive haptics can greatly enhance the training user's experience during virtual exploration: when the training user touches an object in the virtual scene, the real scene should also provide real feedback to the training user. Steinicke et al. proposed a method in 2008 that divides the exploration process into obstacle avoidance and passive haptics, designing walking trajectories by exhaustively enumerating the relative relations between the training user and the target in the virtual and real scenes. Thomas et al. proposed a gradient-field-based redirected walking passive haptic technique in 2020 that guides the training user by simulating the repulsive force of obstacles and the attractive force of the target object. However, none of the above techniques achieves redirected walking passive haptics well without being perceived by the training user.
Aiming at the shortcomings of existing methods in this research field, the invention discloses a redirected walking passive haptic method that guides the training user away from the boundary in the real scene through ideas such as a special training-data generation method and adaptive dynamic reward and punishment, thereby reducing resets, while controlling the relative relation between the training user and the target object in the virtual and real scenes so as to realize passive haptics. Compared with previous redirected walking passive haptic methods, this method obtains better experimental results on the distance-deviation index of the relative relation between the training user and the target.
Disclosure of Invention
(I) technical problem to be solved
The technical problem to be solved by the invention is as follows: how to reduce the number of resets in real space while the training user explores the virtual scene, and how to ensure that, when the training user touches an object in the virtual scene, the real scene also provides real feedback.
(II) technical scheme
In order to solve the above technical problem, the invention provides a redirected walking passive haptic method, comprising the following steps:
s1: and placing the training user at the position of the target object in the virtual scene, and randomly initializing the walking direction of the training user. And recording a walking path for training a user to reach the boundary from the position of the target object to obtain walking path information. And reversing the walking path information to obtain the walking path information from the boundary to the position of the target object as training data. The training user is a virtual training user generated by the computing device, and the walking direction and the walking radius of the training user are randomly changed in the walking process of reaching the boundary from the position of the target object.
S2: for each current frame in the walking path information, coordinates of training users from the current frame to the first ten frames in a virtual scene, a walking direction in the virtual scene, coordinates in a real scene and a walking direction in the real scene are obtained. And inputting the coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene into the reinforcement learning network. The reinforcement learning network is designed with three types of reward and punishment according to the input coordinate in the virtual scene, the walking direction in the virtual scene, the coordinate in the real scene and the walking direction in the real scene, and the reinforcement learning network is trained through the three types of reward and punishment.
S3: and the control reinforcement learning network carries out translation, rotation and curvature transformation on the virtual scene according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene. The three kinds of transformation are transformation which enables the training user to be far away from the boundary in the real scene, and meanwhile keeps the relative position relation between the training user and the target object in the virtual scene and the real scene.
(III) advantageous effects
The technical scheme has the following advantages: the reinforcement-learning-based redirected walking passive haptic method provided by the invention uses the idea of reinforcement learning and generalizes well. It is relatively robust across different scenes and dynamically balances obstacle avoidance and passive haptics. Without being perceived by the training user, the method reduces the number of resets in the real scene and keeps the relative relation between the training user and the target consistent in the virtual and real scenes.
Drawings
FIG. 1 is a flow diagram of some embodiments of a redirected walking passive haptic method according to the present disclosure;
FIG. 2 is a schematic diagram of the relative angular deviation penalty of the present disclosure;
FIG. 3 is a comparison of results before and after the scene transformation of the present disclosure.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
As shown in FIG. 1, the method comprises the following steps:
s1: and placing the training user at the position of the target object in the virtual scene, and randomly initializing the walking direction of the training user. And recording a walking path for training a user to reach the boundary from the position of the target object to obtain walking path information. And reversing the walking path information to obtain the walking path information from the boundary to the position of the target object as training data. The training user is a virtual training user generated by the computing device, and the walking direction and the walking radius of the training user are randomly changed in the walking process of reaching the boundary from the position of the target object.
Step S1 is the method of generating training data. Initially, the training user is placed at the position of the target object in the virtual scene, and the training user's walking direction is initialized randomly; the training user is a virtual training user generated by the computing device. The walking direction is initialized to a random value in the range [-180 degrees, 180 degrees], sampled from a Gaussian distribution with mean 0 and standard deviation 45 degrees. The sampling uses the numpy.random.normal() function; numpy is an open-source numerical computing extension library for the Python programming language, and numpy.random.normal() is its library function for generating Gaussian random numbers. The training user walks in the virtual scene at a speed of 1.4 m/s and changes walking direction after walking a random distance between 0.5 m and 3.5 m. The amount of change in walking direction is sampled from a Gaussian distribution with mean 0 and standard deviation 45 degrees, again using numpy.random.normal(). The walking radius when the walking direction changes is a random number between 2 m and 4 m, also generated with the random-number functions of the numpy module. The walking radius is the radius used while the training user's walking direction changes: the direction does not change instantaneously but gradually along an arc, and the radius of that arc is the walking radius. For example, if the direction changes by 180 degrees and the walking radius is 1 m, the training user needs to walk 2π × 1 × (180°/360°) = π meters to complete the change of walking direction, where π is the circle constant and 180° is the angle subtended by the arc. The training user stops walking upon reaching any boundary, and the walking path of every frame from the position of the target object to the boundary is recorded to obtain walking path information. The walking path information comprises the coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene. The frame rate is 60, so each frame lasts 1/60 second, i.e., 60 walking path records are obtained per second. The walking path information is then reversed to obtain path information from the boundary to the position of the target object as training data. Reversing includes, but is not limited to, at least one of the following: pushing the walking path information onto a stack and then popping it out. For example, storing the walking path information in a stack in order and then taking it out again reverses the walking path information. A stack is a special linear list that only allows insertion and deletion at one end and has the last-in, first-out property; for example, if the push order is [1, 2, 3], the pop order is [3, 2, 1].
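The following Python sketch illustrates how such a random walking path could be generated under the parameters described above (1.4 m/s walking speed, direction changes every 0.5 to 3.5 m sampled from a Gaussian with standard deviation 45 degrees, 60 frames per second, square real space). It is a minimal illustration rather than the patented implementation: the function name, the origin-centred square room, the omission of the gradual arc-shaped turns and the single-scene coordinates are simplifications introduced here.

```python
import numpy as np

def generate_walking_path(target_pos, half_side=4.0, speed=1.4, fps=60):
    """Random walk from the target position until any boundary of a square
    space with half side `half_side` is reached (illustrative sketch only)."""
    pos = np.array(target_pos, dtype=float)
    direction = np.random.uniform(-180.0, 180.0)   # random initial walking direction (degrees)
    next_turn = np.random.uniform(0.5, 3.5)        # walk this far before changing direction
    walked, path = 0.0, []
    while np.all(np.abs(pos) < half_side):         # stop when any boundary is reached
        step = speed / fps                         # distance covered in one frame
        rad = np.deg2rad(direction)
        pos = pos + step * np.array([np.cos(rad), np.sin(rad)])
        path.append((pos.copy(), direction))       # one walking-path record per frame
        walked += step
        if walked >= next_turn:
            # direction change sampled from a Gaussian with mean 0 and std 45 degrees;
            # the patent smooths this change over an arc of radius 2-4 m, omitted here
            direction += np.random.normal(0.0, 45.0)
            walked, next_turn = 0.0, np.random.uniform(0.5, 3.5)
    return path
```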
In some optional implementations of some embodiments, the executing body may generate the walking path information from the boundary to the position of the target object by a reversal method: the storage order of the walking path information is reversed from head to tail and 180 degrees is added to each walking direction in the walking path information. For example, if the walking path information is {[(virtual X1, virtual Y1), 45 degrees, (real X1, real Y1), 45 degrees], [(virtual X2, virtual Y2), 25 degrees, (real X2, real Y2), 25 degrees], [(virtual X3, virtual Y3), 30 degrees, (real X3, real Y3), 30 degrees]}, then the path information from the boundary to the target object generated by the reversal method is {[(virtual X3, virtual Y3), 210 degrees, (real X3, real Y3), 210 degrees], [(virtual X2, virtual Y2), 205 degrees, (real X2, real Y2), 205 degrees], [(virtual X1, virtual Y1), 225 degrees, (real X1, real Y1), 225 degrees]}.
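A corresponding sketch of the reversal method, under the same assumed path representation as in the sketch above (one (position, direction) record per frame): the order of the records is reversed and 180 degrees is added to each walking direction.

```python
def reverse_path(path):
    """Turn a target-to-boundary path into a boundary-to-target path:
    reverse the record order and flip each walking direction by 180 degrees."""
    return [(pos, (direction + 180.0) % 360.0) for pos, direction in reversed(path)]
```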
S2: for each current frame in the walking path information, coordinates of training users from the current frame to the first ten frames in a virtual scene, a walking direction in the virtual scene, coordinates in a real scene and a walking direction in the real scene are obtained. And inputting the coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene into the reinforcement learning network. The reinforcement learning network is designed with three types of reward and punishment according to the input coordinate in the virtual scene, the walking direction in the virtual scene, the coordinate in the real scene and the walking direction in the real scene, and the reinforcement learning network is trained through the three types of reward and punishment.
Step S2 is the adaptive dynamic reward and punishment design. For each current frame in the walking path information, the training user's coordinates in the virtual scene, walking direction in the virtual scene, coordinates in the real scene and walking direction in the real scene are obtained from the current frame back over the preceding ten frames and input into the reinforcement learning network. Each input to the reinforcement learning network therefore contains the training user's walking path information for the ten frames ending at the current frame. For example, let frame t be the current frame, where t is a frame index, S is the training user's walking path information, and S_t is the walking path information at frame t; the walking path information input into the reinforcement learning network is then [S_{t-9}, S_{t-8}, S_{t-7}, ..., S_{t-1}, S_t]. The reinforcement learning network defines three types of reward and punishment based on the input coordinates in the virtual scene, walking direction in the virtual scene, coordinates in the real scene and walking direction in the real scene, and is trained through these three types of reward and punishment. For the training path in each virtual scene, the training user is initialized at an arbitrary boundary in the real scene and guided to walk along the virtual scene path. At the same time, three reward and punishment terms are defined at every frame: an obstacle avoidance reward between the training user and the boundary in the real scene, and a relative position distance deviation penalty and a relative angle deviation penalty between the training user and the target in the virtual and real scenes. These three reward and punishment functions dynamically adjust the importance of the training user's decisions at different positions in the virtual and real scenes, so that the passive haptic effect is achieved while keeping the training user away from the boundary.
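A sketch of how the ten-frame input could be assembled, assuming each frame record S_t is a flat tuple of (virtual x, virtual y, virtual direction, real x, real y, real direction); the padding behaviour for the first few frames and the resulting dimension N are assumptions made here for illustration.

```python
import numpy as np

def build_state_vector(path_info, t, window=10):
    """Concatenate frames S_{t-9} ... S_t into one flat input vector for the
    reinforcement learning network (sketch; the exact encoding is assumed)."""
    frames = list(path_info[max(0, t - window + 1): t + 1])
    while len(frames) < window:          # pad by repeating the earliest available frame
        frames.insert(0, frames[0])
    return np.concatenate([np.asarray(f, dtype=float).ravel() for f in frames])
```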
In some optional implementations of some embodiments, the three types of reward and punishment are adaptively adjusted and are used to train the reinforcement learning network. They are, respectively, the obstacle avoidance reward between the training user and the boundary in the real scene, and the position distance deviation penalty and the relative angle deviation penalty between the training user and the target object in the virtual and real scenes. The three types of reward and punishment are introduced as follows:
s21: and training obstacle avoidance rewards between the user and the boundary in the real scene. Given the position of the user in the real scene and the walking direction phip. Determining the value of the obstacle avoidance reward according to an obstacle avoidance reward formula, wherein the obstacle avoidance reward formula is as follows:
Figure BDA0003236361810000061
wherein R is1Indicating an obstacle avoidance reward.
Figure BDA0003236361810000062
Representing a self-defined function, assuming a square area, a ray is shot from a point a along a certain direction, and the square intersects a point b, and the distance from a to b is the value of the function
Figure BDA0003236361810000071
Phi denotes an angle scalar. p represents a real scene. Phi is a unit ofpRepresenting the walking direction of the training user in a real scene.
Figure BDA0003236361810000072
Indicating the edge phi from the current positionpDistance between the forward to boundary. min (,) denotes taking the minimum of the two values in parentheses. And pi represents a constant of 3.14.
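Since the obstacle avoidance reward formula is only reproduced as an image, the sketch below implements just the described ray-to-boundary distance function for a square real space; the function name and the axis-aligned, origin-centred room are assumptions made for illustration, not part of the patented formula.

```python
import numpy as np

def distance_to_boundary(pos, phi_deg, half_side=4.0):
    """Distance from point `pos` along direction `phi_deg` to the wall of a
    square room centred at the origin (sketch of the distance function used
    by the obstacle avoidance reward; not the patented formula itself)."""
    pos = np.asarray(pos, dtype=float)
    d = np.array([np.cos(np.deg2rad(phi_deg)), np.sin(np.deg2rad(phi_deg))])
    hits = []
    for axis in (0, 1):
        if abs(d[axis]) < 1e-9:
            continue                              # ray is parallel to these walls
        for wall in (-half_side, half_side):
            t = (wall - pos[axis]) / d[axis]
            if t <= 0:
                continue                          # only walls in front of the ray
            other = pos[1 - axis] + t * d[1 - axis]
            if abs(other) <= half_side + 1e-9:
                hits.append(t)                    # the ray actually meets this wall segment
    return min(hits) if hits else 0.0
```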
S22: and training the position distance deviation punishment of the user and the target object in the virtual scene and the real scene. When the training user reaches the target object in the virtual scene, the training user can also reach the target object in the real scene, and the punishment can reduce the difference between the distances between the user and the target object in the two spaces. And determining the value of the distance deviation penalty according to a distance deviation penalty formula. Wherein, the distance deviation penalty formula is as follows:
R2=-|d1-d2|。
wherein R is2A distance deviation penalty scalar is represented. d is a radical of1Representing the absolute distance between the training user and the target object in the virtual scene. d2Representing the absolute distance between the training user and the target object in the real scene. - | | represents the inverse of the absolute value.
S23: a relative angular deviation penalty. The relative angle deviation punishment considers the angles between the walking direction of the training user and the direction of the training user relative to the target object in the virtual scene and the real scene. As shown in fig. 2. And determining the value of the angle deviation penalty according to a relative angle deviation penalty formula. Wherein, the penalty formula of the relative angle deviation is as follows:
R3=-|θ1θ2|。
wherein R is3An angular deviation penalty scalar is represented. Theta1Representing the angle between the walking direction of the training user in the virtual scene relative to the direction of the line connecting the training user and the position of the target object. Theta2Representing the angle between the walking direction of the training user in the real scene and the direction of the position connecting line of the training user and the target object. - | | represents the inverse of the absolute value.
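The two penalties follow directly from the formulas above; the sketch below assumes that d1, d2 are the user-to-target distances and theta1, theta2 the angles between walking direction and user-to-target direction in the virtual and real scenes, computed elsewhere.

```python
def distance_deviation_penalty(d1, d2):
    """R2 = -|d1 - d2|: penalise differing user-to-target distances
    between the virtual and real scenes."""
    return -abs(d1 - d2)

def angle_deviation_penalty(theta1, theta2):
    """R3 = -|theta1 - theta2|: penalise differing angles between the walking
    direction and the user-to-target direction in the two scenes."""
    return -abs(theta1 - theta2)
```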
The three types of reward and punishment contribute differently when the training user is at different positions. The relative angle deviation penalty is more important than the obstacle avoidance reward between the training user and the boundary in the real scene and than the relative position distance deviation penalty between the training user and the target object in the virtual and real scenes, because if the relative relation in angle cannot be guaranteed, the distances along the future path cannot be guaranteed either. For example, when the training user is close to a boundary in the real scene, keeping the training user away from the boundary is the main consideration; when not near a boundary in the real scene, the passive haptic task is the main consideration. Therefore, the method combines the three types of reward and punishment into an adaptive dynamic reward and punishment adjustment, and the values used for the translation, rotation and curvature transformations of the virtual scene are determined according to the adaptive dynamic reward and punishment adjustment formula:
R = λ_1·R_1 + λ_2·R_2 + λ_3·R_3 + R_4
where R is the final reward and punishment scalar; λ_1 is a coefficient given by a formula (shown only as an image in the original publication) involving the constant e ≈ 2.72 and d_3, the distance from the training user to the nearest boundary in the real scene; λ_2 is a coefficient given by a similar formula involving d_4, the distance from the training user to the target object in the virtual scene; λ_3 is the constant 1; and R_4 is the constant 0.5.
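A sketch of the adaptive combination follows. The exact expressions for λ_1 and λ_2 appear only as images in the original publication; the exponential decays e^(-d3) and e^(-d4) used below are assumptions chosen to match the stated behaviour (obstacle avoidance dominating near the real boundary, passive haptics dominating near the virtual target), not the patented formulas.

```python
import numpy as np

def total_reward(r1, r2, r3, d3, d4):
    """R = lambda1*R1 + lambda2*R2 + lambda3*R3 + R4 (adaptive combination sketch).
    lambda1/lambda2 below are assumed distance-dependent weights; the patent
    gives their formulas only as images."""
    lambda1 = np.exp(-d3)   # assumed: large when close to the real-scene boundary
    lambda2 = np.exp(-d4)   # assumed: large when close to the virtual target
    lambda3, r4 = 1.0, 0.5  # constants stated in the text
    return lambda1 * r1 + lambda2 * r2 + lambda3 * r3 + r4
```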
S3: and the control reinforcement learning network carries out translation, rotation and curvature transformation on the virtual scene according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene. The three kinds of transformation are transformation which enables the training user to be far away from the boundary in the real scene, and meanwhile keeps the relative position relation between the training user and the target object in the virtual scene and the real scene.
In step S3, the trained reinforcement learning network is used to transform the virtual scene. The coordinates in the virtual scene, walking direction in the virtual scene, coordinates in the real scene and walking direction in the real scene, from the current frame back over the preceding ten frames, are input into the trained reinforcement learning network, which then applies translation, rotation and curvature transformations to the virtual scene according to these inputs. The three transformations keep the training user away from the boundary in the real scene while keeping the relative position relationship between the training user and the target object consistent in the virtual and real scenes. The result of the transformation is shown in FIG. 3: the leftmost diagram shows the path in the virtual space, and the four diagrams on the right show the paths starting from different positions in the real space; the grey paths are the paths transformed by this method, and the black paths are the paths obtained without any technique.
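The patent does not spell out how the three transformations are applied each frame; the following sketch shows the standard redirected walking update in which per-frame real motion is mapped into virtual motion using a translation gain, a rotation gain and a curvature gain (here taken to be the quantities the trained network controls). All names and the exact update form are assumptions for illustration.

```python
import numpy as np

def apply_redirection(virt_pos, virt_dir_deg, real_step_m, real_rot_deg,
                      g_t, g_r, g_c):
    """Map one frame of real motion (distance walked, head rotation) into the
    virtual scene using translation gain g_t, rotation gain g_r and curvature
    gain g_c (radians of injected rotation per metre walked). Assumed sketch
    of the standard redirected-walking update, not the patented code."""
    # curvature injects extra rotation proportional to the distance walked;
    # rotation gain scales the user's own head rotation
    virt_dir_deg += g_r * real_rot_deg + np.degrees(g_c * real_step_m)
    rad = np.deg2rad(virt_dir_deg)
    # translation gain scales the distance walked along the redirected heading
    virt_pos = np.asarray(virt_pos, dtype=float) + g_t * real_step_m * np.array([np.cos(rad), np.sin(rad)])
    return virt_pos, virt_dir_deg
```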
The foregoing description covers only preferred embodiments of the disclosure and illustrates the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above features, and also encompasses other embodiments formed by any combination of the above features or their equivalents without departing from the inventive concept, for example embodiments in which the above features are replaced with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (3)

1. A redirected walking passive haptic method, comprising:
s1: placing a training user at the position of a target object in a virtual scene, randomly initializing the walking direction of the training user, recording the walking path of the training user from the position of the target object to the boundary to obtain walking path information, and reversing the walking path information to obtain walking path information from the boundary to the position of the target object as training data, wherein the training user is a virtual training user generated by a computing device, and the walking direction and walking radius of the training user change randomly during the walk from the position of the target object to the boundary;
s2: for each current frame in the walking path information, acquiring the coordinates of the training user in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene from the current frame back over the preceding ten frames, and inputting the coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene into the reinforcement learning network, wherein the reinforcement learning network defines three types of reward and punishment according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene, and the reinforcement learning network is trained through the three types of reward and punishment;
s3: controlling the reinforcement learning network to carry out three transformations of translation, rotation and curvature on the virtual scene according to the input coordinates in the virtual scene, the walking direction in the virtual scene, the coordinates in the real scene and the walking direction in the real scene, wherein the three transformations keep the training user away from the boundary in the real scene while keeping the relative position relationship between the training user and the target object consistent in the virtual scene and the real scene.
2. The method of claim 1, wherein reversing the walking path information to obtain walking path information from the boundary to the position of the target object comprises:
generating the walking path information from the boundary to the position of the target object by a reversal method according to the walking path information.
3. The method according to claim 2, wherein the three types of reward and punishment are three types of reward and punishment which are adaptively adjusted and are used for training the reinforcement learning network, and the three types of reward and punishment are respectively an obstacle avoidance reward between a training user and a boundary in a real scene, a position distance deviation penalty and a relative angle deviation penalty of the training user and a target object in a virtual scene and the real scene.
CN202111003529.5A 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology Active CN114578957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111003529.5A CN114578957B (en) 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111003529.5A CN114578957B (en) 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology

Publications (2)

Publication Number Publication Date
CN114578957A true CN114578957A (en) 2022-06-03
CN114578957B CN114578957B (en) 2023-10-27

Family

ID=81768082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111003529.5A Active CN114578957B (en) 2021-08-30 2021-08-30 Reinforcement learning-based redirected walking passive haptic technology

Country Status (1)

Country Link
CN (1) CN114578957B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117348733A (en) * 2023-12-06 2024-01-05 山东大学 Dynamic curvature manipulation mapping-based redirection method, system, medium and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018098710A1 (en) * 2016-11-30 2018-06-07 SZ DJI Technology Co., Ltd. Control method, control apparatus, and electronic apparatus
US20190051051A1 (en) * 2016-04-14 2019-02-14 The Research Foundation For The State University Of New York System and Method for Generating a Progressive Representation Associated with Surjectively Mapped Virtual and Physical Reality Image Data
CN112044068A (en) * 2020-09-10 2020-12-08 网易(杭州)网络有限公司 Man-machine interaction method and device, storage medium and computer equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190051051A1 (en) * 2016-04-14 2019-02-14 The Research Foundation For The State University Of New York System and Method for Generating a Progressive Representation Associated with Surjectively Mapped Virtual and Physical Reality Image Data
WO2018098710A1 (en) * 2016-11-30 2018-06-07 SZ DJI Technology Co., Ltd. Control method, control apparatus, and electronic apparatus
CN112044068A (en) * 2020-09-10 2020-12-08 网易(杭州)网络有限公司 Man-machine interaction method and device, storage medium and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Qingjie; Lin Youyong; Li Shaoli: "Research on deep reinforcement learning for intelligent obstacle avoidance scenarios", Intelligent IoT Technology, no. 02 *
Zhao Yuting; Han Baoling; Luo Qingsheng: "Deep Q-network-based walking stability control method for biped robots on uneven ground", Journal of Computer Applications, no. 09 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117348733A (en) * 2023-12-06 2024-01-05 山东大学 Dynamic curvature manipulation mapping-based redirection method, system, medium and equipment
CN117348733B (en) * 2023-12-06 2024-03-26 山东大学 Dynamic curvature manipulation mapping-based redirection method, system, medium and equipment

Also Published As

Publication number Publication date
CN114578957B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
US11257268B2 (en) Avatar animation using Markov decision process policies
CN111260762B (en) Animation implementation method and device, electronic equipment and storage medium
Steinicke et al. Taxonomy and implementation of redirection techniques for ubiquitous passive haptic feedback
US11836843B2 (en) Enhanced pose generation based on conditional modeling of inverse kinematics
Galvane et al. Camera-on-rails: automated computation of constrained camera paths
Segen et al. Human-computer interaction using gesture recognition and 3D hand tracking
Ishigaki et al. Performance-based control interface for character animation
Ho et al. Interactive partner control in close interactions for real-time applications
Aliprantis et al. Natural Interaction in Augmented Reality Context.
CN111028317B (en) Animation generation method, device and equipment for virtual object and storage medium
CN110333773A (en) For providing the system and method for immersion graphical interfaces
Tessler et al. Calm: Conditional adversarial latent models for directable virtual characters
Dvorožňák et al. Example-based expressive animation of 2d rigid bodies
Liu et al. Posetween: Pose-driven tween animation
CN114578957A (en) Redirected walking passive touch technology based on reinforcement learning
Thomas et al. Reactive alignment of virtual and physical environments using redirected walking
Pascher et al. AdaptiX--A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics
Fender et al. Creature teacher: A performance-based animation system for creating cyclic movements
Tran et al. Easy-to-use virtual brick manipulation techniques using hand gestures
Nagendran et al. Continuum of virtual-human space: Towards improved interaction strategies for physical-virtual avatars
US20230267668A1 (en) Joint twist generation for animation
KR20200012561A (en) System and method for game in virtual reality
CN112435316B (en) Method and device for preventing mold penetration in game, electronic equipment and storage medium
Ullal et al. A multi-objective optimization framework for redirecting pointing gestures in remote-local mixed/augmented reality
Liu et al. Natural user interface for physics-based character animation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant