CN114840079A - High-speed rail driving action simulation virtual-real interaction method based on gesture recognition - Google Patents

High-speed rail driving action simulation virtual-real interaction method based on gesture recognition Download PDF

Info

Publication number
CN114840079A
Authority
CN
China
Prior art keywords
coordinates
hand
joint
depth
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210452815.8A
Other languages
Chinese (zh)
Other versions
CN114840079B (en)
Inventor
邹喜华
邓果
蒋灵明
谭镇泷
刘晔
潘炜
闫连山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Jinxi Shuzhi Technology Co ltd
Southwest Jiaotong University
CRSC Research and Design Institute Group Co Ltd
Original Assignee
Chengdu Jinxi Shuzhi Technology Co ltd
Southwest Jiaotong University
CRSC Research and Design Institute Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Jinxi Shuzhi Technology Co ltd, Southwest Jiaotong University, CRSC Research and Design Institute Group Co Ltd filed Critical Chengdu Jinxi Shuzhi Technology Co ltd
Priority to CN202210452815.8A priority Critical patent/CN114840079B/en
Publication of CN114840079A publication Critical patent/CN114840079A/en
Application granted granted Critical
Publication of CN114840079B publication Critical patent/CN114840079B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a gesture-recognition-based virtual-real interaction method for simulated high-speed rail driving actions, which comprises the following steps: fixing a depth camera on the head display and fusing the three-dimensional data captured by the depth camera into the head display coordinate system; detecting the 21 hand joint points with the MediaPipe framework and obtaining the three-dimensional coordinates of the joint points from the depth data; drawing a hand model in the Unity3D platform with the three-dimensional coordinates of the 21 hand joint points as base points, and updating it in real time as the coordinates change; and converting the real object model into the head display coordinate system and realizing virtual-real interaction of the simulated driving actions according to the positional relation between the hand model and the real object model. By fusing and mapping the real object and the hand coordinates into the virtual environment, the invention enables interaction with the real object through gesture actions performed in the virtual environment, and offers advantages such as flexible operation and robustness to occlusion.

Description

High-speed rail driving action simulation virtual-real interaction method based on gesture recognition
Technical Field
The invention belongs to the technical field of Virtual Reality (VR)/Augmented Reality (AR) and simulated driving of a high-speed rail, and particularly relates to a gesture recognition-based virtual-real interaction method for simulated driving actions of the high-speed rail.
Background
In a virtual reality environment, the coordinates of objects in the real and virtual environments are independent of each other, so it is difficult to interact with objects in the real environment from within the virtual environment. Current virtual reality applications interact by means of handles (controllers), data gloves, or machine vision. Interaction with a handle restricts hand motion and cannot involve real objects. A data glove can interact with a real object, but it also restricts hand motion and is expensive; moreover, the sensors inside a data glove cannot provide position information, so a position tracking system usually has to be added to the glove. Three-dimensional hand pose estimation based on computer vision preserves the comfort of bare-hand interaction and is relatively inexpensive, and it is therefore a major direction for the development of virtual reality interaction.
Disclosure of Invention
In order to overcome the above problems, the invention provides a gesture-recognition-based virtual-real interaction method for simulated high-speed rail driving actions.
The gesture-recognition-based virtual-real interaction method for simulated high-speed rail driving actions disclosed by the invention comprises the following steps:
step 1: fixing the depth camera on a head display (namely a head-mounted display), and fusing three-dimensional data captured by the depth camera into a head display coordinate system;
according to the size of the head display, the size of the depth camera and the position of the depth camera fixed on the head display, a space conversion matrix M from a camera coordinate system to a head display coordinate system is obtained, and the method specifically comprises the following steps:
$$
\begin{bmatrix} x_H \\ y_H \\ z_H \\ 1 \end{bmatrix}
= M
\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix},
\qquad
M =
\begin{bmatrix}
r_1 & r_2 & r_3 & t_x \\
r_4 & r_5 & r_6 & t_y \\
r_7 & r_8 & r_9 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
where x_H, y_H, z_H denote coordinates in the head display coordinate system, x_C, y_C, z_C denote coordinates in the camera coordinate system, r_1 to r_9 denote the rotation parameters, and t_x, t_y, t_z denote the translation parameters.
Step 2: detecting the 21 hand joint points using the MediaPipe framework, and obtaining the three-dimensional coordinates of the hand joint points from the depth data.
The depth camera provides a depth image and an RGB image; the RGB image is fed into MediaPipe, which outputs 21 joint coordinates (x, y, z), where x and y are coordinates normalized to the image dimensions and z is depth relative to the wrist joint.
The depth image is aligned to the RGB image so that the depth value corresponding to any RGB pixel coordinate can be looked up; the pixel coordinates of the 21 hand joint points are obtained from the resolution of the input image, and these pixel coordinates are converted into three-dimensional coordinates, yielding the three-dimensional coordinates of the hand joint points in the camera coordinate system.
Step 3: drawing a hand model in the Unity3D platform with the three-dimensional coordinates of the 21 hand joint points as base points, and updating it in real time as the coordinates change.
Step 4: converting the real object model into the head display coordinate system, and realizing virtual-real interaction of the simulated driving actions according to the positional relation between the hand model and the real object model.
In step 2, when no joint is occluded, the three-dimensional coordinates of a hand joint point in the color camera coordinate system are obtained directly from its pixel coordinates. Let the pixel coordinates of a joint be (u, v); the depth values of the roughly 100 surrounding pixels, i.e., all pixel coordinates in the range (u ± 5, v ± 5), are taken and invalid values among them are discarded. The hand is kept between 0.25 m and 1 m from the camera, depth values outside this range are discarded, and the average of all valid values is taken as the depth value of the joint.
After the coordinates of all joint points have been calculated, the length between every pair of connected joint points is recorded for use when joints become occluded; when joints are occluded, the coordinates of the other joint points are calculated from the three-dimensional coordinates of the wrist joint.
The beneficial technical effects of the invention are as follows:
the invention fuses and maps the real object and the hand coordinates to the virtual environment, and can interact with the real object by using gesture actions in the virtual environment. Through the position relation of the hand model and the real object model in the virtual environment, the moving hand can accurately grasp the real object.
Drawings
Fig. 1 is a flow chart of the gesture-recognition-based virtual-real interaction method for simulated high-speed rail driving actions.
Fig. 2 is a schematic view of the camera coordinate system and the head display coordinate system.
Fig. 3 is a schematic diagram of the 21 hand joint points.
Fig. 4 illustrates the derivation of the index finger metacarpophalangeal joint coordinates.
Fig. 5 is a three-dimensional model of the hand.
Fig. 6 is a photograph of the physical high-speed rail simulated driving platform.
Fig. 7 is a model diagram of the high-speed rail simulated driving platform.
Fig. 8 is a real-environment view of operating the high-speed rail simulated driving platform.
Fig. 9 is a virtual-environment view of operating the high-speed rail simulated driving platform.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
The invention discloses a gesture-recognition-based virtual-real interaction method for simulated high-speed rail driving actions, which, as shown in Fig. 1, comprises the following steps:
Step 1: the depth camera is fixed on the head display, and the three-dimensional data captured by the depth camera are fused into the head display coordinate system. The camera coordinate system and the head display coordinate system are shown in Fig. 2.
According to the size of the head display, the size of the depth camera and the position at which the depth camera is fixed on the head display, a spatial conversion matrix M from the camera coordinate system to the head display coordinate system is obtained, specifically:
$$
\begin{bmatrix} x_H \\ y_H \\ z_H \\ 1 \end{bmatrix}
= M
\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix},
\qquad
M =
\begin{bmatrix}
r_1 & r_2 & r_3 & t_x \\
r_4 & r_5 & r_6 & t_y \\
r_7 & r_8 & r_9 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
where x_H, y_H, z_H denote coordinates in the head display coordinate system, x_C, y_C, z_C denote coordinates in the camera coordinate system, r_1 to r_9 denote the rotation parameters, and t_x, t_y, t_z denote the translation parameters.
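As a minimal sketch of this transformation, the homogeneous matrix M can be applied to a point in the camera coordinate system as follows; the rotation and translation values below are placeholders and would in practice be determined by the mounting position of the depth camera on the head display:

```python
import numpy as np

# Placeholder transform: identity rotation plus a small offset between the
# camera origin and the head display origin (values are illustrative only).
M = np.array([
    [1.0, 0.0, 0.0, 0.00],   # r1 r2 r3 tx
    [0.0, 1.0, 0.0, 0.03],   # r4 r5 r6 ty
    [0.0, 0.0, 1.0, 0.05],   # r7 r8 r9 tz
    [0.0, 0.0, 0.0, 1.00],
])

def camera_to_hmd(p_cam):
    """Map a 3-D point from the camera coordinate system to the head display
    coordinate system using homogeneous coordinates."""
    p_h = np.append(p_cam, 1.0)      # (x_C, y_C, z_C, 1)
    return (M @ p_h)[:3]             # (x_H, y_H, z_H)

# Example: a hand joint 0.4 m in front of the camera
print(camera_to_hmd(np.array([0.0, 0.0, 0.4])))
```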
A laser locator (Tracker) can be bound to an object in the real world so that the object's position can be tracked. The Tracker is fixed on a real object and its mounting position is marked, so the coordinates of any point on the object in the Tracker coordinate system can be obtained. By measuring the real object, a one-to-one model of it is built, and the marked point of the model is set equal to the Tracker coordinates in the virtual environment, so that the distance between the model and the eyes in the VR environment equals the distance between the object and the eyes in the real environment.
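The registration of the real object model can be sketched in the same spirit: a point measured in the Tracker coordinate system is expressed in the virtual-environment (world) coordinate system using the Tracker pose reported by the VR positioning system. The pose values below are hypothetical:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def tracker_to_world(p_tracker, tracker_pos, tracker_quat):
    """Express a point given in the Tracker coordinate system in the world
    coordinate system, given the Tracker pose (position + quaternion)."""
    rot = R.from_quat(tracker_quat)          # quaternion as (x, y, z, w)
    return tracker_pos + rot.apply(p_tracker)

# Hypothetical pose of the Tracker mounted on the driving platform
tracker_pos  = np.array([0.10, 0.95, -0.40])
tracker_quat = np.array([0.0, 0.0, 0.0, 1.0])
# A marked point on the platform, measured in the Tracker frame
print(tracker_to_world(np.array([0.25, -0.10, 0.05]), tracker_pos, tracker_quat))
```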
Step 2: the MediaPipe framework is used for detecting 21 hand joint points of the hand, as shown in fig. 3, the MediaPipe is output as 2.5D data, and therefore, the three-dimensional data of the joint points needs to be further obtained.
According to the depth image and the RGB image provided by the depth camera, the RGB image is input into MediaPipe and output as 21 joint coordinates (x, y, z), wherein x and y are normalized coordinates based on image pixels, and z is relative depth information relative to the wrist joint.
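A minimal sketch of this landmark-detection step using the MediaPipe Hands Python API (the confidence threshold and number of hands are illustrative settings):

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False,
                                 max_num_hands=1,
                                 min_detection_confidence=0.5)

def detect_landmarks(bgr_frame):
    """Return the 21 hand landmarks as (x, y, z) tuples, where x and y are
    normalized image coordinates and z is depth relative to the wrist."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    return [(p.x, p.y, p.z) for p in result.multi_hand_landmarks[0].landmark]
```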
The depth image is aligned to the RGB image so that the depth value corresponding to any RGB pixel coordinate can be looked up; the pixel coordinates of the 21 hand joint points are obtained from the resolution of the input image, and these pixel coordinates are converted into three-dimensional coordinates, yielding the three-dimensional coordinates of the hand joint points in the camera coordinate system. Two cases are distinguished according to whether joint occlusion occurs; whether occlusion exists is judged from the bending angle of the joint.
When no hand joint is occluded: the three-dimensional coordinates of a joint point in the color camera coordinate system are obtained directly from its pixel coordinates. Let the pixel coordinates of a joint be (u, v); the depth values of the roughly 100 surrounding pixels, i.e., all pixel coordinates in the range (u ± 5, v ± 5), are taken and invalid values among them are discarded. Since the hand is kept between 0.25 m and 1 m from the camera, depth values outside this range are also discarded, and the average of all valid values is taken as the depth value of the joint.
Given the pixel coordinates and the corresponding depth value, the three-dimensional coordinates can be obtained from the intrinsic parameters of the color camera. After the coordinates of all joint points have been calculated, the length between every pair of connected joint points is recorded for use when joints become occluded.
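A sketch of this no-occlusion case, assuming the depth image has already been aligned to the RGB image and is expressed in metres; the intrinsic parameters FX, FY, CX, CY are placeholder values that must be taken from the actual calibration of the color camera:

```python
import numpy as np

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0   # placeholder intrinsics
DEPTH_MIN, DEPTH_MAX = 0.25, 1.0              # valid hand range in metres

def joint_depth(depth_m, u, v):
    """Average the valid depth values in the (u ± 5, v ± 5) window of the
    aligned depth image around a joint's pixel coordinates."""
    win = depth_m[max(v - 5, 0):v + 6, max(u - 5, 0):u + 6].ravel()
    valid = win[(win > DEPTH_MIN) & (win < DEPTH_MAX)]
    return float(valid.mean()) if valid.size else None

def deproject(u, v, d):
    """Back-project a pixel (u, v) with depth d into the color camera
    coordinate system using the pinhole model."""
    return np.array([(u - CX) * d / FX, (v - CY) * d / FY, d])
```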
When a hand joint is occluded: since the wrist joint is never occluded, the coordinates of the finger joint points are calculated from the three-dimensional coordinates of the wrist joint. Taking the index finger as an example, the metacarpophalangeal (MCP) joint coordinates of the index finger are obtained from the wrist joint coordinates, as shown in Fig. 4, where O_C denotes the origin of the color camera coordinate system, I the imaging plane of the color camera, W the wrist joint, MCP_1 and MCP_2 the possible locations of the index finger metacarpophalangeal joint, W' the projection of the wrist joint on the imaging plane, and MCP' the projection of the metacarpophalangeal joint on the imaging plane.
The metacarpophalangeal joint therefore lies on the line O_C MCP'. Since the pixel coordinates of the point MCP' are known, choosing an arbitrary depth value gives the three-dimensional coordinates (x_1, y_1, z_1) of a point on the line; because the line also passes through the origin O_C, its equation is thereby determined. With the recorded unoccluded distance from the wrist joint to the index finger metacarpophalangeal joint as the radius and the wrist joint as the center, a sphere equation is determined. The distance from the wrist joint to the line O_C MCP' determines the number of intersection points between the line and the sphere; here the line intersects the sphere in at least one point. When there is a single intersection point, its coordinates are the coordinates of the index finger metacarpophalangeal joint. When there are two intersection points, the relative depth z of the index finger metacarpophalangeal joint output by MediaPipe indicates whether the joint is closer to or farther from the camera than the wrist joint, which determines whether it is the point MCP_1 or MCP_2. The coordinates of the proximal knuckle of the index finger are then obtained from the metacarpophalangeal joint, followed in turn by the distal knuckle and the fingertip. The coordinates of the other fingers are obtained in the same way.
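A sketch of this occluded-joint recovery, again assuming placeholder pinhole intrinsics: the joint is found as the intersection of the back-projected ray through its pixel with a sphere centred at the wrist whose radius is the bone length recorded while unoccluded, and the root is chosen from the sign of MediaPipe's relative depth:

```python
import numpy as np

def occluded_joint(u, v, wrist, bone_len, closer_than_wrist,
                   fx=615.0, fy=615.0, cx=320.0, cy=240.0):
    """Recover an occluded joint in the color camera coordinate system.
    `wrist` is the 3-D wrist position, `bone_len` the recorded length from
    the wrist to this joint, and `closer_than_wrist` is derived from the
    sign of the joint's relative z output by MediaPipe."""
    # Ray O_C -> MCP': direction through the pixel, normalized
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    d /= np.linalg.norm(d)
    # Solve |t*d - wrist|^2 = bone_len^2, i.e. t^2 - 2t(d.w) + |w|^2 - r^2 = 0
    b = np.dot(d, wrist)
    disc = b * b - (np.dot(wrist, wrist) - bone_len ** 2)
    if disc < 0:
        return None                          # ray misses the sphere
    t1, t2 = b - np.sqrt(disc), b + np.sqrt(disc)
    t = min(t1, t2) if closer_than_wrist else max(t1, t2)
    return t * d                             # joint coordinates
```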
Step 3: the hand model is drawn in the Unity3D platform with the three-dimensional coordinates of the 21 hand joint points as base points, as shown in Fig. 5, and is updated in real time as the coordinates change.
Step 4: the real object model is converted into the head display coordinate system, and virtual-real interaction of the simulated driving actions is realized according to the positional relation between the hand model and the real object model.
Example:
for the high-speed rail simulation driving application, objects which interact with reality in a virtual environment are all operating equipment on the physical driving platform. Modeling the solid driving platforms one by one, and marking the positions of the Tracker of the solid driving platforms, as shown in fig. 6 and 7, wherein A in fig. 6 is the Tracker of the solid driving platforms, B is a brake handle, C is a traction handle, and D is a speed setting handle; in fig. 7, a 'is a position marked on the bridge model, B' is a brake handle, C 'is a traction handle, D' is a speed setting handle, corresponding to fig. 6; then, in order to ensure that the coordinates of the cab model in the Tracker coordinate system are consistent with the coordinates of the physical cab in the Tracker coordinate system, the coordinates are obtained by overlapping the actual coordinates of Tracker in Unity3D with the mark points on the cab model.
Thus, the distance and direction to each operating device on the driving platform can be judged from the hand model observed from the VR viewpoint, and when the hand model touches any operating device on the driving platform in the virtual environment, the corresponding operation is carried out on it in the real environment, as shown in Figs. 8 and 9.
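A minimal sketch of such a touch check between the hand model and the registered operating devices; the device positions and the touch threshold below are hypothetical values used only for illustration:

```python
import numpy as np

# Hypothetical positions of the operating devices in the head display /
# world coordinate system, taken from the registered platform model.
DEVICES = {
    "brake_handle":    np.array([0.32, 0.90, 0.45]),
    "traction_handle": np.array([0.10, 0.92, 0.47]),
    "speed_handle":    np.array([-0.15, 0.91, 0.46]),
}
TOUCH_RADIUS = 0.05   # metres, illustrative threshold

def touched_device(fingertips):
    """Return the first operating device whose distance to any fingertip of
    the hand model is below the touch threshold, or None."""
    for name, pos in DEVICES.items():
        for tip in fingertips:
            if np.linalg.norm(tip - pos) < TOUCH_RADIUS:
                return name
    return None
```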
By fusing and mapping the real object and the hand coordinates into the virtual environment, the invention enables interaction with the real object through gesture actions performed in the virtual environment, and offers notable advantages such as flexible operation, robustness to occlusion, and a good somatosensory experience.
The above description is only a preferred embodiment of the present invention; several modifications and refinements can be made in practical implementations without departing from the essence of the method and core devices of the present invention.

Claims (2)

1. A virtual-real interaction method for simulating driving actions of a high-speed rail based on gesture recognition is characterized by comprising the following steps:
step 1: fixing the depth camera on the head display, and fusing the three-dimensional data captured by the depth camera into the head display coordinate system;
according to the size of the head display, the size of the depth camera and the position at which the depth camera is fixed on the head display, a spatial conversion matrix M from the camera coordinate system to the head display coordinate system is obtained, specifically:
$$
\begin{bmatrix} x_H \\ y_H \\ z_H \\ 1 \end{bmatrix}
= M
\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix},
\qquad
M =
\begin{bmatrix}
r_1 & r_2 & r_3 & t_x \\
r_4 & r_5 & r_6 & t_y \\
r_7 & r_8 & r_9 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
in the formula, x_H, y_H, z_H denote coordinates in the head display coordinate system, x_C, y_C, z_C denote coordinates in the camera coordinate system, r_1 to r_9 denote the rotation parameters, and t_x, t_y, t_z denote the translation parameters;
step 2: detecting the 21 hand joint points using the MediaPipe framework, and obtaining the three-dimensional coordinates of the hand joint points from the depth data;
the depth camera provides a depth image and an RGB image; the RGB image is input into MediaPipe, which outputs 21 joint coordinates (x, y, z), wherein x and y are coordinates normalized to the image dimensions and z is depth relative to the wrist joint;
the depth image is aligned to the RGB image so that the depth value corresponding to any RGB pixel coordinate can be obtained; the pixel coordinates of the 21 hand joint points are obtained according to the resolution of the input image, and the pixel coordinates are converted into three-dimensional coordinates, yielding the three-dimensional coordinates of the hand joint points in the camera coordinate system;
step 3: drawing a hand model in the Unity3D platform with the three-dimensional coordinates of the 21 hand joint points as base points, and updating it in real time as the coordinates change;
step 4: converting the real object model into the head display coordinate system, and realizing virtual-real interaction of the simulated driving actions according to the positional relation between the hand model and the real object model.
2. The gesture-recognition-based virtual-real interaction method for simulated high-speed rail driving actions according to claim 1, wherein in step 2, when there is no joint occlusion, the three-dimensional coordinates of a hand joint point in the color camera coordinate system are obtained directly from its pixel coordinates; the pixel coordinates of a joint are denoted (u, v), the depth values of the roughly 100 surrounding pixels, i.e., all pixel coordinates in the range (u ± 5, v ± 5), are taken, and invalid values among them are removed; the hand is kept between 0.25 m and 1 m from the camera, depth values outside this range are removed, and the average of all valid values is taken as the depth value of the joint;
after the coordinates of all joint points are calculated, the length between every pair of connected joint points is recorded for use when joint occlusion occurs; when joints are occluded, the coordinates of the other joint points are calculated from the three-dimensional coordinates of the wrist joint.
CN202210452815.8A 2022-04-27 2022-04-27 High-speed rail driving action simulation virtual-real interaction method based on gesture recognition Active CN114840079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210452815.8A CN114840079B (en) 2022-04-27 2022-04-27 High-speed rail driving action simulation virtual-real interaction method based on gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210452815.8A CN114840079B (en) 2022-04-27 2022-04-27 High-speed rail driving action simulation virtual-real interaction method based on gesture recognition

Publications (2)

Publication Number Publication Date
CN114840079A true CN114840079A (en) 2022-08-02
CN114840079B CN114840079B (en) 2023-03-10

Family

ID=82566862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210452815.8A Active CN114840079B (en) 2022-04-27 2022-04-27 High-speed rail driving action simulation virtual-real interaction method based on gesture recognition

Country Status (1)

Country Link
CN (1) CN114840079B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117519487A (en) * 2024-01-05 2024-02-06 安徽建筑大学 Development machine control teaching auxiliary training system based on vision dynamic capture

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019016341A (en) * 2017-07-10 2019-01-31 アンリツ株式会社 Test system and testing method for on-vehicle application
CN109032329A (en) * 2018-05-31 2018-12-18 中国人民解放军军事科学院国防科技创新研究院 Space Consistency keeping method towards the interaction of more people's augmented realities
CN110045821A (en) * 2019-03-12 2019-07-23 杭州电子科技大学 A kind of augmented reality exchange method of Virtual studio hall
CN110610547A (en) * 2019-09-18 2019-12-24 深圳市瑞立视多媒体科技有限公司 Cabin training method and system based on virtual reality and storage medium
CN113421346A (en) * 2021-06-30 2021-09-21 暨南大学 Design method of AR-HUD head-up display interface for enhancing driving feeling
CN113646736A (en) * 2021-07-17 2021-11-12 华为技术有限公司 Gesture recognition method, device and system and vehicle
CN113934389A (en) * 2021-09-10 2022-01-14 杭州思看科技有限公司 Three-dimensional scanning processing method, system, processing device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
TEJO CHALASANI et al.: "Egocentric Gesture Recognition for Head-Mounted AR devices", published online at https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8699166 *
LI HAO et al.: "Dynamic gesture recognition method for train drivers based on machine vision", Transducer and Microsystem Technologies *
ZHAO HENGYUE et al.: "Design and VR implementation of a three-dimensional station yard simulation platform for the CTCS-3 train control simulation system", Railway Signalling & Communication *
ZHENG QIAN et al.: "Discussion on the interaction modes of cloud broadcast directing consoles", Radio and Television Information *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117519487A (en) * 2024-01-05 2024-02-06 安徽建筑大学 Development machine control teaching auxiliary training system based on vision dynamic capture
CN117519487B (en) * 2024-01-05 2024-03-22 安徽建筑大学 Development machine control teaching auxiliary training system based on vision dynamic capture

Also Published As

Publication number Publication date
CN114840079B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
Memo et al. Head-mounted gesture controlled interface for human-computer interaction
JP6116064B2 (en) Gesture reference control system for vehicle interface
CN107992188B (en) Virtual reality interaction method, device and system
CN107622257A (en) A kind of neural network training method and three-dimension gesture Attitude estimation method
US20090322671A1 (en) Touch screen augmented reality system and method
JP2779448B2 (en) Sign language converter
KR102269414B1 (en) Method and device for object manipulation in virtual/augmented reality based on hand motion capture device
JP5697590B2 (en) Gesture-based control using 3D information extracted from extended subject depth
CN104035557B (en) Kinect action identification method based on joint activeness
CN102884492A (en) Pointing device of augmented reality
CN110142770B (en) Robot teaching system and method based on head-mounted display device
CN114840079B (en) High-speed rail driving action simulation virtual-real interaction method based on gesture recognition
CN115546365A (en) Virtual human driving method and system
CN110333776A (en) A kind of military equipment operation training system and method based on wearable device
Zhang et al. A real-time upper-body robot imitation system
CN108020223B (en) Attitude measurement method of force feedback equipment handle based on inertia measurement device
JP3742879B2 (en) Robot arm / hand operation control method, robot arm / hand operation control system
CN210361314U (en) Robot teaching device based on augmented reality technology
JP2011248443A (en) Presentation device and method for augmented reality image
CN109147057A (en) A kind of virtual hand collision checking method towards wearable haptic apparatus
Akahane et al. Two-handed multi-finger string-based haptic interface SPIDAR-8
CN112181135B (en) 6-DOF visual and tactile interaction method based on augmented reality
Besnea et al. Experiments regarding implementation of a virtual training environment for automotive industry
CN114756130A (en) Hand virtual-real interaction system
CN111221407A (en) Motion capture method and device for multiple fingers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant