WO2019218457A1 - Virtual reality driving method based on arm motion capture, and virtual reality system - Google Patents

Virtual reality driving method based on arm motion capture, and virtual reality system Download PDF

Info

Publication number
WO2019218457A1
Authority
WO
WIPO (PCT)
Prior art keywords
posture
arm
data
preset
joint
Prior art date
Application number
PCT/CN2018/097078
Other languages
French (fr)
Chinese (zh)
Inventor
蔡树彬
温锦纯
明仲
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Publication of WO2019218457A1 publication Critical patent/WO2019218457A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present invention relates to the field of intelligent terminal technologies, and in particular to a virtual reality driving method based on arm motion capture and a virtual reality system.
  • Virtual reality (VR) is a technology that "seamlessly" integrates real-world information with virtual-world information. Using computers and other cutting-edge technologies, it combines reality with the illusory in ways that cannot be experienced in the real world: after simulation, virtual characters or objects are superimposed on the real world and perceived by the human visual senses, achieving an experience beyond reality. Real environments and virtual objects can thus be superimposed in the same space in real time.
  • existing virtual reality implementations generally use a motion capture system to recognize human motion and control a virtual reality character accordingly, in particular through arm motion. For example, arm motion may be recognized with inertial sensors or with computer vision.
  • however, neither approach captures arm motion well.
  • computer-vision-based methods are easily disturbed by the external environment, such as lighting conditions, background, and occlusions.
  • inertial-sensor-based methods suffer from measurement noise, drift, and similar error sources, and cannot track accurately over long periods.
  • the present invention aims to provide a virtual reality driving method based on arm motion capture and a virtual reality system.
  • a virtual reality driving method based on arm motion capture comprising:
  • initializing the preset posture to obtain initial pose data, wherein the initial pose data includes preset pose data and raw human body data;
  • the first arm posture includes a trunk joint and arm kinematic chain joints;
  • the virtual reality driving method based on arm motion capture, wherein the motion capture system includes at least a head display, left and right handles, left and right upper arm trackers, and a torso tracker.
  • the virtual reality driving method based on the arm motion capture wherein when the human body wears the motion capture system, initializing the preset posture to obtain the initial pose data specifically includes:
  • when the human body wears the motion capture system, capturing preset pose data of the human body in a preset posture, wherein the preset posture includes a first posture and a second posture;
  • the virtual reality driving method based on the arm motion capture wherein the calculating the initial data of the human body according to the preset pose data to obtain the initial pose data specifically includes:
  • the virtual reality driving method based on the arm motion capture wherein the capturing the real-time attitude data of the human body, determining the first arm posture by the arm-to-part transformation matrix method according to the real-time attitude data and the initial pose data specifically includes:
  • the virtual reality driving method based on arm motion capture, wherein calculating the angle between the upper arm and the forearm according to the shoulder joint data and the elbow joint data, and calculating the forearm pose data according to the included angle to obtain the first arm posture, specifically includes:
  • the virtual reality driving method based on the arm motion capture wherein when the human body wears the motion capture system, initializing the preset posture to obtain the initial pose data includes:
  • the preset skeleton model is received and stored, and each joint coordinate system of the preset skeleton model is associated with the preset built-in model to obtain a correspondence between the preset skeleton model and the preset built-in model.
  • the virtual reality driving method based on arm motion capture, wherein converting the first arm posture into a second arm posture of a preset virtual character according to a preset built-in model, and driving the preset virtual character according to the second arm posture, specifically includes:
  • a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in the virtual reality driving method based on arm motion capture as described above.
  • a virtual reality system includes: a motion capture system and a virtual reality device, the virtual reality device including a processor, a memory, and a communication bus; and the memory stores a computer readable program executable by the processor;
  • the communication bus implements connection communication between the processor and the memory
  • the step in the virtual reality driving method based on the arm motion capture of any of the above is implemented when the processor executes the computer readable program.
  • the present invention provides a virtual reality driving method based on arm motion capture and a virtual reality system, the method comprising: when the human body wears the motion capture system, initializing a preset posture to obtain initial pose data; capturing real-time posture data of the human body, and determining a first arm posture from the real-time posture data and the initial pose data by the method of transformation matrices between the arm links, wherein the first arm posture includes a trunk joint and arm kinematic chain joints; converting the first arm posture into a second arm posture of a preset virtual character according to a preset built-in model, and driving the preset virtual character according to the second arm posture.
  • the present invention uses the arm kinematic chain structure to determine the arm posture data in the form of a transformation matrix between the links, thereby improving the accuracy of the arm motion recognition.
  • the arm posture data is converted to drive the movement of the 3D virtual character, ensuring that the spatial positions of the virtual character and its arm remain consistent with those of the real person.
  • FIG. 1 is a flowchart of an embodiment of a virtual reality driving method based on arm motion capture provided by the present invention.
  • FIG. 2 is a schematic diagram of a wear motion capture system in an embodiment of a virtual reality driving method based on arm motion capture provided by the present invention.
  • FIG. 3 is a schematic diagram of a first posture in an embodiment of a virtual reality driving method based on arm motion capture according to the present invention.
  • FIG. 4 is a schematic diagram of a second posture in an embodiment of a virtual reality driving method based on arm motion capture according to the present invention.
  • FIG. 5 is a schematic structural diagram of a virtual reality device according to an embodiment of a virtual reality system according to the present invention.
  • the present invention provides a virtual reality driving method and a virtual reality system based on arm motion capture.
  • the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It is understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
  • FIG. 1 is a flowchart of a preferred embodiment of the virtual reality driving method based on arm motion capture provided by the present invention. The method includes:
  • S10: when the human body wears the motion capture system, the preset posture is initialized to obtain initial pose data, wherein the initial pose data includes preset pose data and raw human body data.
  • the motion capture system is configured to capture human body motion, and includes at least a head display, left and right handles, left and right upper arm trackers, and a torso tracker.
  • the head display is worn on the human head;
  • the left and right handles are held in the left and right hands of the human body respectively;
  • the left upper arm tracker is worn on the left upper arm;
  • the right upper arm tracker is worn on the right upper arm;
  • the torso tracker is worn on the chest.
  • the head display is used to collect head posture data;
  • the left and right handles are used to collect wrist joint posture data;
  • the left and right upper arm trackers are used to collect shoulder joint posture data.
  • the preset posture includes a first posture and a second posture.
  • after the human body is equipped with the motion capture system, the user assumes the first posture and the second posture in turn; the motion capture system captures the posture data of the first posture and the posture data of the second posture respectively, and the initial pose data is obtained from these two sets of posture data.
  • initializing the preset posture to obtain the initial pose data specifically includes:
  • the first posture is preferably an "I"-type posture;
  • the second posture is preferably a "T"-type posture.
  • in the "T"-type posture, the body stands upright with both arms stretched out horizontally to the left and right;
  • in the "I"-type posture, the body stands upright with both arms hanging naturally at the sides.
  • the first posture and the second posture may be assumed by the user according to prompts, and the preset posture is initialized by touching the handle once the posture is in place.
  • when the human body is in the "I" posture, the first head pose data is collected by the head display and the first left and right arm pose data by the left and right upper arm trackers; when the human body is in the "T" posture, the second head pose data is collected by the head display and the second left and right arm pose data by the left and right upper arm trackers, wherein the pose data includes position data and posture data.
  • the skeleton model of the virtual character in virtual space is pre-stored and recorded as the preset skeleton model; the coordinate system corresponding to the head position of the preset skeleton model is set as the root coordinate system, and each joint is configured with a local coordinate system relative to the root coordinate system.
  • the head position can be collected by the head display; the relative postures of the thoracic joint and the left and right clavicle joints remain unchanged during movement, so the positions of the thoracic joint and the left and right clavicle joints can be obtained from the torso tracker;
  • the postures of the left and right upper arms are obtained from the left and right upper arm trackers;
  • the relative postures of the left and right palms and the handles are always unchanged during movement, so the palm postures can be calculated from the handle poses.
  • the posture data of the torso tracker and of the left and right upper arm trackers can be collected, wherein the posture data of the torso tracker is recorded as q_BTracker, the attitude data of the left upper arm tracker as q_STrackerL, and the attitude data of the right upper arm tracker as q_STrackerR, each represented as a quaternion.
  • the preset skeleton model is corrected by using the "I" type posture data.
  • the initial body data of the human body may be calculated according to the first posture data and the second posture data.
  • the calculating the initial data of the human body according to the preset pose data to obtain the initial pose data specifically includes:
  • from the first head pose data and the first left and right arm pose data obtained in the "I"-type posture, the distance between the two shoulders (i.e., the body width) and the centre point between the two shoulders (i.e., the position of the thoracic joint) are calculated, and then the vector from the thoracic joint point to the head (i.e., from the thoracic joint position to the head position) is calculated.
  • the lengths of the upper arm and the forearm can be calculated according to the national standard GB/T 10000-1988, "Human dimensions of Chinese adults".
  • the height is calculated from the average of the head-display z-axis heights in the "I"-type and "T"-type postures; for example, height = average value + an offset, wherein the offset is preset and can be obtained from a large amount of experimental data.
  • the lengths of the spine, neck, legs, thighs, and calves can then be calculated according to the proportions in "Human dimensions of Chinese adults", thereby obtaining the raw human body data.
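The anthropometric step above can be sketched as follows. The height formula (mean head-display z over the two calibration poses plus a preset offset) is taken from the text; the proportion values for segment lengths come from GB/T 10000-1988 and are not reproduced in the text, so the ratios below are illustrative placeholders only.

```python
# Placeholder ratios: NOT the actual figures from GB/T 10000-1988.
SEGMENT_RATIOS = {"upper_arm": 0.19, "forearm": 0.15, "spine": 0.30}

def estimate_height(head_z_i_pose, head_z_t_pose, offset):
    """height = mean head-display z over the "I" and "T" poses + preset offset."""
    return (head_z_i_pose + head_z_t_pose) / 2.0 + offset

def segment_lengths(height):
    """Derive limb segment lengths from height via fixed proportions."""
    return {name: ratio * height for name, ratio in SEGMENT_RATIOS.items()}

h = estimate_height(1.62, 1.63, 0.10)
print(h, segment_lengths(h)["upper_arm"])
```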
  • S20: capture real-time posture data of the human body, and determine a first arm posture from the real-time posture data and the initial pose data by the method of transformation matrices between the arm links, wherein the first arm posture includes the trunk joint and the arm kinematic chain joints.
  • the motion capture system captures the posture data of the human body in real time
  • the posture data can be collected by the head display, the left and right handles, the left and right upper arm trackers, and the torso tracker.
  • the head display, the left and right handles, the left and right upper arm trackers, and the torso tracker capture the posture data of the head, the left and right palms, the left and right upper arms, and the torso in real time.
  • the pose of each joint of the human body can be calculated from the initial pose data and the real-time posture data, wherein the posture component is represented in quaternion form.
  • the coordinates of the human torso and the joints of the arms are updated based on the initial pose data and the real-time posture data.
  • the capturing the real-time attitude data of the human body, determining the first arm posture by the arm-to-part transformation matrix method according to the real-time attitude data and the initial pose data specifically includes:
  • S21 capturing real-time posture data of the human body, calculating real-time data of the trunk joint according to the preset torso posture formula, and calculating real-time position data of the upper arm according to the preset upper arm posture formula.
  • the arm motion is described by rigid body posture (rotation), and the quaternion method is employed.
  • quaternions are widely used in graphics to represent rotation transforms; they support multiplication, inversion, conjugation, and rotation interpolation.
  • the form of the quaternion can be: q = (q_v, q_w), where
  • q represents a quaternion,
  • q_v = (q_x, q_y, q_z) is the imaginary part,
  • q_w is the real part.
  • a rotation by the angle θ around the unit vector (x, y, z) can be expressed with the quaternion q = (x·sin(θ/2), y·sin(θ/2), z·sin(θ/2), cos(θ/2)).
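The quaternion operations above can be sketched in Python. This is a generic, minimal implementation of axis-angle quaternions, the Hamilton product, and the rotation v' = q ⊗ (0, v) ⊗ q*, not code taken from the patent; quaternions are stored as (w, x, y, z).

```python
import math

def quat_from_axis_angle(axis, theta):
    """Unit quaternion (w, x, y, z) rotating by theta about a unit axis."""
    x, y, z = axis
    s = math.sin(theta / 2.0)
    return (math.cos(theta / 2.0), x * s, y * s, z * s)

def quat_mul(a, b):
    """Hamilton product a ⊗ b."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    """Conjugate quaternion (the inverse, for unit quaternions)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q: v' = q ⊗ (0, v) ⊗ q*."""
    p = (0.0,) + tuple(v)
    w, x, y, z = quat_mul(quat_mul(q, p), quat_conj(q))
    return (x, y, z)

# Rotate (1, 0, 0) by 90° about the z-axis: result is approximately (0, 1, 0).
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
print(quat_rotate(q, (1.0, 0.0, 0.0)))
```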
  • the rigid body transformation may be considered by combining the position information and the attitude information of the rigid body into a single transformation matrix, usually written as a 4×4 homogeneous matrix:

    | u_x u_y u_z 0 |
    | v_x v_y v_z 0 |
    | w_x w_y w_z 0 |
    | p_x p_y p_z 1 |

  • a transformation matrix ^A T_B represents the spatial description of rigid body B in the A coordinate system (for example, ^W T_Shoulder represents the transformation matrix of the shoulder in the world coordinate system); (p_x, p_y, p_z) represents the position information of the rigid body, and the axis rows (u, v, w) represent its posture information.
  • the transformation matrix can also be read as the local coordinate frame of the rigid body:
  • (p_x, p_y, p_z) represents the vector from the origin of the world coordinate system to the rigid body;
  • each of the first three rows represents one of its orthogonal axes expressed in the parent frame:
  • (u_x, u_y, u_z) represents its x-axis, (v_x, v_y, v_z) its y-axis, and (w_x, w_y, w_z) its z-axis.
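A minimal sketch of the homogeneous transform described above. It uses the row-vector convention implied by "each row is an orthogonal axis" (points multiply the matrix from the left as [p 1]·T); that convention, and the function names, are assumptions of this sketch.

```python
def make_transform(u, v, w, p):
    """4×4 homogeneous matrix: rows 0-2 hold the rigid body's x/y/z axis
    vectors (u, v, w) expressed in the parent frame, row 3 holds its origin p."""
    return [
        [u[0], u[1], u[2], 0.0],
        [v[0], v[1], v[2], 0.0],
        [w[0], w[1], w[2], 0.0],
        [p[0], p[1], p[2], 1.0],
    ]

def transform_point(point, T):
    """Apply T to a point using row vectors: p' = [p 1] · T."""
    row = [point[0], point[1], point[2], 1.0]
    return tuple(sum(row[k] * T[k][j] for k in range(4)) for j in range(3))

# Identity axes with the origin shifted by (1, 2, 3): a pure translation.
T = make_transform((1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 2, 3))
print(transform_point((0, 0, 0), T))
```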
  • the preset torso posture formula may be: q_Body = q_BTracker ⊗ q^I_Body, where
  • q_BTracker is the real-time attitude data of the torso tracker,
  • q_Body is the real-time posture of the thoracic joint,
  • q^I_Body is the initial posture data of the thoracic joint obtained in the "I" posture.
  • the upper arm real-time pose formula may likewise be: q_Shoulder = q_STracker ⊗ q^I_Shoulder, where
  • q_Shoulder is the upper arm real-time pose data,
  • q_STracker is the real-time posture of the upper arm tracker,
  • q^I_Shoulder is the initial posture data of the upper arm acquired in the "I" posture.
  • the upper arm tracker is divided into two parts, a left upper arm tracker and a right upper arm tracker; their real-time attitude data are denoted q_STrackerL and q_STrackerR respectively, and q_STracker is used to refer to either of them.
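The tracker-to-joint posture update can be sketched as below. The exact composition order of the patent's formulas is not fully recoverable, so this sketch assumes the common calibration form: express the tracker's current reading relative to its "I"-pose reading, then compose with the joint's initial posture. The function names are assumptions.

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def joint_posture(q_tracker_now, q_tracker_init, q_joint_init):
    # Tracker reading relative to its "I"-pose reading, then the joint's
    # initial posture (composition order is an assumption of this sketch).
    q_rel = quat_mul(q_tracker_now, quat_conj(q_tracker_init))
    return quat_mul(q_rel, q_joint_init)

# If the tracker has not moved since calibration, the joint keeps its
# initial posture.
s = math.sqrt(0.5)
q_init = (s, 0.0, 0.0, s)          # tracker reading captured in the "I" posture
q_joint0 = (1.0, 0.0, 0.0, 0.0)    # initial thoracic-joint posture
print(joint_posture(q_init, q_init, q_joint0))
```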
  • the positions of the trunk and the upper arms may also be offset-adjusted from the chest position, the adjustment value being half the body width.
  • the chest position is equal to the head-display position + the head-to-chest vector.
  • the position of the left upper arm joint (i.e., the left shoulder joint) is obtained as the chest position + half the body width, and the position of the right upper arm joint (i.e., the right shoulder joint) as the chest position − half the body width.
  • the position information of the neck, chest, and trunk can be obtained from their relative positional relationships to the head, and the postures of the left and right clavicles are kept consistent with the trunk.
  • the shoulder joint position (i.e., the upper arm start position) p_shoulder can be read from the initial pose data, and the real-time shoulder joint position p_shoulder is obtained as the chest centre position ± half the body width, where + gives the left shoulder joint position and − the right shoulder joint position.
  • the calculation formula of the shoulder joint position may be: p_shoulder = p_Ribcage ± L_bodyWidth/2, where
  • p_Ribcage is the chest position,
  • L_bodyWidth is the body width.
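The chest and shoulder offsets above can be sketched as follows. The choice of the torso's lateral axis as the offset direction, and the function names, are assumptions of this sketch.

```python
def chest_position(p_head, head_to_chest):
    """Chest position = head-display position + head-to-chest vector."""
    return tuple(h + v for h, v in zip(p_head, head_to_chest))

def shoulder_positions(p_ribcage, lateral_axis, body_width):
    """Left/right shoulder joints: chest centre ± half the body width along
    the torso's lateral (left/right) unit axis."""
    half = body_width / 2.0
    left = tuple(c + half * a for c, a in zip(p_ribcage, lateral_axis))
    right = tuple(c - half * a for c, a in zip(p_ribcage, lateral_axis))
    return left, right

chest = chest_position((0.0, 0.0, 1.70), (0.0, 0.0, -0.25))
print(shoulder_positions(chest, (1.0, 0.0, 0.0), 0.40))
```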
  • the elbow joint is a child node of the shoulder joint;
  • the elbow joint position p_elbow is offset from the shoulder by the upper arm length along the x-axis direction of the shoulder joint coordinate system, i.e., p_elbow = p_shoulder + R(q_Shoulder)·(L_Upperarm, 0, 0), where
  • p_elbow is the elbow joint position,
  • p_shoulder is the shoulder joint position,
  • L_Upperarm is the upper arm length.
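A sketch of the elbow-position step: the (L_Upperarm, 0, 0) offset is rotated by the shoulder posture quaternion and added to the shoulder position. The expanded quaternion-rotation formula is a standard identity, not code from the patent.

```python
import math

def quat_rotate(q, v):
    """Rotate v by unit quaternion q = (w, x, y, z), using the identity
    v' = v + 2·r × (r × v + w·v) with r = (x, y, z)."""
    w, rx, ry, rz = q
    cx = ry*v[2] - rz*v[1] + w*v[0]
    cy = rz*v[0] - rx*v[2] + w*v[1]
    cz = rx*v[1] - ry*v[0] + w*v[2]
    return (v[0] + 2*(ry*cz - rz*cy),
            v[1] + 2*(rz*cx - rx*cz),
            v[2] + 2*(rx*cy - ry*cx))

def elbow_position(p_shoulder, q_shoulder, upper_arm_length):
    """p_elbow = p_shoulder + (L_Upperarm, 0, 0) rotated into the shoulder frame."""
    offset = quat_rotate(q_shoulder, (upper_arm_length, 0.0, 0.0))
    return tuple(s + o for s, o in zip(p_shoulder, offset))

# Shoulder frame rotated 90° about z: its local x-axis points along world +y,
# so the elbow lies 0.3 m along +y from the shoulder.
q = (math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))
print(elbow_position((0.0, 1.4, 0.0), q, 0.3))
```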
  • the elbow joint coordinate system initially coincides with the shoulder joint coordinate system; the elbow is a revolute joint with a single rotational degree of freedom, that is, the forearm can only rotate about the z-axis of the elbow joint, so determining the angle θ between the upper arm and the forearm determines the forearm posture.
  • the angle θ between the upper arm and the forearm can be calculated from the shoulder joint position p_shoulder, the elbow joint position p_elbow, and the handle position p_hand; it may be computed as θ_elbow = arccos(V_e2s · V_e2h), where
  • V_e2s = (p_shoulder − p_elbow)/‖p_shoulder − p_elbow‖ denotes the unit vector from the elbow joint to the shoulder joint, and V_e2h = (p_hand − p_elbow)/‖p_hand − p_elbow‖ denotes the unit vector from the elbow joint to the wrist joint;
  • rotating the forearm by θ_elbow about the z-axis of the elbow joint yields the new elbow joint posture, i.e., the elbow joint posture is captured.
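The elbow angle can be computed as the angle between the elbow-to-shoulder and elbow-to-wrist unit vectors; this is one natural reading of the formula whose symbols survive above, sketched with hypothetical function names.

```python
import math

def unit(v):
    """Normalize a 3-vector."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def elbow_angle(p_shoulder, p_elbow, p_hand):
    """Angle between the upper arm and the forearm: the angle between the unit
    vectors from the elbow to the shoulder (V_e2s) and to the wrist (V_e2h)."""
    v_e2s = unit(tuple(s - e for s, e in zip(p_shoulder, p_elbow)))
    v_e2h = unit(tuple(h - e for h, e in zip(p_hand, p_elbow)))
    dot = sum(a * b for a, b in zip(v_e2s, v_e2h))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding

# A right angle at the elbow: shoulder, elbow, and wrist at perpendicular legs.
print(math.degrees(elbow_angle((0, 0, 0), (0.3, 0, 0), (0.3, -0.3, 0))))
```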
  • calculating the angle between the upper arm and the forearm according to the shoulder joint data and the elbow joint data, and calculating the forearm posture data according to the included angle to obtain the first arm posture, specifically includes:
  • the preset built-in model is predefined and is independent of the local coordinate systems of the joints in the preset skeleton model; the preset built-in model forms an orthogonal basis from a forward axis, a horizontal axis, and a vertical axis.
  • each terminal joint of any preset skeleton model pre-stored by the virtual reality system can be identified by the coordinate axes of the built-in model, so that differences between the local coordinate axes of different skeleton models can be ignored.
  • the method includes:
  • S030 Receive and store a preset skeleton model, and associate each joint coordinate system of the preset skeleton model with a preset built-in model to obtain a correspondence between the preset skeleton model and the preset built-in model.
  • the built-in model corresponds to each joint of the preset skeleton model;
  • when a preset skeleton model is imported, a correspondence is established between the built-in model and the joints of the preset skeleton model based on the joint names and joint coordinates of the preset skeleton model, so that when captured real-time pose data is imported into the preset built-in model, it can be automatically transferred to the preset skeleton model, thereby controlling the virtual character corresponding to that skeleton model.
  • a plurality of skeleton models may be preset in the virtual reality system; the coordinate systems of their joint points differ, but all preset models share some identical properties.
  • these shared properties of the preset skeleton models can be extracted, and a built-in skeleton model generated from them.
  • every imported skeleton model has the head above the feet, which fixes the up vector of the preset built-in model, and the left hand to the left of the right hand, which fixes the model's right vector; the forward vector is then obtained as the cross product of the up and right vectors, which determines the orthonormal basis of the built-in skeleton model.
  • the built-in model determines its bone joints according to the correspondence relationship;
  • each axis of a built-in-model bone joint may be described by a custom data structure containing the type of the coordinate axis and a tag, where the type indicates which skeleton-model axis it corresponds to and the tag indicates the direction relationship between that axis and the built-in model.
  • the tag may be 1 or -1, with 1 indicating the same direction and -1 indicating the opposite direction.
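The axis type plus ±1 tag described above might look like the following sketch. The names `AxisMapping` and `remap_vector` are hypothetical, and pairing the vertical axis with the thoracic joint's z-axis is an assumption; only the forward↔y (opposite) and horizontal↔x (opposite) pairings are stated in the text.

```python
from dataclasses import dataclass

@dataclass
class AxisMapping:
    axis: str   # which local axis of the skeleton-model joint: "x", "y" or "z"
    sign: int   # 1 = same direction as the built-in axis, -1 = opposite

# Correspondence for the thoracic joint, per the text (vertical↔z assumed):
thoracic_mapping = {
    "forward":    AxisMapping("y", -1),
    "horizontal": AxisMapping("x", -1),
    "vertical":   AxisMapping("z", +1),
}

def remap_vector(v_xyz, mapping):
    """Re-express a joint-local vector (x, y, z) on the built-in model's
    forward/horizontal/vertical orthogonal basis using the stored tags."""
    index = {"x": 0, "y": 1, "z": 2}
    return {name: m.sign * v_xyz[index[m.axis]] for name, m in mapping.items()}

print(remap_vector((1.0, 2.0, 3.0), thoracic_mapping))
```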
  • the correspondences between the joints of the preset skeleton model and the built-in model can be established in sequence: when the spatial information of the joint points of the preset skeleton model is first read, the spatial information of each joint point is associated with the preset built-in model, that is, the joint-point coordinate axes of the imported skeleton model resources are marked against the built-in model axes using the data structure described above.
  • for example, for the thoracic joint point: the forward axis of the built-in model corresponds to its y-axis (opposite direction), the horizontal axis corresponds to its x-axis (opposite direction), and the vertical axis corresponds to its z-axis.
  • the coordinate axes of the other joint points of the preset skeleton model are likewise compared with the built-in model axes; the correspondence between each joint point's coordinate axes and the three built-in axes (forward axis, horizontal axis, and vertical axis) is recorded together with the direction of each axis relative to the built-in model, thereby establishing the correspondence between the preset skeleton model and the preset built-in model.
  • the preset built-in model can then be used for redirection (retargeting) onto the corresponding preset skeleton model.
  • the captured data can also be converted into the joint coordinate system of the preset skeleton model.
  • the converting the first arm posture into the second arm posture of the preset virtual character according to the preset built-in model, and driving the preset virtual character according to the second arm posture comprises:
  • the pose data (including position and posture data) of each joint point in the first arm posture is acquired; the x-, y-, and z-axes of each joint point's pose data are matched to the coordinate axes of the preset built-in model, and, according to the correspondence between the built-in model coordinate axes and the joint-point coordinate axes of the preset skeleton model, the captured data are converted onto the coordinate axes of the preset skeleton model, so that the first arm posture is converted into the second arm posture and the virtual character corresponding to the preset skeleton model is driven.
  • the process of reorienting the first arm posture may be: matching a coordinate axis of the captured data with a coordinate axis of the preset built-in model, and redirecting the coordinate system of the captured data.
  • the specific process may be: for the three coordinate axes of each joint point in the captured data and the three axes of the built-in model, locate the forward, horizontal, and vertical axes of the built-in model on the corresponding coordinate axes of the data model; for example, if the forward axis is tagged as the z-axis, take the joint's z-axis vector, and if the horizontal axis is tagged as the x-axis, take its x-axis vector.
  • the present application further provides a computer readable storage medium, wherein the computer readable storage medium stores one or more programs, and the one or more programs may be executed by one or more processors to implement the steps in the virtual reality driving method based on arm motion capture as described in the above embodiments.
  • the present invention further provides a virtual reality system, including: a motion capture system and a virtual reality device, as shown in FIG. 5, the virtual reality device includes at least one processor (processor) 20; display 21; and memory 22, which may also include a communication interface 23 and a bus 24.
  • the processor 20, the display screen 21, the memory 22, and the communication interface 23 can complete communication with each other through the bus 24.
  • the display screen 21 is set to display a user guidance interface preset in the initial setting mode.
  • the communication interface 23 can transmit information.
  • Processor 20 may invoke logic instructions in memory 22 to perform the methods in the above-described embodiments.
  • logic instructions in the memory 22 described above may be implemented in the form of software functional units and sold or used as separate products, and may be stored in a computer readable storage medium.
  • the memory 22 is a computer readable storage medium, and can be configured to store a software program, a computer executable program, a program instruction or a module corresponding to the method in the embodiment of the present disclosure.
  • the processor 20 performs the functional application and data processing by executing software programs, instructions or modules stored in the memory 22, i.e., implements the methods in the above embodiments.
  • the memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application required for at least one function, and the storage data area may store data created according to usage of the terminal device, and the like. Further, the memory 22 may include a high-speed random access memory, and may also include a nonvolatile memory. For example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code may also be used as the storage medium.


Abstract

A virtual reality driving method based on arm motion capture, and a virtual reality system. The method comprises: when a motion capture system is worn on the human body, initializing a pre-set posture to obtain initial position and posture data; capturing real-time posture data of the human body, and determining a first arm posture by means of a transformation matrix between arm connecting rod members according to the real-time posture data and the initial position and posture data; and converting, according to a pre-set built-in model, the first arm posture into a second arm posture of a pre-set virtual character, and driving the pre-set virtual character according to the second arm posture (S30). According to acquired initial position and posture data and real-time posture data, an arm kinematic chain structure is used to determine arm posture data by means of a transformation matrix between connecting rods, thereby improving the accuracy of arm motion recognition. Moreover, the arm posture data is converted based on a built-in model to drive a 3D virtual character to move, thereby ensuring that the spatial positions of the virtual character and the arm remain consistent with the spatial position of a real character.

Description

基于手臂动作捕捉的虚拟现实驱动方法及虚拟现实系统Virtual reality driving method based on arm motion capture and virtual reality system 技术领域Technical field
本发明涉及智能终端技术领域,特别涉及一种基于手臂动作捕捉的虚拟现实驱动方法及虚拟现实系统。The present invention relates to the field of intelligent terminal technologies, and in particular, to a virtual reality driving method based on arm motion capture and a virtual reality system.
背景技术Background technique
虚拟现实(VR),是一种将真实世界信息和虚拟世界信息“无缝”集成的新技术,通过电脑等前沿技术,把原本在现实世界中无法体验到的真实与虚幻相结合。模拟仿真后再叠加,将虚幻的角色或物体叠加到真实的世界中,被人类视觉感官所感知,从而达到超越现实的体验。这样就可以将真实环境和虚幻的物体实时地叠加到同一个空间中。现有的虚拟实现普遍基于动作捕捉系统来识别人体动作,并根据人体动作对虚拟现实角色进行控制,特别是通过人体手臂动作对角色进行控制。例如,基于惯性传感器以及基于计算机视觉等方式识别人体手臂动作。但是,采用上述方式捕捉手臂动作效果不好,例如,采用基于计算机视觉的方式容易受到外界环境干扰较大,比如光照条件、背景和遮挡物等;采用基于惯性传感器的方式受测量噪声和游走误差等因素的影响,无法长时间精确的跟踪。Virtual reality (VR) is a new technology that integrates real world information and virtual world information "seamlessly". It combines real and illusory that could not be experienced in the real world through cutting-edge technologies such as computers. After the simulation and superimposition, the unreal characters or objects are superimposed on the real world, and are perceived by the human visual senses, thereby achieving an experience beyond reality. This allows real-world and illusory objects to be superimposed in the same space in real time. The existing virtual implementation is generally based on a motion capture system to identify human motion, and to control the virtual reality character according to the human body motion, in particular, to control the character through the human arm motion. For example, human body arm motion is recognized based on inertial sensors and based on computer vision. However, the effect of capturing the arm in the above manner is not good. For example, the computer vision-based method is easily interfered by the external environment, such as lighting conditions, background, and obstructions. The inertial sensor-based method is used to measure noise and travel. The influence of factors such as errors cannot be accurately tracked for a long time.
Summary of the Invention
The present invention aims to provide a virtual reality driving method based on arm motion capture, and a virtual reality system.

In order to solve the above technical problem, the technical solution adopted by the present invention is as follows:
A virtual reality driving method based on arm motion capture, comprising:

when the human body wears a motion capture system, initializing a preset posture to obtain initial pose data, wherein the initial pose data includes preset pose data and human body raw data;

capturing real-time posture data of the human body, and determining a first arm posture from the real-time posture data and the initial pose data using an inter-link transformation matrix method for the arm, wherein the first arm posture includes torso joints and arm kinematic-chain joints;

converting the first arm posture into a second arm posture of a preset virtual character according to a preset built-in model, and driving the preset virtual character according to the second arm posture.
The virtual reality driving method based on arm motion capture, wherein the motion capture system includes at least a head-mounted display (HMD), left and right handles, left and right upper-arm trackers, and a torso tracker.
The virtual reality driving method based on arm motion capture, wherein initializing the preset posture to obtain the initial pose data when the human body wears the motion capture system specifically includes:

when the human body wears the motion capture system, capturing preset pose data of the human body in a preset posture, wherein the preset posture includes a first posture and a second posture;

correcting a preset skeleton model according to the preset pose data corresponding to the first posture;

calculating initial human body data from the preset pose data to obtain the initial pose data.
The virtual reality driving method based on arm motion capture, wherein calculating the initial human body data from the preset pose data to obtain the initial pose data specifically includes:

calculating the relative positional relationship of the joints of the upper body according to the preset pose data corresponding to the first posture;

comparing the preset pose data corresponding to the second posture with the preset pose data corresponding to the first posture to calculate the human body raw data, thereby obtaining the initial pose data.
The virtual reality driving method based on arm motion capture, wherein capturing the real-time posture data of the human body and determining the first arm posture from the real-time posture data and the initial pose data using the inter-link transformation matrix method specifically includes:

capturing real-time posture data of the human body, calculating real-time torso joint data according to a preset torso posture formula, and calculating real-time upper-arm position data according to a preset upper-arm posture formula;

determining the shoulder joint position from the initial pose data, and calculating real-time elbow joint data from the shoulder joint data and the shoulder joint transformation matrix, wherein the elbow joint position is offset from the shoulder joint by the upper-arm length along the x-axis of the shoulder joint coordinate system;

calculating the angle between the upper arm and the forearm from the shoulder joint data and the elbow joint data, and calculating forearm pose data from the angle, to obtain the first arm posture.
The virtual reality driving method based on arm motion capture, wherein calculating the angle between the upper arm and the forearm from the shoulder joint data and the elbow joint data, and calculating the forearm pose data from the angle to obtain the first arm posture, specifically includes:

determining a first unit vector pointing from the elbow joint to the shoulder joint based on the shoulder joint data and the real-time elbow joint data, and determining a second unit vector pointing from the elbow joint to the wrist joint based on the real-time elbow joint data and real-time wrist joint data;

calculating the angle between the upper arm and the forearm from the first unit vector and the second unit vector by the law of cosines, and calculating the forearm pose data from the angle, to obtain the first arm posture.
The virtual reality driving method based on arm motion capture, wherein, before initializing the preset posture to obtain the initial pose data when the human body wears the motion capture system, the method includes:

receiving and storing a preset skeleton model, and associating each joint coordinate system of the preset skeleton model with a preset built-in model to obtain a correspondence between the preset skeleton model and the preset built-in model.
The virtual reality driving method based on arm motion capture, wherein converting the first arm posture into the second arm posture of the preset virtual character according to the preset built-in model, and driving the preset virtual character according to the second arm posture, specifically includes:

retargeting the first arm posture into the coordinate system of each joint point of the preset built-in model;

converting the first arm posture into the coordinate system of each joint point of the preset skeleton model according to the correspondence, to obtain the second arm posture;

determining the preset virtual character corresponding to the preset skeleton model according to the second arm posture.
A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in any of the virtual reality driving methods based on arm motion capture described above.
A virtual reality system, comprising: a motion capture system and a virtual reality device, the virtual reality device including a processor, a memory, and a communication bus; the memory storing a computer-readable program executable by the processor;

the communication bus implementing connection and communication between the processor and the memory;

the processor, when executing the computer-readable program, implementing the steps in any of the virtual reality driving methods based on arm motion capture described above.
Advantageous Effects: Compared with the prior art, the present invention provides a virtual reality driving method based on arm motion capture, and a virtual reality system. The method includes: when the human body wears the motion capture system, initializing a preset posture to obtain initial pose data; capturing real-time posture data of the human body, and determining a first arm posture from the real-time posture data and the initial pose data using an inter-link transformation matrix method for the arm, wherein the first arm posture includes torso joints and arm kinematic-chain joints; and converting the first arm posture into a second arm posture of a preset virtual character according to a preset built-in model, and driving the preset virtual character according to the second arm posture. The present application determines arm posture data from the acquired initial pose data and real-time posture data, using the kinematic-chain structure of the arm in the form of transformation matrices between links, which improves the accuracy of arm motion recognition. At the same time, the arm posture data is converted based on the built-in model to drive the motion of a 3D virtual character, ensuring that the spatial positions of the virtual character and its arms remain consistent with those of the real person.
Brief Description of the Drawings
FIG. 1 is a flowchart of an embodiment of the virtual reality driving method based on arm motion capture provided by the present invention.

FIG. 2 is a schematic diagram of wearing the motion capture system in an embodiment of the virtual reality driving method based on arm motion capture provided by the present invention.

FIG. 3 is a schematic diagram of the first posture in an embodiment of the virtual reality driving method based on arm motion capture provided by the present invention.

FIG. 4 is a schematic diagram of the second posture in an embodiment of the virtual reality driving method based on arm motion capture provided by the present invention.

FIG. 5 is a structural schematic diagram of the virtual reality device in an embodiment of the virtual reality system provided by the present invention.
Detailed Description
The present invention provides a virtual reality driving method based on arm motion capture, and a virtual reality system. To make the objectives, technical solutions, and effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the", and "said" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Furthermore, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The phrase "and/or" as used herein includes all or any unit of, and all combinations of, one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art, and will not be interpreted in an idealized or overly formal sense unless specifically so defined herein.
The contents of the invention are further described below through the description of embodiments with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of a preferred embodiment of the virtual reality driving method based on arm motion capture provided by the present invention. The method includes:
S10: When the human body wears the motion capture system, initialize a preset posture to obtain initial pose data, wherein the initial pose data includes preset pose data and human body raw data.
Specifically, the motion capture system is configured to capture human body motion and includes at least a head-mounted display (HMD), left and right handles, left and right upper-arm trackers, and a torso tracker. As shown in FIG. 2, the HMD is worn on the head; the left and right handles are held in the left and right hands, respectively; the left upper-arm tracker is worn on the left upper arm; the right upper-arm tracker is worn on the right upper arm; and the torso tracker is worn on the chest. The HMD collects head posture data, the left and right handles collect wrist joint posture data, and the left and right upper-arm trackers collect shoulder joint posture data.
In addition, the preset posture includes a first posture and a second posture. Initializing the preset posture means that, after the human body puts on the motion capture system, the body assumes the first posture and the second posture in turn; the motion capture system captures first posture data while the body is in the first posture and second posture data while it is in the second posture, and obtains the initial pose data from the first posture data and the second posture data. Correspondingly, initializing the preset posture to obtain the initial pose data when the human body wears the motion capture system specifically includes:
S11: When the human body wears the motion capture system, capture preset pose data of the human body in a preset posture, wherein the preset posture includes a first posture and a second posture.

S12: Correct the preset skeleton model according to the preset pose data corresponding to the first posture.

S13: Calculate initial human body data from the preset pose data to obtain the initial pose data.
Specifically, the first posture is preferably an "I"-shaped posture, and the second posture is preferably a "T"-shaped posture. As shown in FIG. 3 and FIG. 4, in the "T" posture the body stands with both arms stretched out horizontally to the sides, and in the "I" posture the body stands with both arms hanging naturally. The first and second postures may be assumed by the user following prompts, and the preset posture is initialized by pressing a handle button once the posture is held. Furthermore, when the body is in the "I" posture, first head pose data is collected through the HMD and first left/right arm pose data is collected through the left and right arm trackers; when the body is in the "T" posture, second head pose data is collected through the HMD and second left/right arm pose data is collected through the left and right arm trackers, wherein the pose data includes position data and attitude data.
Further, the skeleton model of the virtual character in virtual space is pre-stored and referred to as the preset skeleton model. The coordinate system corresponding to the head position of the preset skeleton model is set as the root coordinate system, and the local coordinate system of each joint is configured relative to the root coordinate system. The head position can be collected through the HMD. The relative attitudes of the ribcage joint and the left and right clavicle joints remain unchanged during motion, so the positions of the ribcage joint and the left and right clavicle joints can be obtained from the torso tracker; the left and right shoulder joint attitudes are obtained from the left and right upper-arm trackers; the relative attitude between each palm and its handle also remains constant during motion, so the palm attitude can be computed from the handle pose. That is, when the body is in the "I" posture, the attitude data of the torso tracker and of the left and right upper-arm trackers can be collected; the torso tracker's attitude data is denoted $q^{I}_{BTracker}$, the left upper-arm tracker's attitude data is denoted $q^{I}_{STrackerL}$, and the right upper-arm tracker's attitude data is denoted $q^{I}_{STrackerR}$, each attitude being represented as a quaternion. After the "I"-posture data is obtained, it is used to correct the preset skeleton model.
In addition, after the first posture data and the second posture data are acquired, the initial human body data can be calculated from the first posture data and the second posture data. Correspondingly, calculating the initial human body data from the preset pose data to obtain the initial pose data specifically includes:
S131: Calculate the relative positional relationship of the joints of the upper body according to the preset pose data corresponding to the first posture.

S132: Compare the preset pose data corresponding to the second posture with the preset pose data corresponding to the first posture to calculate the human body raw data, thereby obtaining the initial pose data.
Specifically, from the first head pose data and the first left/right arm position data acquired in the "I" posture, the distance between the two shoulders (i.e., the body width) and the midpoint of the two shoulders (i.e., the position of the ribcage joint) are calculated, from which the vector from the ribcage joint to the HMD (i.e., from the ribcage joint position to the HMD position) is obtained. From the second head pose data and the second left/right arm position data acquired in the "T" posture, the arm span is calculated, and the arm length is computed from the arm span and the body width as arm length = (arm span − body width) / 2. Once the arm length is known, the lengths of the upper arm and the forearm can be derived according to the national standard GB/T 10000-1988, "Human Dimensions of Chinese Adults". Finally, the body height is calculated from the average of the HMD's z-axis height in the "I" and "T" postures; for example, height = average + an offset, where the offset is preset and can be obtained from a large amount of experimental data. Given the height, the spine length, neck length, leg length, thigh length, and calf length can be calculated according to the proportions in "Human Dimensions of Chinese Adults", thereby obtaining the human body raw data.
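For illustration, the measurement arithmetic above can be sketched in a few lines of Python. This is a sketch only: the function name, the coordinate layout (x pointing sideways, positions in metres), and the 0.55/0.45 upper-arm/forearm split are assumptions — the patent derives the split from GB/T 10000-1988, whose exact ratios are not reproduced here.

```python
def body_measurements(shoulder_l, shoulder_r, wrist_l, wrist_r):
    """Derive body width, arm length, and upper-arm/forearm lengths from
    'I'-pose shoulder positions and 'T'-pose wrist positions."""
    body_width = abs(shoulder_r[0] - shoulder_l[0])  # shoulder-to-shoulder distance ('I' pose)
    arm_span = abs(wrist_r[0] - wrist_l[0])          # wrist-to-wrist distance ('T' pose)
    arm_length = (arm_span - body_width) / 2         # arm length = (span - body width) / 2
    upper_arm = 0.55 * arm_length                    # assumed anthropometric split,
    forearm = 0.45 * arm_length                      # placeholder for the GB/T ratios
    return body_width, arm_length, upper_arm, forearm

bw, al, ua, fa = body_measurements((-0.2, 0.0, 1.4), (0.2, 0.0, 1.4),
                                   (-0.9, 0.0, 1.4), (0.9, 0.0, 1.4))
# bw = 0.4, al = (1.8 - 0.4) / 2 = 0.7
```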
S20: Capture real-time posture data of the human body, and determine the first arm posture from the real-time posture data and the initial pose data using the inter-link transformation matrix method for the arm, wherein the first arm posture includes torso joints and arm kinematic-chain joints.
Specifically, the motion capture system captures the posture data of the human body in real time; the posture data can be collected through the HMD, the left and right handles, the left and right upper-arm trackers, and the torso tracker. That is, the posture data of the head, the left and right palms, the left and right upper arms, and the torso are captured in real time through these devices. After the real-time posture data is acquired, the positions of the joints of the human body can be calculated from the initial pose data and the real-time posture data, the attitudes being represented as quaternions. In other words, the coordinates of the torso and of each joint of the arms are updated based on the initial pose data and the real-time posture data. Correspondingly, capturing the real-time posture data of the human body and determining the first arm posture from the real-time posture data and the initial pose data using the inter-link transformation matrix method specifically includes:
S21: Capture real-time posture data of the human body, calculate real-time torso joint data according to the preset torso posture formula, and calculate real-time upper-arm position data according to the preset upper-arm posture formula.
Specifically, the arm motion is described by rigid-body attitude (rotation), using the quaternion method. Quaternions are used in computer graphics for rotation operations; they support multiplication, inversion, conjugation, and rotation interpolation. A quaternion may be written in the form

$(q_x\ q_y\ q_z\ q_w)$,

which can equivalently be written as

$q = p_v + q_w = q_x i + q_y j + q_z k + q_w$,

where $q$ denotes the quaternion, $p_v$ is the imaginary part, representing the vector $(q_x\ q_y\ q_z)$, and $q_w$ is the real part.

A rotation by an angle $\theta$ about a unit vector $(x\ y\ z)$ can be expressed by the quaternion

$$q = \left(x\sin\tfrac{\theta}{2},\ y\sin\tfrac{\theta}{2},\ z\sin\tfrac{\theta}{2},\ \cos\tfrac{\theta}{2}\right).$$
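The axis-angle construction above can be illustrated directly; the $(x, y, z, w)$ component order follows the text, and the function name is illustrative.

```python
import math

def axis_angle_to_quaternion(axis, theta):
    """Quaternion (qx, qy, qz, qw) for a rotation of theta radians
    about the unit vector axis, per the axis-angle formula."""
    x, y, z = axis
    s = math.sin(theta / 2.0)
    return (x * s, y * s, z * s, math.cos(theta / 2.0))

# 90 degrees about the z-axis:
q = axis_angle_to_quaternion((0.0, 0.0, 1.0), math.pi / 2.0)
```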
Further, a rigid-body transformation considers the rigid body's position information and attitude information together, expressed as a transformation matrix, usually a 4×4 homogeneous transform:

$${}^{A}_{B}T = \begin{bmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ w_x & w_y & w_z & 0 \\ p_x & p_y & p_z & 1 \end{bmatrix}$$

where ${}^{A}_{B}T$ denotes the transformation matrix describing rigid body B in coordinate system A (for example, ${}^{world}_{shoulder}T$ denotes the transformation matrix of the shoulder joint in the world coordinate system), $(p_x\ p_y\ p_z)$ represents the rigid body's position information, and the upper-left $3\times 3$ rotation block $R$ represents its attitude information.

The transformation matrix can also be read as the rigid body's local coordinate system: in the expression above, $(p_x\ p_y\ p_z)$ is the vector from the origin of the world coordinate system to the rigid body, and each row of $R$ expresses one of its orthogonal coordinate axes in the parent frame — $(u_x\ u_y\ u_z)$ is the vector of its x-axis, $(v_x\ v_y\ v_z)$ the vector of its y-axis, and $(w_x\ w_y\ w_z)$ the vector of its z-axis. If the coordinate axes of two rigid bodies coincide and their relative position and attitude remain unchanged, then when one rigid body moves, the other performs the same motion in the same coordinate system.
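For illustration, a homogeneous transform can be assembled and applied as below. This sketch uses the common column-vector convention (rotation in the upper-left 3×3 block, position in the last column), which is the transpose of the row layout described in the text; the function names are illustrative.

```python
def homogeneous(R, p):
    """Build a 4x4 homogeneous transform (nested lists) from a 3x3
    rotation R and a position p, column-vector convention."""
    return [[R[0][0], R[0][1], R[0][2], p[0]],
            [R[1][0], R[1][1], R[1][2], p[1]],
            [R[2][0], R[2][1], R[2][2], p[2]],
            [0.0, 0.0, 0.0, 1.0]]

def transform_point(T, point):
    """Express a point given in frame B in frame A, where T describes B in A."""
    v = list(point) + [1.0]  # homogeneous coordinates
    return [sum(T[i][j] * v[j] for j in range(4)) for i in range(3)]

# Frame B: rotated 90 degrees about z, origin at (1, 2, 3) in frame A.
Rz90 = [[0.0, -1.0, 0.0],
        [1.0,  0.0, 0.0],
        [0.0,  0.0, 1.0]]
T = homogeneous(Rz90, (1.0, 2.0, 3.0))
p_A = transform_point(T, (1.0, 0.0, 0.0))  # -> [1.0, 3.0, 3.0]
```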
Correspondingly, following the rigid-body transformation process, the preset torso posture formula may be:

$$q_{Body} = q_{BTracker}\cdot\left(q^{I}_{BTracker}\right)^{-1}\cdot q^{I}_{Body}$$

where $q_{BTracker}$ is the real-time attitude data of the torso tracker and $q_{Body}$ is the real-time attitude of the ribcage joint; $q^{I}_{Body}$ is the initial ribcage joint attitude data acquired in the "I" posture, and $q^{I}_{BTracker}$ is the initial attitude data of the torso tracker acquired in the "I" posture.
The upper-arm real-time pose formula may be:

$$q_{Shoulder} = q_{STracker}\cdot\left(q^{I}_{STracker}\right)^{-1}\cdot q^{I}_{Shoulder}$$

where $q_{Shoulder}$ is the real-time upper-arm pose data, $q_{STracker}$ is the real-time attitude of the upper-arm tracker, $q^{I}_{Shoulder}$ is the initial upper-arm attitude data acquired in the "I" posture, and $q^{I}_{STracker}$ is the initial upper-arm tracker pose data acquired in the "I" posture. In addition, there are two upper-arm trackers, one on each arm; their real-time attitude data, collected by the left and right trackers, may be written $q_{STrackerL}$ and $q_{STrackerR}$ for the left and right upper-arm trackers respectively, but is denoted uniformly as $q_{STracker}$ here.
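The torso and upper-arm formulas share one calibration pattern: apply the tracker's rotation since the "I"-pose calibration to the joint's initial attitude. A minimal sketch, assuming that pattern takes the form q_joint = q_tracker · (q_tracker_init)⁻¹ · q_joint_init with unit quaternions in (x, y, z, w) order; all function names are illustrative.

```python
def q_mul(a, b):
    """Hamilton product of quaternions in (x, y, z, w) order."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def q_inv(q):
    """Inverse of a unit quaternion is its conjugate."""
    x, y, z, w = q
    return (-x, -y, -z, w)

def joint_attitude(q_tracker, q_tracker_init, q_joint_init):
    """q_joint = q_tracker * inv(q_tracker_init) * q_joint_init:
    the tracker's rotation since calibration applied to the joint's
    'I'-pose attitude."""
    return q_mul(q_mul(q_tracker, q_inv(q_tracker_init)), q_joint_init)

# If the tracker has not moved since calibration, the joint keeps its
# initial attitude (the identity quaternion here):
q_init = (0.0, 0.0, 0.70710678, 0.70710678)  # ~90 degrees about z
q_body = joint_attitude(q_init, q_init, (0.0, 0.0, 0.0, 1.0))
```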
Further, after the torso and upper-arm attitude data are obtained, the torso and upper-arm positions can be offset-adjusted relative to the ribcage position, the adjustment value being half the body width. The ribcage position equals the HMD position plus the HMD-to-ribcage vector. The left shoulder joint position (i.e., the left upper-arm position) is obtained from the ribcage position plus half the body width, and the right shoulder joint position from the ribcage position minus half the body width. In addition, from the head pose data collected directly by the HMD, the positions of the neck, ribcage, and torso can be obtained from their relative positional relationships to the head, and the attitudes of the left and right clavicles are kept consistent with the torso.
S22: Determine the shoulder joint position from the initial pose data, and calculate real-time elbow joint data from the shoulder joint data and the shoulder joint transformation matrix, wherein the elbow joint position is offset from the shoulder joint by the upper-arm length along the x-axis of the shoulder joint coordinate system.
Specifically, the shoulder joint position (i.e., the upper-arm start position) $p_{Shoulder}$ can be read from the initial pose data, and the real-time shoulder joint position is obtained from the ribcage centre position ± half the body width, where + gives the left shoulder joint position and − the right shoulder joint position. Correspondingly, the shoulder joint position may be calculated as:

$$p_{Shoulder} = p_{Ribcage} \pm {}^{world}_{Body}T\cdot\left(\tfrac{1}{2}L_{bodyWidth},\ 0,\ 0\right)$$

where ${}^{world}_{Body}T$ is the transformation matrix through which the half-body-width offset is applied, $p_{Ribcage}$ is the ribcage position, and $L_{bodyWidth}$ is the body width.
Further, the elbow joint is a child node of the shoulder joint; the elbow position p_elbow is offset from the shoulder by the upper-arm length along the x axis of the shoulder joint coordinate system, and is obtained by the following formula:
Figure PCTCN2018097078-appb-000020
where p_elbow is the elbow joint position, p_shoulder is the shoulder joint position,
Figure PCTCN2018097078-appb-000021
is the transformation matrix of the shoulder frame in the world coordinate system, and L_Upperarm is the upper-arm length.
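A minimal Python sketch of the elbow computation above (illustrative only; the rotation matrix stands in for the world-frame shoulder transformation, and all names are assumptions):

```python
import numpy as np

def elbow_position(p_shoulder, r_shoulder_world, upper_arm_length):
    """Elbow = shoulder position plus the upper-arm length offset along
    the shoulder frame's x axis, expressed in world coordinates."""
    local_offset = np.array([upper_arm_length, 0.0, 0.0])
    return p_shoulder + r_shoulder_world @ local_offset

# With the shoulder frame rotated 90 degrees about z, the shoulder's
# x axis points along the world y axis, so the elbow hangs along y.
rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
p_elbow = elbow_position(np.array([0.2, 1.4, 0.0]), rz90, 0.3)
```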
S23. Calculate the angle between the upper arm and the forearm from the shoulder joint data and the elbow joint data, and calculate the forearm pose data from this angle, to obtain the first arm posture.
Specifically, the elbow joint coordinate system is the same as the shoulder joint coordinate system; the elbow is a revolute joint with only one rotational degree of freedom. That is, the forearm can rotate only about the z axis of the elbow joint, so determining the angle θ between the upper arm and the forearm determines the forearm pose. In this embodiment, the local coordinate systems of the elbow joint and the shoulder joint are the same, and the joint angle satisfies α_elbow = 180° − θ. Correspondingly, the elbow joint posture may be calculated as:
q_elbow = (α_elbow + q_elbow.yaw).toquaternions
where
Figure PCTCN2018097078-appb-000022
is the elbow joint posture in the "I"-shaped posture, q_elbow.yaw is the rotation angle about the z axis in that posture, and toquaternions converts the Euler-angle representation into a quaternion.
The angle θ between the upper arm and the forearm can be calculated from the shoulder position p_shoulder, the elbow position p_elbow, and the handle position p_hand:

θ = arccos(V_e2s · V_e2h)
where V_e2s = p_shoulder − p_elbow, normalized, is the unit vector from the elbow joint toward the shoulder joint, and V_e2h = p_hand − p_elbow, normalized, is the unit vector from the elbow joint toward the wrist joint.
Correspondingly, α_elbow may be calculated as:

α_elbow = 180° − arccos(V_e2s · V_e2h)
Thus, adding α_elbow to the original elbow joint posture yields the new elbow joint posture, completing the capture of the elbow joint posture.
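The elbow-angle computation above can be sketched as follows (illustrative Python only; function and variable names are assumptions, and the vectors are explicitly normalized before the arccos, as the unit-vector definitions require):

```python
import numpy as np

def elbow_angles(p_shoulder, p_elbow, p_hand):
    """Angle theta between upper arm and forearm from three joint
    positions, following theta = arccos(V_e2s . V_e2h), and the
    derived joint angle alpha_elbow = 180 deg - theta."""
    v_e2s = p_shoulder - p_elbow           # elbow -> shoulder
    v_e2h = p_hand - p_elbow               # elbow -> wrist/handle
    v_e2s = v_e2s / np.linalg.norm(v_e2s)  # normalize to unit vectors
    v_e2h = v_e2h / np.linalg.norm(v_e2h)
    # Clip guards against floating-point values just outside [-1, 1].
    theta = np.degrees(np.arccos(np.clip(np.dot(v_e2s, v_e2h), -1.0, 1.0)))
    alpha = 180.0 - theta
    return theta, alpha

# A right angle at the elbow: theta = 90 degrees, so alpha_elbow = 90.
theta, alpha = elbow_angles(np.array([0.0, 1.0, 0.0]),
                            np.zeros(3),
                            np.array([1.0, 0.0, 0.0]))
```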
示例性的,所述根据所述肩关节数据以及肘关节数据计算上臂与前臂的夹角,并根据所述夹角计算前臂位姿数据,以得到第一手臂姿态具体包括:Illustratively, the calculating the angle between the upper arm and the forearm according to the shoulder joint data and the elbow joint data, and calculating the forearm posture data according to the included angle to obtain the first arm posture specifically includes:
Determining the first unit vector from the elbow joint toward the shoulder joint according to the shoulder joint data and the real-time elbow joint data, and determining the second unit vector from the elbow joint toward the wrist joint according to the real-time elbow joint data and the real-time wrist joint data;
根据所述第一单位向量和第二单位向量通过余弦定理计算上臂与前臂的夹角,并根据所述夹角计算前臂位姿数据,以得到第一手臂姿态。Calculating an angle between the upper arm and the forearm by using a cosine theorem according to the first unit vector and the second unit vector, and calculating forearm pose data according to the included angle to obtain a first arm posture.
S30、根据预设内置模型将所述第一手臂姿态转换为预设虚拟角色的第二手臂姿态,并根据所述第二手臂姿态驱动预设虚拟角色。S30. Convert the first arm posture into a second arm posture of the preset virtual character according to the preset built-in model, and drive the preset virtual character according to the second arm posture.
Specifically, the preset built-in model is set in advance and is independent of the local coordinate systems of the joints in the preset skeleton model; it uses a forward axis, a horizontal axis, and a vertical axis to form an orthonormal basis. The terminal joints of each preset skeleton model pre-stored in the virtual reality system can be labeled with the coordinate axes of the built-in model, so that differences between the local coordinate axes of different skeleton models can be ignored. Correspondingly, before the step of initializing the preset posture to obtain initial pose data when the human body wears the motion capture system, the method includes:
S030、接收并存储预设骨骼模型,并将所述预设骨骼模型的各关节坐标系与预设内置模型进行关联,以得到预设骨骼模型与预设内置模型的对应关系。S030. Receive and store a preset skeleton model, and associate each joint coordinate system of the preset skeleton model with a preset built-in model to obtain a correspondence between the preset skeleton model and the preset built-in model.
Specifically, the built-in model corresponds to each joint of the preset skeleton model. The joint names of the imported preset skeleton model are read, and a correspondence is established between the coordinate system of the built-in model and the coordinate system of each joint of the preset skeleton model. In this way, when captured real-time posture data is imported into the preset built-in model, it can be automatically mapped onto the preset skeleton model, so that the virtual character corresponding to that skeleton model is controlled. In practical applications, multiple skeleton models may be preset in the virtual reality system; the joint coordinate systems of these models differ, but the models share some common properties. These common properties can be extracted and used to generate the built-in skeleton model. 
For example, in every imported skeleton model the head is above the feet, so the upward vector of the preset built-in model can be determined; the left hand is to the left of the right hand, so the rightward direction of the model can be determined; and the forward vector can be obtained from the upward and rightward vectors by a cross product, which fixes the orthonormal basis of the built-in skeleton model. In this embodiment, the built-in model determines its bone joints according to the correspondence; each bone joint of the built-in model may consist of three custom data structures, each containing an axis type and a flag, where the type indicates which coordinate axis the entry belongs to and the flag indicates the direction relationship between that axis and the built-in model. The flag may be 1 or −1, with 1 indicating the same direction and −1 the opposite direction.
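The basis construction described above (up from head-over-feet, right from left-hand/right-hand, forward via cross product) can be sketched in Python. Note the sign of the cross product depends on the handedness convention chosen; this sketch and its names are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def model_basis(up, right):
    """Build the built-in model's orthonormal basis: normalize the up
    and right axes, then derive forward as their cross product."""
    up = up / np.linalg.norm(up)
    right = right / np.linalg.norm(right)
    forward = np.cross(up, right)  # sign follows the handedness convention
    return forward, right, up

# With up = +y and right = +x, np.cross gives forward = -z here.
forward, right_axis, up_axis = model_basis(np.array([0.0, 1.0, 0.0]),
                                           np.array([1.0, 0.0, 0.0]))
```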
Further, the correspondence between each joint of the preset skeleton model and the built-in model can be established in turn. That is, when the spatial information of the joint points of the preset skeleton model is first read, the spatial information of each joint point is associated with the preset built-in model; the built-in model is used to label the joint-point coordinate axes of the different imported skeleton model resources, using the data structure described above. For example, for the ribcage joint point, the forward axis of the built-in model corresponds to the y axis of the ribcage joint (opposite direction), the horizontal axis corresponds to its x axis (opposite direction), and the vertical axis corresponds to its z axis (same direction), which establishes the correspondence between the preset built-in model and the ribcage joint of the preset skeleton model. Likewise, the coordinate axes of the other joint points of the preset skeleton model are compared with the built-in model's axes, and for each joint point the correspondence between its axes and the three axes of the built-in model (forward, horizontal, and vertical), together with their directions, is recorded, thereby establishing the correspondence between the preset skeleton model and the preset built-in model.
In addition, after the correspondence between the preset built-in model and the preset skeleton model is established, the acquired arm posture data can be retargeted through the preset built-in model so that it corresponds to the matching preset skeleton model. In this way, even for different preset skeleton models whose joint coordinate systems differ from those of the captured data, the captured data can still be converted into the joint coordinate systems of the preset skeleton model. Correspondingly, converting the first arm posture into the second arm posture of the preset virtual character according to the preset built-in model, and driving the preset virtual character according to the second arm posture, specifically includes:
S31、将所述第一手臂姿态重定向至预设内置模型各关节点坐标系统内;S31, redirecting the first arm posture to a coordinate system of each joint point of the preset built-in model;
S32、根据所述对应关系将所述第一手臂姿态转换至预设骨骼模型的各关节点坐标系内,以得到第二手臂姿态;S32. Convert the first arm posture to each joint point coordinate system of the preset bone model according to the correspondence relationship to obtain a second arm posture;
S33、根据所述第二手臂姿态确定所述预设骨骼模型对应的预设虚拟角色。S33. Determine a preset virtual character corresponding to the preset skeleton model according to the second arm posture.
Specifically, the pose data (position and posture data) of each joint point contained in the first arm posture is acquired; the x, y, and z axes of each joint's pose data are matched to the coordinate axes of the preset built-in model, and, according to the correspondence between the built-in model's axes and the joint axes of the preset skeleton model, the axes of the captured data corresponding to the built-in model's axes are converted into the axes of the preset skeleton model. The first arm posture is thereby converted into the second arm posture, which drives the virtual character corresponding to the preset skeleton model. In this embodiment, the retargeting of the first arm posture may proceed as follows: the coordinate axes of the captured data are matched to the coordinate axes of the preset built-in model, and the coordinate system of the captured data is redirected. 
Concretely, for each joint point, the three axes of the captured data are matched against the three axes of the built-in model, mapping the built-in model's forward, horizontal, and vertical axes onto the captured data's axes: if the forward axis is labeled x, vector x = the captured data's x-axis vector; if it is labeled y, vector x = the captured data's y-axis vector; if it is labeled z, vector x = the captured data's z-axis vector. Similarly, if the horizontal axis is labeled x, y, or z, vector y = the captured data's x-, y-, or z-axis vector, respectively; and if the vertical axis is labeled x, y, or z, vector z = the captured data's x-, y-, or z-axis vector, respectively. Finally, vectors x, y, and z form the new coordinate axes, completing the redirection of the captured data.
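The axis-relabeling procedure above can be sketched as a small lookup in Python, using the type-plus-flag structure from the earlier description. This is an illustrative sketch under assumed names, not the patent's implementation:

```python
import numpy as np

def retarget_axes(captured, mapping):
    """Reassemble a joint's axes in built-in-model order. `captured`
    maps 'x'/'y'/'z' to that joint's captured-data axis vectors;
    `mapping` gives, for each built-in axis (forward/horizontal/
    vertical), the captured axis label it uses and a sign flag
    (1 = same direction, -1 = opposite direction)."""
    new_axes = []
    for builtin_axis in ("forward", "horizontal", "vertical"):
        label, sign = mapping[builtin_axis]
        new_axes.append(sign * np.asarray(captured[label], dtype=float))
    return np.stack(new_axes)

# The ribcage example from the text: forward <- -y, horizontal <- -x,
# vertical <- +z of the captured joint frame.
captured = {"x": [1, 0, 0], "y": [0, 1, 0], "z": [0, 0, 1]}
mapping = {"forward": ("y", -1), "horizontal": ("x", -1), "vertical": ("z", 1)}
axes = retarget_axes(captured, mapping)
```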
Based on the above virtual reality driving method based on arm motion capture, the present application further provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the steps of the virtual reality driving method based on arm motion capture described in the above embodiments.
基于上述基于手臂动作捕捉的虚拟现实驱动方法,本发明还提供了一种虚拟现实系统,其包括:动作捕捉系统以及虚拟现实设备,如图5所示,所述虚拟现实设备包括至少一个处理器(processor)20;显示屏21;以及存储器(memory)22,还可以包括通信接口(Communications Interface)23和总线24。其中,处理器20、显示屏21、存储器22和通信接口23可以通过总线24完成相互间的通信。显示屏21设置为显示初始设置模式中预设的用户引导界面。通信接口23可以传输信息。处理器20可以调用存储器22中的逻辑指令,以执行上述实施例中的方法。Based on the above-described virtual reality driving method based on arm motion capture, the present invention further provides a virtual reality system, including: a motion capture system and a virtual reality device, as shown in FIG. 5, the virtual reality device includes at least one processor (processor) 20; display 21; and memory 22, which may also include a communication interface 23 and a bus 24. Among them, the processor 20, the display screen 21, the memory 22, and the communication interface 23 can complete communication with each other through the bus 24. The display screen 21 is set to display a user guidance interface preset in the initial setting mode. The communication interface 23 can transmit information. Processor 20 may invoke logic instructions in memory 22 to perform the methods in the above-described embodiments.
此外,上述的存储器22中的逻辑指令可以通过软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。In addition, the logic instructions in the memory 22 described above may be implemented in the form of software functional units and sold or used as separate products, and may be stored in a computer readable storage medium.
存储器22作为一种计算机可读存储介质,可设置为存储软件程序、计算机可执行程序,如本公开实施例中的方法对应的程序指令或模块。处理器20通过运行存储在存储器22中的软件程序、指令或模块,从而执行功能应用以及数据处理,即实现上述实施例中的方法。The memory 22 is a computer readable storage medium, and can be configured to store a software program, a computer executable program, a program instruction or a module corresponding to the method in the embodiment of the present disclosure. The processor 20 performs the functional application and data processing by executing software programs, instructions or modules stored in the memory 22, i.e., implements the methods in the above embodiments.
The memory 22 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code; it may also be a transitory storage medium.
此外,上述存储介质以及移动终端中的多条指令处理器加载并执行的具体过程在上述方法中已经详细说明,在这里就不再一一陈述。In addition, the above-described storage medium and the specific processes loaded and executed by the plurality of instruction processors in the mobile terminal have been described in detail in the above methods, and will not be further described herein.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. 一种基于手臂动作捕捉的虚拟现实驱动方法,其特征在于,其包括:A virtual reality driving method based on arm motion capture, characterized in that it comprises:
    当人体穿戴动作捕捉系统时,初始化预设姿态以得初始位姿数据,其中,所述初始位姿数据包括预设位姿数据及人体原始数据;When the human body wears the motion capture system, initializing the preset posture to obtain initial pose data, wherein the initial pose data includes preset pose data and human body raw data;
    捕捉人体的实时姿态数据,根据实时姿态数据以及初始位姿数据以手臂连杆件间变换矩阵方法确定第一手臂姿态,其中,所述第一手臂姿态包括躯干关节以及手臂运动链关节;Capturing the real-time posture data of the human body, determining the first arm posture according to the real-time attitude data and the initial pose data by using an arm-to-link transformation matrix method, wherein the first arm posture includes a trunk joint and an arm motion chain joint;
    根据预设内置模型将所述第一手臂姿态转换为预设虚拟角色的第二手臂姿态,并根据所述第二手臂姿态驱动预设虚拟角色。Converting the first arm posture into a second arm posture of a preset virtual character according to a preset built-in model, and driving the preset virtual character according to the second arm posture.
  2. 根据权利要求1所述基于手臂动作捕捉的虚拟现实驱动方法,其特征在于,所述动作捕捉系统至少包括头显、左右手柄、左右上臂追踪器以及躯干追踪器。The virtual reality driving method based on arm motion capture according to claim 1, wherein the motion capture system comprises at least a head display, a left and right handle, a left and right upper arm tracker, and a torso tracker.
  3. 根据权利要求1所述基于手臂动作捕捉的虚拟现实驱动方法,其特征在于,所述当人体穿戴动作捕捉系统时,初始化预设姿态以得初始位姿数据具体包括:The virtual reality driving method based on the arm motion capture according to claim 1, wherein when the human body wears the motion capture system, initializing the preset posture to obtain the initial pose data specifically includes:
    当人体穿戴动作捕捉系统时,捕捉人体处于预设姿态的预设位姿数据,其中,所述预设姿态包括第一姿态和第二姿态;When the human body wears the motion capture system, capturing preset posture data of the human body in a preset posture, wherein the preset posture includes a first posture and a second posture;
    根据所述第一姿态对应的预设位姿数据校正预设骨骼模型;Correcting a preset skeleton model according to the preset pose data corresponding to the first posture;
    根据所述预设位姿数据计算人体初始数据,以得到初始位姿数据。Calculating initial body data based on the preset pose data to obtain initial pose data.
  4. 根据权利要求3所述基于手臂动作捕捉的虚拟现实驱动方法,其特征在于,所述根据所述预设位姿数据计算人体初始数据,以得到初始位姿数据具体包括:The virtual reality driving method based on the arm motion capture according to claim 3, wherein the calculating the initial body data according to the preset pose data to obtain the initial pose data specifically includes:
    根据所述第一姿态对应的预设位姿数据计算上半身各关节的相对位置关系;Calculating a relative positional relationship of each joint of the upper body according to the preset pose data corresponding to the first posture;
    将所述第二姿态对应的预设位姿数据与所述第一姿态对应的预设位姿数据相比较来计算人体原始数据,以得到初始位姿数据。Comparing the preset pose data corresponding to the second posture with the preset pose data corresponding to the first posture to calculate human body raw data to obtain initial pose data.
5. The virtual reality driving method based on arm motion capture according to claim 1, wherein capturing the real-time posture data of the human body and determining the first arm posture from the real-time posture data and the initial pose data by the transformation-matrix method between arm links specifically comprises:
    捕捉人体的实时姿态数据,根据预设躯干姿态公式计算躯干关节实时数据,并根据预设上臂姿态公式计算上臂实时位置数据;Capturing the real-time posture data of the human body, calculating the real-time data of the trunk joint according to the preset torso posture formula, and calculating the real-time position data of the upper arm according to the preset upper arm posture formula;
Determining the shoulder joint position according to the initial pose data, and calculating real-time elbow joint data according to the shoulder joint data and the shoulder joint transformation matrix, wherein the elbow joint position is offset from the shoulder by the upper-arm length along the X axis of the shoulder joint coordinate system;
    根据所述肩关节数据以及肘关节数据计算上臂与前臂的夹角,并根据所述夹角计算前臂位姿数据,以得到第一手臂姿态。Calculating an angle between the upper arm and the forearm according to the shoulder joint data and the elbow joint data, and calculating forearm pose data according to the included angle to obtain a first arm posture.
  6. 根据权利要求5所述基于手臂动作捕捉的虚拟现实驱动方法,其特征在于,所述根据所述肩关节数据以及肘关节数据计算上臂与前臂的夹角,并根据所述夹角计算前臂位姿数据,以得到第一手臂姿态具体包括:The virtual reality driving method based on arm motion capture according to claim 5, wherein the angle between the upper arm and the forearm is calculated according to the shoulder joint data and the elbow joint data, and the forearm posture is calculated according to the angle The data to get the first arm pose specifically includes:
Determining the first unit vector from the elbow joint toward the shoulder joint according to the shoulder joint data and the real-time elbow joint data, and determining the second unit vector from the elbow joint toward the wrist joint according to the real-time elbow joint data and the real-time wrist joint data;
    根据所述第一单位向量和第二单位向量通过余弦定理计算上臂与前臂的夹角,并根据所述夹角计算前臂位姿数据,以得到第一手臂姿态。Calculating an angle between the upper arm and the forearm by using a cosine theorem according to the first unit vector and the second unit vector, and calculating forearm pose data according to the included angle to obtain a first arm posture.
7. The virtual reality driving method based on arm motion capture according to claim 1, wherein before initializing the preset posture to obtain initial pose data when the human body wears the motion capture system, the method comprises:
    接收并存储预设骨骼模型,并将所述预设骨骼模型的各关节坐标系与预设内置模型进行关联,以得到预设骨骼模型与预设内置模型的对应关系。The preset skeleton model is received and stored, and each joint coordinate system of the preset skeleton model is associated with the preset built-in model to obtain a correspondence between the preset skeleton model and the preset built-in model.
  8. 根据权利要求7所述基于手臂动作捕捉的虚拟现实驱动方法,其特征在于,所述根据预设内置模型将所述第一手臂姿态转换为预设虚拟角色的第二手臂姿态,并根据所述第二手臂姿态驱动预设虚拟角色具体包括:The virtual reality driving method based on arm motion capture according to claim 7, wherein the first arm posture is converted into a second arm posture of a preset virtual character according to a preset built-in model, and according to the The second arm gesture driving the preset virtual character specifically includes:
    将所述第一手臂姿态重定向至预设内置模型各关节点坐标系统内;Redirecting the first arm posture to a coordinate system of each joint point of the preset built-in model;
    根据所述对应关系将所述第一手臂姿态转换至预设骨骼模型的各关节点坐标系内,以得到第二手臂姿态;Converting the first arm posture into each joint point coordinate system of the preset skeleton model according to the correspondence relationship to obtain a second arm posture;
    根据所述第二手臂姿态确定所述预设骨骼模型对应的预设虚拟角色。Determining a preset virtual character corresponding to the preset skeleton model according to the second arm posture.
  9. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个处理器执行,以实现如权利要求1~8任意一项所述的基于手臂动作捕捉的虚拟现实驱动方法中的步骤。A computer readable storage medium, wherein the computer readable storage medium stores one or more programs, the one or more programs being executable by one or more processors to implement claim 1 The steps in the virtual reality driving method based on arm motion capture according to any one of the above.
10. A virtual reality system, comprising: a motion capture system and a virtual reality device, the virtual reality device comprising a processor, a memory, and a communication bus; the memory storing a computer-readable program executable by the processor;
    the communication bus implementing connection and communication between the processor and the memory;
    the steps of the virtual reality driving method based on arm motion capture according to any one of claims 1-8 being implemented when the processor executes the computer-readable program.
PCT/CN2018/097078 2018-05-18 2018-07-25 Virtual reality driving method based on arm motion capture, and virtual reality system WO2019218457A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810479630.X 2018-05-18
CN201810479630.XA CN108762495B (en) 2018-05-18 2018-05-18 Virtual reality driving method based on arm motion capture and virtual reality system

Publications (1)

Publication Number Publication Date
WO2019218457A1 true WO2019218457A1 (en) 2019-11-21

Family

ID=64007279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097078 WO2019218457A1 (en) 2018-05-18 2018-07-25 Virtual reality driving method based on arm motion capture, and virtual reality system

Country Status (2)

Country Link
CN (1) CN108762495B (en)
WO (1) WO2019218457A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109814714B (en) * 2019-01-21 2020-11-20 北京诺亦腾科技有限公司 Method and device for determining installation posture of motion sensor and storage medium
CN110327048B (en) * 2019-03-11 2022-07-15 浙江工业大学 Human upper limb posture reconstruction system based on wearable inertial sensor
CN110269623A (en) * 2019-06-24 2019-09-24 京东方科技集团股份有限公司 Method for determining speed and device, virtual reality display methods and device
CN110780738B (en) * 2019-10-17 2023-07-04 深圳市创凯智能股份有限公司 Virtual reality simulation walking method, device, equipment and readable storage medium
CN110930483B (en) * 2019-11-20 2020-11-24 腾讯科技(深圳)有限公司 Role control method, model training method and related device
CN111079616B (en) * 2019-12-10 2022-03-04 西安电子科技大学 Single-person movement posture correction method based on neural network
CN111382194A (en) * 2020-03-09 2020-07-07 北京如影智能科技有限公司 Method and device for acquiring mechanical arm control data
CN111539299B (en) * 2020-04-20 2024-03-01 上海曼恒数字技术股份有限公司 Human motion capturing method, device, medium and equipment based on rigid body
CN111880657B (en) * 2020-07-30 2023-04-11 北京市商汤科技开发有限公司 Control method and device of virtual object, electronic equipment and storage medium
CN112571416B (en) * 2020-12-10 2022-03-22 北京石油化工学院 Coordinate system calibration method suitable for robot system and motion capture system
CN112818898B (en) * 2021-02-20 2024-02-20 北京字跳网络技术有限公司 Model training method and device and electronic equipment
CN113190112A (en) * 2021-04-08 2021-07-30 深圳市瑞立视多媒体科技有限公司 Method for driving target model by extensible data glove and related device
CN113205557B (en) * 2021-05-20 2022-07-15 上海曼恒数字技术股份有限公司 Whole body posture reduction method and system
CN113967910B (en) * 2021-09-22 2023-03-24 香港理工大学深圳研究院 Man-machine cooperative control method and system based on augmented reality and digital twins
CN114089833A (en) * 2021-11-23 2022-02-25 清华大学 Method and system for quantifying ownership of virtual reality body and electronic equipment
CN116394265B (en) * 2023-06-08 2023-11-07 帕西尼感知科技(张家港)有限公司 Attitude sensor calibration method, attitude sensor calibration device, attitude sensor calibration equipment and storage medium
CN116501175B (en) * 2023-06-25 2023-09-22 江西格如灵科技股份有限公司 Virtual character moving method, device, computer equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102323854A (en) * 2011-03-11 2012-01-18 中国科学院研究生院 Human motion capture device
CN105252532A (en) * 2015-11-24 2016-01-20 山东大学 Method of cooperative flexible attitude control for motion capture robot
CN105904457A (en) * 2016-05-16 2016-08-31 西北工业大学 Heterogeneous redundant mechanical arm control method based on position tracker and data glove
CN107818318A (en) * 2017-11-27 2018-03-20 华南理工大学 A kind of anthropomorphic robot imitates method for evaluating similarity

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10011890C2 (en) * 2000-03-03 2003-04-24 Jena Optronik Gmbh Method for determining the state variables of a moving rigid body in space
FR2870618B1 (en) * 2004-05-21 2007-04-06 Kenneth Kuk Kei Wang METHOD FOR ACQUIRING AND MANAGING ON A NETWORK OF PERSONAL MORPHOLOGICAL DATA COMPUTERS AND DEVICE ADAPTED FOR CARRYING OUT SAID METHOD
US20120095596A1 (en) * 2010-10-14 2012-04-19 Special Applications Technology, Inc. Modular apparatuses
CN102672719B (en) * 2012-05-10 2014-11-19 浙江大学 Dynamic stability control method for operation of humanoid robot arm
CN103112007B (en) * 2013-02-06 2015-10-28 华南理工大学 Based on the man-machine interaction method of hybrid sensor
CN106313049B (en) * 2016-10-08 2017-09-26 华中科技大学 A kind of apery mechanical arm motion sensing control system and control method

Also Published As

Publication number Publication date
CN108762495B (en) 2021-06-29
CN108762495A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
WO2019218457A1 (en) Virtual reality driving method based on arm motion capture, and virtual reality system
JP7273880B2 (en) Virtual object driving method, device, electronic device and readable storage medium
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
CN110480634B (en) Arm guide motion control method for mechanical arm motion control
JP4149213B2 (en) Pointed position detection device and autonomous robot
Riley et al. Enabling real-time full-body imitation: a natural way of transferring human movement to humanoids
WO2022002133A1 (en) Gesture tracking method and apparatus
CN110570455A (en) Whole body three-dimensional posture tracking method for room VR
US10976863B1 (en) Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user
CN103529944A (en) Human body movement identification method based on Kinect
CN102350700A (en) Method for controlling robot based on visual sense
JP2015102913A (en) Attitude estimation apparatus and attitude estimation method
CN110609621B (en) Gesture calibration method and human motion capture system based on microsensor
CN109781104B (en) Motion attitude determination and positioning method and device, computer equipment and medium
Maycock et al. Robust tracking of human hand postures for robot teaching
CN114503057A (en) Orientation determination based on both image and inertial measurement units
JP4765075B2 (en) Object position and orientation recognition system using stereo image and program for executing object position and orientation recognition method
CN115469576A (en) Teleoperation system based on human-mechanical arm heterogeneous motion space hybrid mapping
JP2009258884A (en) User interface
Xiang et al. Comparing real-time human motion capture system using inertial sensors with Microsoft Kinect
JP6455869B2 (en) Robot, robot system, control device, and control method
KR102456872B1 (en) System and method for tracking hand motion using strong coupling fusion of image sensor and inertial sensor
CN114954723A (en) Humanoid robot
CN114546135A (en) Method and system for virtual walking based on inertial sensor
JP2004163990A5 (en)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919159

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18919159

Country of ref document: EP

Kind code of ref document: A1