WO2022002132A1 - Multi-sensor handle controller hybrid tracking method and device - Google Patents

Multi-sensor handle controller hybrid tracking method and device

Info

Publication number
WO2022002132A1
WO2022002132A1 PCT/CN2021/103544 CN2021103544W
Authority
WO
WIPO (PCT)
Prior art keywords
data
handle controller
tracking
time
pose
Prior art date
Application number
PCT/CN2021/103544
Other languages
English (en)
French (fr)
Inventor
吴涛
Original Assignee
青岛小鸟看看科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛小鸟看看科技有限公司
Priority to EP21834056.0A (published as EP4155873A4)
Publication of WO2022002132A1
Priority to US18/086,425 (published as US20230119687A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/0101: Head-up displays characterised by optical features
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/017: Head mounted
    • G02B27/0172: Head mounted characterised by optical features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012: Head tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304: Detection arrangements using opto-electronic means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/0101: Head-up displays characterised by optical features
    • G02B2027/0138: Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/0101: Head-up displays characterised by optical features
    • G02B2027/014: Head-up displays characterised by optical features comprising information/image processing systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30204: Marker

Definitions

  • The invention relates to the technical field of computer vision and, in particular, to a hybrid tracking method and device for a multi-sensor handle controller.
  • In existing systems, cameras capture optical pattern marker points built into the handle controller in real time and, combined with the controller's built-in IMU (Inertial Measurement Unit) inertial navigation sensor, computer vision techniques track the 6DoF (degrees of freedom) position and attitude information of the handle controller in space;
  • or the handle controller has a built-in ultrasonic sensor and IMU inertial navigation sensor, and the ultrasonic data and IMU inertial navigation data corresponding to the handle's motion in space are combined to track the 6DoF position and attitude information of the handle controller in space in real time;
  • or the handle controller has a built-in electromagnetic sensor and IMU inertial navigation sensor, and the electromagnetic data and IMU inertial navigation data corresponding to the handle's motion in space are combined to track the 6DoF position and attitude information of the handle controller in space in real time.
  • The electromagnetic sensor on the handle controller is sensitive to electromagnetic signals in the environment and is easily disturbed by complex ambient electromagnetic signals, causing it to produce erroneous electromagnetic tracking data for the handle controller.
  • The tracking performance of the handle controller is also affected in such cases. Current handle controller attitude tracking methods are therefore affected by different interference sources in ordinary environments and have many limitations, causing the handle controller to drift, jitter or freeze in the virtual scene, which seriously degrades the user experience.
  • The purpose of the present invention is to provide a hybrid tracking method and device for a multi-sensor handle controller, so as to solve the problem that existing handle controller tracking methods are easily affected by different interference sources, are limited in use in virtual scenes, and degrade the user experience.
  • One aspect of the present invention is to provide a hybrid tracking method for a multi-sensor handle controller, comprising: acquiring tracking data of the handle controller, where the tracking data include optical tracking data, electromagnetic tracking data and inertial navigation data; creating a state transition model and an observation model of an extended Kalman filter iteration strategy from the tracking data and fusing the three data streams; and thereby obtaining the position and attitude information of the handle controller in space.
  • Another aspect of the present invention is to provide a multi-sensor handle controller hybrid tracking device, wherein the surface of the handle controller is provided with optical pattern marker points, and the device includes:
  • a plurality of cameras arranged on the helmet display for tracking and photographing the optical pattern marker points;
  • an optical sensor for acquiring optical tracking data of the optical pattern marker points;
  • an electromagnetic sensor for acquiring the electromagnetic tracking data of the handle controller;
  • an inertial navigation sensor for acquiring the inertial navigation data of the handle controller;
  • a model building module for creating the state transition model and observation model of the extended Kalman filter iteration strategy and performing extended Kalman filter fusion on the optical tracking data, electromagnetic tracking data and inertial navigation data;
  • a tracking module for obtaining the position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy.
  • Compared with the prior art, the present invention has the following advantages and beneficial effects:
  • The invention acquires the tracking data of the handle controller from an optical sensor, an electromagnetic sensor and an inertial navigation sensor and performs hybrid tracking of the handle controller, taking into account the influence of different interference sources on tracking performance and alleviating the usage limitations of the tracking method.
  • The invention uses the three sensor data streams of the handle controller to create the state transition model and observation model of the extended Kalman filter, optimizing the stability of the handle controller in virtual-scene applications while maintaining its high-precision tracking quality.
  • FIG. 1 is a schematic flowchart of the hybrid tracking method of the multi-sensor handle controller according to the present invention.
  • The handle controller is an indispensable interactive device in the VR/AR/MR field and an essential device for users to interact with virtual reality, augmented reality or mixed reality scenes; it is used in conjunction with VR all-in-one headsets. Key performance parameters of the controller, such as tracking accuracy, tracking latency and tracking stability, therefore directly affect the user experience.
  • the hybrid tracking method based on multiple sensors of the present invention can track the position and attitude information of the handle controller in the three-dimensional space.
  • FIG. 1 is a schematic flowchart of the hybrid tracking method of the multi-sensor handle controller according to the present invention. As shown in FIG. 1 , the hybrid tracking method of the multi-sensor handle controller according to the present invention includes:
  • Step S1: obtaining the tracking data of the handle controller, the tracking data including optical tracking data, electromagnetic tracking data and inertial navigation data; the optical tracking data are measured by an optical sensor, the electromagnetic tracking data by an electromagnetic sensor, and the inertial navigation data by a gravitational acceleration sensor and a gyroscope sensor, which measure the movement and rotation in the three directions of the x-, y- and z-axes.
  • An inertial measurement unit (IMU) can be used for the measurement; an IMU includes an accelerometer, a gyroscope and a magnetometer: the accelerometer measures movement along the x-, y- and z-axes, the gyroscope measures 360° rotational motion, and the magnetometer measures the strength and direction of the magnetic field and the orientation of the device;
  • Step S2: creating a state transition model and an observation model of the extended Kalman filter iteration strategy according to the tracking data, and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
  • Step S3: obtaining the position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy, where the attitude information refers to the orientation of the handle controller.
  • By taking the optical tracking data, electromagnetic tracking data and inertial navigation data of the handle controller into account when creating the state transition model and observation model of the extended Kalman filter iteration strategy, the influence of different interference factors can be reduced and the stability of handle controller tracking enhanced, making it suitable for many different environments. In virtual-environment applications in particular, the user experience is improved.
  • the implementation of the multi-sensor-based handle controller hybrid tracking method of the present invention can be roughly divided into two stages, namely the acquisition stage of the tracking data and the creation stage of the extended Kalman filter iterative strategy.
  • The step of obtaining the optical tracking data of the handle controller includes:
  • capturing the motion of the handle controller in space in real time to obtain a handle image, the handle image including the optical pattern marker points arranged on the handle controller; a plurality of cameras capture the motion of the handle controller;
  • solving the position and attitude data of the optical pattern marker points relative to the camera by the PnP (perspective-n-point) algorithm, which serve as the optical tracking data of the handle controller.
  • The electromagnetic sensor includes an electromagnetic signal generator built into the handle controller and an electromagnetic signal receiver built into the helmet display; the motion of the handle controller is tracked by the electromagnetic sensor.
  • The step of obtaining the electromagnetic tracking data of the handle controller includes:
  • obtaining the position and attitude data of the electromagnetic signal generator relative to the electromagnetic signal receiver through a six-degree-of-freedom electromagnetic positioning solution model, which serve as the electromagnetic tracking data of the handle controller.
  • The IMU inertial navigation data of the handle controller are received through a wireless transmission module built into the helmet display, where the IMU inertial navigation data include the three-axis (x-, y- and z-axis) data measured by the gravitational acceleration sensor and the three-axis (x-, y- and z-axis) data measured by the gyroscope sensor.
  • The step of creating the state transition model and observation model of the extended Kalman filter iteration strategy according to the tracking data includes:
  • converting the 6DoF data of the second, third and fourth cameras into 6DoF data in the coordinate system with the first camera as origin;
  • denoting the 6DoF optical data in that coordinate system Pose_optic, which includes the angle and translation information in the x-axis direction, the y-axis direction and the z-axis direction;
  • converting the electromagnetic 6DoF data of the handle controller into 6DoF data in the coordinate system with the first camera as origin, denoted Pose_EM, which likewise includes the angle and translation information in the x-, y- and z-axis directions;
  • fusing the optical data Pose_optic and the electromagnetic data Pose_EM into new position data Pose_OEM;
  • using the new position data Pose_OEM to build the state transition model and observation model of the extended Kalman filter iteration strategy.
  • The step of fusing the optical data Pose_optic and the electromagnetic data Pose_EM into new position data denoted Pose_OEM includes:
  • taking the per-axis averages of Pose_optic and Pose_EM in the x-, y- and z-axis directions as Pose_OEM; or
  • computing per-axis weighted averages of Pose_optic and Pose_EM in the x-, y- and z-axis directions to obtain Pose_OEM, where the weights of the optical and electromagnetic data can be determined according to the actual application environment of the handle controller; preferably, the weight of the optical data Pose_optic is set to 0.65 and the weight of the electromagnetic data Pose_EM to 0.35.
  • The state transition model of the extended Kalman filter iteration strategy is given by the formula in the description below, in which the values of the velocity components in the three axis directions x, y and z at the initial time are 0, and ΔT represents the time difference between time k and time k-1.
  • The refresh rate of the Pose_OEM tracking data is 200 Hz, i.e., the time difference ΔT between time k and time k-1 is 5 ms.
  • The observation model of the extended Kalman filter iteration strategy is likewise given by the formula in the description below.
  • The method further includes setting a process noise variance matrix, shown in the description below, in which:
  • PNoiseCov represents the process noise variance matrix,
  • p_error represents the displacement noise error,
  • v_error represents the velocity noise error.
  • The process noise variance matrix is set as a diagonal matrix.
  • The displacement noise errors of the handle controller in the three directions of the x-, y- and z-axes are set equal, as are the velocity noise errors in the three directions; moreover, the lower the movement speed of the handle, the greater the process noise of position translation and velocity in the three axis directions x, y and z, and the higher the movement speed, the smaller that process noise.
  • The displacement noise error and the velocity noise error in the process noise are adaptively adjusted by the model p_error = (1 - x)^4 × 10 and v_error = (1 - x)^5 × 10, with x ∈ [0.01, 0.1], where x represents the smoothing confidence of the position movement data in the three axis directions x, y and z at time k in the Pose_OEM data of the handle controller's current frame.
  • x is inversely proportional to the movement speed of the handle controller.
  • The method further includes setting a measurement noise variance matrix, shown in the description below, in which:
  • MNoiseCov represents the measurement noise variance matrix,
  • M_error represents the measurement noise error.
  • The measurement noise variance matrix is also set as a diagonal matrix, and the values for the three axis directions x, y and z can be considered equal.
  • M_error can be set to 2; of course, M_error can also take other values.
  • Once the models and noise matrices are set, the tracking of each frame of the handle controller can optimize the 6DoF position data according to the extended Kalman filter iteration strategy.
  • The final 6DoF tracking data of the controller consist of the position data generated by the extended Kalman filter iterative optimization, i.e., the smoothed position data in the x-, y- and z-axis directions, together with the attitude data in Pose_OEM.
  • The hybrid tracking method of the multi-sensor handle controller of the present invention can be applied in a virtual scene to track the motion of the handle controller.
  • The multi-sensor handle controller hybrid tracking method of the present invention can be applied in a multi-sensor handle controller hybrid tracking device.
  • The surface of the handle controller is provided with optical pattern marker points, which are used to obtain the optical tracking data; the device includes:
  • a plurality of cameras, arranged on the helmet display, for tracking and photographing the optical pattern marker points.
  • For example, four cameras can be arranged in a rectangular distribution at the four corners of the helmet display; the optical pattern marker points can be ring patterns or other marker patterns arranged on the handle controller, which the present invention does not specifically limit;
  • an optical sensor for acquiring optical tracking data of the optical pattern marker points;
  • an electromagnetic sensor for acquiring the electromagnetic tracking data of the handle controller;
  • an inertial navigation sensor for acquiring the inertial navigation data of the handle controller;
  • a model building module for creating the state transition model and observation model of the extended Kalman filter iteration strategy and performing extended Kalman filter fusion on the optical tracking data, electromagnetic tracking data and inertial navigation data;
  • a tracking module for obtaining the position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy.
  • The state transition model of the extended Kalman filter iteration strategy created by the model building module is the formula given in the description below, in which the values of the velocity components in the three axis directions x, y and z at the initial time are 0, and ΔT represents the time difference between time k and time k-1.
  • The observation model of the extended Kalman filter iteration strategy is likewise the formula given in the description below.
  • The invention takes optical tracking data, electromagnetic tracking data and inertial navigation data into account, reduces the influence of environmental interference factors, and improves the tracking stability of the handle controller.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention discloses a multi-sensor handle controller hybrid tracking method and device. The method comprises: acquiring tracking data of a handle controller, the tracking data including optical tracking data, electromagnetic tracking data and inertial navigation data; creating a state transition model and an observation model of an extended Kalman filter iteration strategy according to the tracking data, and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data; and obtaining position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy. The present invention performs hybrid tracking of the handle controller based on an optical sensor, an electromagnetic sensor and an inertial navigation sensor, and can optimize the tracking stability of the handle controller while maintaining high-precision tracking quality.

Description

Multi-sensor handle controller hybrid tracking method and device
Technical Field
The present invention relates to the technical field of computer vision and, in particular, to a multi-sensor handle controller hybrid tracking method and device.
Background Art
At present, in the VR/AR/MR field, most systems use camera devices: marker points with special optical patterns are built into the handle controller, the motion state of these optical patterns in space is captured in real time and, combined with the controller's built-in IMU (Inertial Measurement Unit) inertial navigation sensor, computer vision techniques are used to track the 6DoF (degrees of freedom) position and attitude information of the handle controller in space. Some systems instead build an ultrasonic sensor and an IMU inertial navigation sensor into the handle controller and combine the ultrasonic data and IMU inertial navigation data corresponding to the handle's motion in space to track the controller's 6DoF position and attitude information in real time. Still others build an electromagnetic sensor and an IMU inertial navigation sensor into the handle controller and combine the electromagnetic data and IMU inertial navigation data corresponding to the handle's motion in space to track the controller's 6DoF position and attitude information in real time.
However, when existing handle controller attitude tracking methods track the attitude information of the handle controller, several factors can significantly degrade tracking performance. For example, cameras are sensitive to ambient light, and the complexity of the ambient light directly affects camera imaging quality and, in turn, the tracking performance of the handle controller. In a real environment, several marker points of the optical pattern on the handle controller can easily lie at the same incidence angle to the camera, or the controller may be very close to the camera; the imaged blobs of the optical pattern marker points then overlap or stick together in the tracking camera's image, which degrades the tracking performance of the handle controller. The electromagnetic sensor on the handle controller is sensitive to electromagnetic signals in the environment and is easily disturbed by complex ambient electromagnetic signals, causing it to produce erroneous electromagnetic tracking data; for example, tracking performance also suffers when the controller's electromagnetic sensor is close to a desktop computer, or close to speakers, microphones, televisions, refrigerators and the like. Therefore, current handle controller attitude tracking methods are affected by different interference sources present in ordinary environments and have many limitations, causing the handle controller to drift, jitter or freeze in the virtual scene, which seriously affects the user experience.
Summary of the Invention
In view of the above problems, the purpose of the present invention is to provide a multi-sensor handle controller hybrid tracking method and device, so as to solve the problem that existing handle controller tracking methods are easily affected by different interference sources, are limited in use in virtual scenes, and degrade the user experience.
In order to achieve the above purpose, the present invention adopts the following technical solutions:
One aspect of the present invention provides a multi-sensor handle controller hybrid tracking method, comprising:
acquiring tracking data of the handle controller, the tracking data including optical tracking data, electromagnetic tracking data and inertial navigation data;
creating a state transition model and an observation model of an extended Kalman filter iteration strategy according to the tracking data, and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
obtaining position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy.
Another aspect of the present invention provides a multi-sensor handle controller hybrid tracking device, wherein the surface of the handle controller is provided with optical pattern marker points, and the device includes:
a plurality of cameras, arranged on the helmet display, for tracking and photographing the optical pattern marker points;
an optical sensor for acquiring optical tracking data of the optical pattern marker points;
an electromagnetic sensor for acquiring electromagnetic tracking data of the handle controller;
an inertial navigation sensor for acquiring inertial navigation data of the handle controller;
a helmet display with a built-in wireless transmission module for receiving the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
a model building module for creating the state transition model and observation model of the extended Kalman filter iteration strategy and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
a tracking module for obtaining position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
The present invention acquires the tracking data of the handle controller based on an optical sensor, an electromagnetic sensor and an inertial navigation sensor and performs hybrid tracking of the handle controller, taking into account the influence of different interference sources on the controller's tracking performance and alleviating the usage limitations of existing tracking methods.
The present invention uses the three sensor data streams of the handle controller to create the state transition model and observation model of the extended Kalman filter, optimizing the stability of the handle controller in virtual-scene applications while maintaining its high-precision tracking quality.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the multi-sensor handle controller hybrid tracking method according to the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention are described below with reference to the accompanying drawings. Those of ordinary skill in the art will recognize that the described embodiments can be modified in various different ways, or combinations thereof, without departing from the spirit and scope of the present invention. Accordingly, the drawings and the description are illustrative in nature and do not limit the protection scope of the claims. In addition, in this specification, the drawings are not drawn to scale, and the same reference numerals denote the same parts.
In the VR/AR/MR field, the handle controller is an indispensable interactive device and an essential device for users to interact with virtual reality, augmented reality or mixed reality scenes; it is used together with a VR all-in-one headset. Key performance parameters of the handle controller, such as tracking accuracy, tracking latency and tracking stability, therefore directly affect the user experience. The multi-sensor hybrid tracking method of the present invention can track the position and attitude information of the handle controller in three-dimensional space. FIG. 1 is a schematic flowchart of the multi-sensor handle controller hybrid tracking method according to the present invention. As shown in FIG. 1, the method includes:
Step S1, acquiring tracking data of the handle controller, the tracking data including optical tracking data, electromagnetic tracking data and inertial navigation data. The optical tracking data are measured by an optical sensor, the electromagnetic tracking data are measured by an electromagnetic sensor, and the inertial navigation data are measured by a gravitational acceleration sensor and a gyroscope sensor, which measure the movement and rotation in the three directions of the x-, y- and z-axes. For example, an inertial measurement unit (IMU) can be used for the measurement: an IMU includes an accelerometer, a gyroscope and a magnetometer, where the accelerometer measures movement along the x-, y- and z-axes, the gyroscope measures 360° rotational motion, and the magnetometer measures the strength and direction of the magnetic field and the orientation of the device.
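As an illustrative aside (not from the patent text): the state transition model introduced below consumes accelerometer data with the gravity direction removed. A minimal Python sketch of that preprocessing step, assuming the IMU's orientation in the world frame, R_world_imu, has already been estimated elsewhere, e.g. by an attitude filter:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])  # m/s^2, assumed world z-axis pointing up

def gravity_compensated_accel(R_world_imu: np.ndarray, accel_imu: np.ndarray) -> np.ndarray:
    """Rotate a raw accelerometer sample into the world frame and subtract
    gravity, yielding the a_k input consumed by the state transition model."""
    return R_world_imu @ accel_imu - GRAVITY
```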
Step S2, creating a state transition model and an observation model of an extended Kalman filter iteration strategy according to the tracking data, and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data.
Step S3, obtaining position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy, where the attitude information refers to the orientation of the handle controller.
By taking the optical tracking data, electromagnetic tracking data and inertial navigation data of the handle controller into account when creating the state transition model and observation model of the extended Kalman filter iteration strategy, the influence of different interference factors can be reduced and the stability of handle controller tracking enhanced, making the method suitable for many different environments. In virtual-environment applications in particular, the user experience is improved.
The implementation of the multi-sensor handle controller hybrid tracking method of the present invention can be roughly divided into two stages: the acquisition stage of the tracking data and the creation stage of the extended Kalman filter iteration strategy.
In one embodiment, the step of acquiring the optical tracking data of the handle controller includes:
capturing the motion of the handle controller in space in real time to obtain a handle image, the handle image containing the optical pattern marker points arranged on the handle controller; this can be achieved, for example, by a plurality of cameras arranged on the surface of the helmet display, which capture the motion of the handle controller;
performing feature detection on the handle image and obtaining the position coordinates of the optical pattern marker points of the handle controller in the handle image;
solving the position and attitude data of the optical pattern marker points relative to the camera by a PnP (perspective-n-point) algorithm, which serve as the optical tracking data of the handle controller (a code sketch of this step follows below).
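A minimal sketch of this PnP step using OpenCV's solver; the marker geometry, detected image points and camera intrinsics are placeholders, not values from the patent:

```python
import cv2
import numpy as np

def solve_optical_pose(marker_points_3d, image_points_2d, camera_matrix, dist_coeffs):
    """Solve the marker pose relative to one tracking camera with PnP.

    marker_points_3d: (N, 3) marker-point coordinates in the controller's model frame.
    image_points_2d:  (N, 2) detected marker centers in the camera image.
    Returns (rvec, tvec): rotation vector and translation of the marker."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, dtype=np.float32),
        np.asarray(image_points_2d, dtype=np.float32),
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP solution failed for this frame")
    return rvec, tvec
```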
The electromagnetic sensor includes an electromagnetic signal generator and an electromagnetic signal receiver, where the electromagnetic signal generator is built into the handle controller and the electromagnetic signal receiver is built into the helmet display; the motion of the handle controller is tracked by the electromagnetic sensor. In one embodiment, the step of acquiring the electromagnetic tracking data of the handle controller includes:
emitting an electromagnetic signal with the electromagnetic signal generator built into the handle controller;
receiving the electromagnetic signal with the electromagnetic signal receiver built into the helmet display;
obtaining the position and attitude data of the electromagnetic signal generator relative to the electromagnetic signal receiver through a six-degree-of-freedom electromagnetic positioning solution model, which serve as the electromagnetic tracking data of the handle controller.
In one embodiment, the IMU inertial navigation data of the handle controller are received through a wireless transmission module built into the helmet display, where the IMU inertial navigation data include the three-axis (x-, y- and z-axis) data measured by the gravitational acceleration sensor and the three-axis (x-, y- and z-axis) data measured by the gyroscope sensor.
Preferably, four cameras are arranged on the helmet display to capture the motion of the handle controller. In one embodiment, the step of creating the state transition model and observation model of the extended Kalman filter iteration strategy according to the tracking data includes:
obtaining, in real time from the optical tracking data, the 6DoF data and velocity information of the optical pattern marker points on the handle controller relative to the first camera on the helmet display, the velocity information being denoted V_x, V_y, V_z;
using the calibration parameters between the cameras to convert the 6DoF data of the second, third and fourth cameras into 6DoF data in the coordinate system with the first camera as origin, and denoting the 6DoF optical data in that coordinate system Pose_optic, which includes the angle and translation information in the x-axis direction, the angle and translation information in the y-axis direction, and the angle and translation information in the z-axis direction;
using the calibration parameters between the electromagnetic signal receiver built into the helmet display and the first camera to convert the electromagnetic 6DoF data of the handle controller into 6DoF data in the coordinate system with the first camera as origin, denoted Pose_EM, which likewise includes the angle and translation information in the x-, y- and z-axis directions;
fusing the optical data Pose_optic and the electromagnetic data Pose_EM into new position data, denoted Pose_OEM;
building the state transition model and observation model of the extended Kalman filter iteration strategy with the new position data Pose_OEM (the coordinate conversion is sketched below).
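The conversions into the first camera's coordinate system can be expressed with 4×4 homogeneous transforms; a hypothetical helper, assuming the inter-camera and receiver-to-camera calibration matrices are available:

```python
import numpy as np

def to_first_camera_frame(T_cam1_from_src: np.ndarray, T_src_from_handle: np.ndarray) -> np.ndarray:
    """Express a handle pose measured in a source frame (camera 2/3/4 or the
    electromagnetic receiver) in the coordinate system of the first camera.

    T_cam1_from_src:   4x4 calibration transform from the source frame to camera 1.
    T_src_from_handle: 4x4 measured pose of the handle in the source frame."""
    return T_cam1_from_src @ T_src_from_handle
```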
Further, the step of fusing the optical data Pose_optic and the electromagnetic data Pose_EM into new position data, denoted Pose_OEM, includes:
determining, for the x-, y- and z-axes respectively, whether the angle difference between the data Pose_optic and the data Pose_EM is smaller than a first set threshold and whether the displacement difference is smaller than a second set threshold; the angles and displacements in the three axis directions are compared separately, e.g., the angle difference between Pose_optic and Pose_EM in the x-axis direction, then in the y-axis direction, then in the z-axis direction, and the displacement information is compared likewise to obtain the displacement differences in the three axis directions; the first set threshold can be set to about 3°, and the second set threshold to about 20 mm;
if the angle differences are smaller than the first set threshold and the displacement differences are smaller than the second set threshold, taking the per-axis averages of Pose_optic and Pose_EM in the x-, y- and z-axis directions as Pose_OEM; for example, the average of the x-axis angles of Pose_optic and Pose_EM becomes the x-axis angle of Pose_OEM;
if the angle differences are not smaller than the first set threshold and/or the displacement differences are not smaller than the second set threshold, computing per-axis weighted averages of Pose_optic and Pose_EM in the x-, y- and z-axis directions to obtain Pose_OEM; the weights of the optical and electromagnetic data can be determined according to the actual application environment of the handle controller; preferably, the weight of the optical data Pose_optic is set to 0.65 and the weight of the electromagnetic data Pose_EM to 0.35 (this fusion rule is sketched in code below).
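A sketch of this fusion rule, with Pose_optic and Pose_EM represented as per-axis [angle, translation] vectors and the thresholds and weights taken from the values above:

```python
import numpy as np

ANGLE_THRESHOLD = 3.0       # first set threshold, about 3 degrees
TRANS_THRESHOLD = 20.0      # second set threshold, about 20 mm
W_OPTIC, W_EM = 0.65, 0.35  # preferred optical/electromagnetic weights

def fuse_pose_oem(pose_optic: np.ndarray, pose_em: np.ndarray) -> np.ndarray:
    """Fuse Pose_optic and Pose_EM into Pose_OEM.

    Both inputs are 6-vectors [rx, ry, rz, tx, ty, tz] (degrees, millimetres)."""
    angles_close = np.all(np.abs(pose_optic[:3] - pose_em[:3]) < ANGLE_THRESHOLD)
    trans_close = np.all(np.abs(pose_optic[3:] - pose_em[3:]) < TRANS_THRESHOLD)
    if angles_close and trans_close:
        return 0.5 * (pose_optic + pose_em)       # per-axis plain average
    return W_OPTIC * pose_optic + W_EM * pose_em  # per-axis weighted average
```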
Preferably, the state transition model of the extended Kalman filter iteration strategy is:

$$
\begin{bmatrix} P_x^{k} \\ P_y^{k} \\ P_z^{k} \\ V_x^{k} \\ V_y^{k} \\ V_z^{k} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & \Delta T & 0 & 0 \\
0 & 1 & 0 & 0 & \Delta T & 0 \\
0 & 0 & 1 & 0 & 0 & \Delta T \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} P_x^{k-1} \\ P_y^{k-1} \\ P_z^{k-1} \\ V_x^{k-1} \\ V_y^{k-1} \\ V_z^{k-1} \end{bmatrix}
+
\begin{bmatrix}
\tfrac{1}{2}\Delta T^{2} & 0 & 0 \\
0 & \tfrac{1}{2}\Delta T^{2} & 0 \\
0 & 0 & \tfrac{1}{2}\Delta T^{2} \\
\Delta T & 0 & 0 \\
0 & \Delta T & 0 \\
0 & 0 & \Delta T
\end{bmatrix}
\begin{bmatrix} a_x^{k} \\ a_y^{k} \\ a_z^{k} \end{bmatrix}
$$

where $P_x^{k}$ and $P_x^{k-1}$ are the displacement components in the x-axis direction after the optimization iteration at times k and k-1, $P_y^{k}$ and $P_y^{k-1}$ are those in the y-axis direction, and $P_z^{k}$ and $P_z^{k-1}$ are those in the z-axis direction; at the initial time, the displacement components in the three axis directions x, y and z are equal to those of Pose_OEM;
$V_x^{k}$ and $V_x^{k-1}$ are the velocity components in the x-axis direction in the optical tracking data at times k and k-1, $V_y^{k}$ and $V_y^{k-1}$ are those in the y-axis direction, and $V_z^{k}$ and $V_z^{k-1}$ are those in the z-axis direction; at the initial time, the velocity components in the three axis directions x, y and z are all 0;
$a_x^{k}$, $a_y^{k}$ and $a_z^{k}$ denote the motion data of the gravitational acceleration sensor along the x-, y- and z-axes at time k after the gravity direction has been removed;
$\Delta T$ denotes the time difference between time k and time k-1.
Further and optionally, the refresh rate of the Pose_OEM tracking data is 200 Hz, i.e., the time difference ΔT between time k and time k-1 is 5 ms.
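In code, the prediction step implied by this state transition model could look as follows; a sketch, not the patent's implementation, with the 6-dimensional state stacking position and velocity:

```python
import numpy as np

DT = 0.005  # 200 Hz Pose_OEM refresh rate -> 5 ms between time k-1 and time k

# State x = [Px, Py, Pz, Vx, Vy, Vz]^T
F = np.eye(6)
F[:3, 3:] = DT * np.eye(3)                 # P_k = P_{k-1} + V_{k-1} * dT + ...
B = np.vstack([0.5 * DT ** 2 * np.eye(3),  # ... + 0.5 * a_k * dT^2
               DT * np.eye(3)])            # V_k = V_{k-1} + a_k * dT

def ekf_predict(x, P, a_k, Q):
    """One EKF prediction step driven by the gravity-compensated accelerometer sample a_k."""
    x = F @ x + B @ a_k
    P = F @ P @ F.T + Q
    return x, P
```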
Further, the observation model of the extended Kalman filter iteration strategy is:

$$
\begin{bmatrix} \mathrm{OEM}_x^{k} \\ \mathrm{OEM}_y^{k} \\ \mathrm{OEM}_z^{k} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} P_x^{k} \\ P_y^{k} \\ P_z^{k} \\ V_x^{k} \\ V_y^{k} \\ V_z^{k} \end{bmatrix}
$$

where $P_x^{k}$, $P_y^{k}$ and $P_z^{k}$ are the position data in the x, y and z directions generated by the extended Kalman filter iterative optimization; k denotes the time instant; and $\mathrm{OEM}_x^{k}$, $\mathrm{OEM}_y^{k}$ and $\mathrm{OEM}_z^{k}$ are the position movement data along the x-, y- and z-axis directions at time k in the fused data Pose_OEM.
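The corresponding correction step, with the observation matrix H selecting the three position components of the state and the fused Pose_OEM position acting as the measurement; again a sketch under the same assumptions:

```python
import numpy as np

H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe positions only, not velocities

def ekf_update(x, P, z_oem, R):
    """Correct the predicted state with the fused Pose_OEM position measurement z_oem."""
    y = z_oem - H @ x               # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P
```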
In one embodiment of the present invention, the method further includes setting a process noise variance matrix as follows:

$$
\mathrm{PNoiseCov} = \operatorname{diag}\left(p\_error,\; p\_error,\; p\_error,\; v\_error,\; v\_error,\; v\_error\right)
$$

where PNoiseCov denotes the process noise variance matrix, p_error denotes the displacement noise error, and v_error denotes the velocity noise error.
The process noise variance matrix is set as a diagonal matrix. According to the noise properties of the system, the displacement noise errors of the handle controller in the three directions of the x-, y- and z-axes are set equal, and the velocity noise errors in the three directions are likewise set equal. Moreover, the lower the movement speed of the handle, the larger the process noise of the position translation and velocity in the three axis directions x, y and z; the higher the movement speed, the smaller that process noise. Preferably, the displacement noise error and the velocity noise error in the process noise are adaptively adjusted by the following model:

$$p\_error = (1 - x)^{4} \times 10,\quad x \in [0.01, 0.1]$$

$$v\_error = (1 - x)^{5} \times 10,\quad x \in [0.01, 0.1]$$

where x denotes the smoothing confidence of the position movement data along the three axis directions x, y and z at time k in the Pose_OEM data of the handle controller's current frame; x is inversely proportional to the movement speed of the handle controller.
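The adaptive process noise can be expressed directly from these two formulas; a small helper that builds the diagonal PNoiseCov matrix from the smoothing confidence x:

```python
import numpy as np

def process_noise(x_conf: float) -> np.ndarray:
    """Build PNoiseCov from the smoothing confidence x_conf in [0.01, 0.1]
    (inversely proportional to the controller's movement speed)."""
    x_conf = float(np.clip(x_conf, 0.01, 0.1))
    p_error = (1.0 - x_conf) ** 4 * 10.0  # displacement noise error
    v_error = (1.0 - x_conf) ** 5 * 10.0  # velocity noise error
    return np.diag([p_error] * 3 + [v_error] * 3)
```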
In one embodiment, the method further includes setting a measurement noise variance matrix as follows:

$$
\mathrm{MNoiseCov} = \operatorname{diag}\left(M\_error,\; M\_error,\; M\_error\right)
$$

where MNoiseCov denotes the measurement noise variance matrix and M_error denotes the measurement noise error.
In the present invention, the measurement noise variance matrix is also set as a diagonal matrix, and the values for the three axis directions x, y and z can be considered equal. Based on an overall noise evaluation of the multi-sensor handle controller hybrid tracking device of the present invention, M_error can be set to 2; of course, M_error can also take other values.
After the state transition model and observation model have been constructed and the process noise variance matrix and measurement noise variance matrix of the model have been set, the tracking of each frame of the handle controller can optimize the 6DoF position data according to the extended Kalman filter iteration strategy. The final 6DoF tracking data of the handle controller consist of the position data $P_x^{k}$, $P_y^{k}$, $P_z^{k}$ generated by the extended Kalman filter iterative optimization, i.e., the smoothed position data in the x-, y- and z-axis directions, together with the attitude data in Pose_OEM.
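Putting the sketches above together, one hypothetical tracking iteration per frame might read as follows; sensor acquisition and attitude handling are elided, and fuse_pose_oem, process_noise, ekf_predict and ekf_update are the helpers defined earlier:

```python
import numpy as np

M_ERROR = 2.0
R = np.diag([M_ERROR] * 3)  # measurement noise variance matrix MNoiseCov

def track_frame(x, P, pose_optic, pose_em, accel_k, x_conf):
    """Run one EKF iteration for the current frame; x[:3] is the smoothed position."""
    z_oem = fuse_pose_oem(pose_optic, pose_em)[3:]  # fused translation (tx, ty, tz)
    Q = process_noise(x_conf)
    x, P = ekf_predict(x, P, accel_k, Q)
    x, P = ekf_update(x, P, z_oem, R)
    return x, P
```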
It should be noted that the extended Kalman filter iteration mentioned in the present invention is a commonly used motion estimation algorithm, and the algorithm itself is not described in detail here.
The multi-sensor handle controller hybrid tracking method of the present invention can be applied in virtual scenes to track the motion of the handle controller.
The multi-sensor handle controller hybrid tracking method of the present invention can be applied in a multi-sensor handle controller hybrid tracking device. The surface of the handle controller is provided with optical pattern marker points, which are used to obtain the optical tracking data. The device includes:
a plurality of cameras, arranged on the helmet display, for tracking and photographing the optical pattern marker points; for example, four cameras can be arranged in a rectangular distribution at the four corners of the helmet display, and the optical pattern marker points can be ring patterns or other marker patterns arranged on the handle controller, which the present invention does not specifically limit;
an optical sensor for acquiring the optical tracking data of the optical pattern marker points;
an electromagnetic sensor for acquiring the electromagnetic tracking data of the handle controller;
an inertial navigation sensor for acquiring the inertial navigation data of the handle controller;
a helmet display with a built-in wireless transmission module for receiving the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
a model building module for creating the state transition model and observation model of the extended Kalman filter iteration strategy and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
a tracking module for obtaining the position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy (a hypothetical wiring of these modules is sketched below).
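As a purely illustrative composition (the class and attribute names below are hypothetical, not from the patent), the device modules can be wired around the per-frame EKF loop sketched earlier:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HybridTracker:
    """Holds the EKF state for one handle controller across frames."""
    # x stacks [P; V]; in practice P would be initialized from the first
    # Pose_OEM sample and V to zero, per the state transition model above.
    x: np.ndarray = field(default_factory=lambda: np.zeros(6))
    P: np.ndarray = field(default_factory=lambda: np.eye(6))  # state covariance

    def step(self, pose_optic, pose_em, accel_k, x_conf):
        self.x, self.P = track_frame(self.x, self.P, pose_optic, pose_em, accel_k, x_conf)
        return self.x[:3]  # smoothed position for this frame
```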
In one embodiment, the state transition model of the extended Kalman filter iteration strategy created by the model building module is:

$$
\begin{bmatrix} P_x^{k} \\ P_y^{k} \\ P_z^{k} \\ V_x^{k} \\ V_y^{k} \\ V_z^{k} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & \Delta T & 0 & 0 \\
0 & 1 & 0 & 0 & \Delta T & 0 \\
0 & 0 & 1 & 0 & 0 & \Delta T \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} P_x^{k-1} \\ P_y^{k-1} \\ P_z^{k-1} \\ V_x^{k-1} \\ V_y^{k-1} \\ V_z^{k-1} \end{bmatrix}
+
\begin{bmatrix}
\tfrac{1}{2}\Delta T^{2} & 0 & 0 \\
0 & \tfrac{1}{2}\Delta T^{2} & 0 \\
0 & 0 & \tfrac{1}{2}\Delta T^{2} \\
\Delta T & 0 & 0 \\
0 & \Delta T & 0 \\
0 & 0 & \Delta T
\end{bmatrix}
\begin{bmatrix} a_x^{k} \\ a_y^{k} \\ a_z^{k} \end{bmatrix}
$$

where $P_x^{k}$ and $P_x^{k-1}$ are the displacement components in the x-axis direction after the optimization iteration at times k and k-1, $P_y^{k}$ and $P_y^{k-1}$ are those in the y-axis direction, and $P_z^{k}$ and $P_z^{k-1}$ are those in the z-axis direction; at the initial time, the displacement components in the three axis directions x, y and z are equal to those of Pose_OEM;
$V_x^{k}$ and $V_x^{k-1}$ are the velocity components in the x-axis direction in the optical tracking data at times k and k-1, $V_y^{k}$ and $V_y^{k-1}$ are those in the y-axis direction, and $V_z^{k}$ and $V_z^{k-1}$ are those in the z-axis direction; at the initial time, the velocity components in the three axis directions x, y and z are all 0;
$a_x^{k}$, $a_y^{k}$ and $a_z^{k}$ denote the motion data of the gravitational acceleration sensor along the x-, y- and z-axes at time k after the gravity direction has been removed;
$\Delta T$ denotes the time difference between time k and time k-1.
Further, the observation model of the extended Kalman filter iteration strategy is:

$$
\begin{bmatrix} \mathrm{OEM}_x^{k} \\ \mathrm{OEM}_y^{k} \\ \mathrm{OEM}_z^{k} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} P_x^{k} \\ P_y^{k} \\ P_z^{k} \\ V_x^{k} \\ V_y^{k} \\ V_z^{k} \end{bmatrix}
$$

where $P_x^{k}$, $P_y^{k}$ and $P_z^{k}$ are the position data in the x, y and z directions generated by the extended Kalman filter iterative optimization; k denotes the time instant; and $\mathrm{OEM}_x^{k}$, $\mathrm{OEM}_y^{k}$ and $\mathrm{OEM}_z^{k}$ are the position movement data along the x-, y- and z-axis directions at time k in the fused data Pose_OEM.
It should be noted that the specific implementation of the multi-sensor handle controller hybrid tracking device of the present invention is substantially the same as that of the multi-sensor handle controller hybrid tracking method described above, and is not repeated here.
Through the constructed hybrid tracking system, the present invention takes the optical tracking data, electromagnetic tracking data and inertial navigation data into account, reduces the influence of environmental interference factors, and improves the tracking stability of the handle controller.
The above are only specific embodiments of the present invention. Under the above teaching of the present invention, those skilled in the art can make other improvements or modifications on the basis of the above embodiments. Those skilled in the art should understand that the above specific description serves only to better explain the purpose of the present invention, and the protection scope of the present invention is subject to the protection scope of the claims.

Claims (10)

  1. A multi-sensor handle controller hybrid tracking method, comprising:
    acquiring tracking data of a handle controller, the tracking data including optical tracking data, electromagnetic tracking data and inertial navigation data;
    creating a state transition model and an observation model of an extended Kalman filter iteration strategy according to the tracking data, and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
    obtaining position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy.
  2. The multi-sensor handle controller hybrid tracking method according to claim 1, wherein the step of creating the state transition model and observation model of the extended Kalman filter iteration strategy according to the tracking data comprises:
    obtaining, in real time from the optical tracking data, 6DoF data and velocity information of the optical pattern marker points on the handle controller relative to a first camera on a helmet display;
    using the calibration parameters between the cameras to convert the 6DoF data of the other cameras into 6DoF data in the coordinate system with the first camera as origin, denoted Pose_optic;
    using the calibration parameters between an electromagnetic signal receiver built into the helmet display and the first camera to convert the electromagnetic 6DoF data of the handle controller into 6DoF data in the coordinate system with the first camera as origin, denoted Pose_EM;
    fusing the data Pose_optic and the data Pose_EM into new position data, denoted Pose_OEM;
    building the state transition model and observation model of the extended Kalman filter iteration strategy with the new position data.
  3. The multi-sensor handle controller hybrid tracking method according to claim 2, wherein the step of fusing the data Pose_optic and the data Pose_EM into new position data, denoted Pose_OEM, comprises:
    determining, for the x-, y- and z-axes respectively, whether the angle difference between the data Pose_optic and the data Pose_EM is smaller than a first set threshold and whether the displacement difference is smaller than a second set threshold;
    if the angle differences are smaller than the first set threshold and the displacement differences are smaller than the second set threshold, taking the per-axis averages of the data Pose_optic and the data Pose_EM in the x-, y- and z-axis directions as the data Pose_OEM;
    if the angle differences are not smaller than the first set threshold and/or the displacement differences are not smaller than the second set threshold, computing per-axis weighted averages of the data Pose_optic and the data Pose_EM in the x-, y- and z-axis directions to obtain the data Pose_OEM.
  4. The multi-sensor handle controller hybrid tracking method according to claim 1, wherein the state transition model of the extended Kalman filter iteration strategy is:

    $$
    \begin{bmatrix} P_x^{k} \\ P_y^{k} \\ P_z^{k} \\ V_x^{k} \\ V_y^{k} \\ V_z^{k} \end{bmatrix}
    =
    \begin{bmatrix}
    1 & 0 & 0 & \Delta T & 0 & 0 \\
    0 & 1 & 0 & 0 & \Delta T & 0 \\
    0 & 0 & 1 & 0 & 0 & \Delta T \\
    0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 1
    \end{bmatrix}
    \begin{bmatrix} P_x^{k-1} \\ P_y^{k-1} \\ P_z^{k-1} \\ V_x^{k-1} \\ V_y^{k-1} \\ V_z^{k-1} \end{bmatrix}
    +
    \begin{bmatrix}
    \tfrac{1}{2}\Delta T^{2} & 0 & 0 \\
    0 & \tfrac{1}{2}\Delta T^{2} & 0 \\
    0 & 0 & \tfrac{1}{2}\Delta T^{2} \\
    \Delta T & 0 & 0 \\
    0 & \Delta T & 0 \\
    0 & 0 & \Delta T
    \end{bmatrix}
    \begin{bmatrix} a_x^{k} \\ a_y^{k} \\ a_z^{k} \end{bmatrix}
    $$

    where $P_x^{k}$ and $P_x^{k-1}$ are the displacement components in the x-axis direction after the optimization iteration at times k and k-1, $P_y^{k}$ and $P_y^{k-1}$ are those in the y-axis direction, and $P_z^{k}$ and $P_z^{k-1}$ are those in the z-axis direction;
    $V_x^{k}$ and $V_x^{k-1}$ are the velocity components in the x-axis direction in the optical tracking data at times k and k-1, $V_y^{k}$ and $V_y^{k-1}$ are those in the y-axis direction, and $V_z^{k}$ and $V_z^{k-1}$ are those in the z-axis direction;
    $a_x^{k}$, $a_y^{k}$ and $a_z^{k}$ denote the motion data of the gravitational acceleration sensor along the x-, y- and z-axes at time k after the gravity direction has been removed;
    $\Delta T$ denotes the time difference between time k and time k-1.
  5. The multi-sensor handle controller hybrid tracking method according to claim 4, wherein the observation model of the extended Kalman filter iteration strategy is:

    $$
    \begin{bmatrix} \mathrm{OEM}_x^{k} \\ \mathrm{OEM}_y^{k} \\ \mathrm{OEM}_z^{k} \end{bmatrix}
    =
    \begin{bmatrix}
    1 & 0 & 0 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0 & 0
    \end{bmatrix}
    \begin{bmatrix} P_x^{k} \\ P_y^{k} \\ P_z^{k} \\ V_x^{k} \\ V_y^{k} \\ V_z^{k} \end{bmatrix}
    $$

    where $P_x^{k}$, $P_y^{k}$ and $P_z^{k}$ are the position data in the x, y and z directions generated by the extended Kalman filter iterative optimization; k denotes the time instant; and $\mathrm{OEM}_x^{k}$, $\mathrm{OEM}_y^{k}$ and $\mathrm{OEM}_z^{k}$ are the position movement data along the x-, y- and z-axis directions at time k in the fused data.
  6. The multi-sensor handle controller hybrid tracking method according to claim 1, further comprising setting a process noise variance matrix as follows:

    $$\mathrm{PNoiseCov} = \operatorname{diag}\left(p\_error,\; p\_error,\; p\_error,\; v\_error,\; v\_error,\; v\_error\right)$$

    where PNoiseCov denotes the process noise variance matrix, p_error denotes the displacement noise error, and v_error denotes the velocity noise error.
  7. The multi-sensor handle controller hybrid tracking method according to claim 1, further comprising setting a measurement noise variance matrix as follows:

    $$\mathrm{MNoiseCov} = \operatorname{diag}\left(M\_error,\; M\_error,\; M\_error\right)$$

    where MNoiseCov denotes the measurement noise variance matrix and M_error denotes the measurement noise error.
  8. The multi-sensor handle controller hybrid tracking method according to claim 1, wherein the step of acquiring the optical tracking data of the handle controller comprises:
    capturing the motion of the handle controller in space in real time to obtain a handle image;
    performing feature detection on the handle image and obtaining the position coordinates of the optical pattern marker points of the handle controller in the handle image;
    solving the position and attitude data of the optical pattern marker points relative to the camera by a PnP algorithm, which serve as the optical tracking data of the handle controller.
  9. The multi-sensor handle controller hybrid tracking method according to claim 1, wherein the step of acquiring the electromagnetic tracking data of the handle controller comprises:
    emitting an electromagnetic signal with an electromagnetic signal generator built into the handle controller;
    receiving the electromagnetic signal with an electromagnetic signal receiver built into the helmet display;
    obtaining the position and attitude data of the electromagnetic signal generator relative to the electromagnetic signal receiver through a six-degree-of-freedom electromagnetic positioning solution model, which serve as the electromagnetic tracking data of the handle controller.
  10. A multi-sensor handle controller hybrid tracking device, wherein the surface of the handle controller is provided with optical pattern marker points, and the device comprises:
    a plurality of cameras, arranged on a helmet display, for tracking and photographing the optical pattern marker points;
    an optical sensor for acquiring optical tracking data of the optical pattern marker points;
    an electromagnetic sensor for acquiring electromagnetic tracking data of the handle controller;
    an inertial navigation sensor for acquiring inertial navigation data of the handle controller;
    the helmet display, having a built-in wireless transmission module for receiving the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
    a model building module for creating a state transition model and an observation model of an extended Kalman filter iteration strategy and performing extended Kalman filter fusion on the optical tracking data, the electromagnetic tracking data and the inertial navigation data;
    a tracking module for obtaining position and attitude information of the handle controller in space according to the extended Kalman filter iteration strategy.
PCT/CN2021/103544 2020-07-01 2021-06-30 Multi-sensor handle controller hybrid tracking method and device WO2022002132A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21834056.0A EP4155873A4 (en) 2020-07-01 2021-06-30 HYBRID TRACKING METHOD AND APPARATUS FOR MULTIPLE-SENSOR HANDLE CONTROL
US18/086,425 US20230119687A1 (en) 2020-07-01 2022-12-21 Multi-sensor handle controller hybrid tracking method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010625647.9A 2020-07-01 2020-07-01 Multi-sensor handle controller hybrid tracking method and device
CN202010625647.9 2020-07-01

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/086,425 Continuation US20230119687A1 (en) 2020-07-01 2022-12-21 Multi-sensor handle controller hybrid tracking method and device

Publications (1)

Publication Number Publication Date
WO2022002132A1 (zh) 2022-01-06

Family

ID=73337432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103544 WO2022002132A1 (zh) 2020-07-01 2021-06-30 Multi-sensor handle controller hybrid tracking method and device

Country Status (4)

Country Link
US (1) US20230119687A1 (zh)
EP (1) EP4155873A4 (zh)
CN (1) CN111949123B (zh)
WO (1) WO2022002132A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111949123B (zh) 2020-07-01 2023-08-08 青岛小鸟看看科技有限公司 Multi-sensor handle controller hybrid tracking method and device
CN112416125A (zh) * 2020-11-17 2021-02-26 青岛小鸟看看科技有限公司 VR all-in-one headset
CN113992841A (zh) * 2021-09-27 2022-01-28 青岛小鸟看看科技有限公司 Controller optical tracking method and system under strong outdoor light
CN115480656A (zh) * 2021-10-11 2022-12-16 深圳市瑞立视多媒体科技有限公司 Wireless handle interaction system based on the optical-inertial fusion principle
CN115311353B (zh) * 2022-08-29 2023-10-10 玩出梦想(上海)科技有限公司 Graph-optimization tightly-coupled tracking method and system for a multi-sensor multi-handle controller
CN116257134B (zh) * 2023-02-09 2024-04-09 玩出梦想(上海)科技有限公司 Handle and helmet tracking method, apparatus, device and medium in a non-inertial reference frame

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160305784A1 (en) * 2015-04-17 2016-10-20 Regents Of The University Of Minnesota Iterative kalman smoother for robust 3d localization for vision-aided inertial navigation
US20190243472A1 (en) * 2018-02-02 2019-08-08 Sony Interactive Entertainment Inc. Head-mounted display to controller clock synchronization over em field
CN110782492A (zh) * 2019-10-08 2020-02-11 三星(中国)半导体有限公司 位姿跟踪方法及装置
CN111949123A (zh) * 2020-07-01 2020-11-17 青岛小鸟看看科技有限公司 Multi-sensor handle controller hybrid tracking method and device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101813780B (zh) * 2010-04-12 2012-08-15 杭州掌图信息技术有限公司 Real-time moving-target tracking method integrating GPS and multiple sensors
CN104764452A (zh) * 2015-04-23 2015-07-08 北京理工大学 Hybrid pose tracking method based on inertial and optical tracking systems
CN104764451A (zh) * 2015-04-23 2015-07-08 北京理工大学 Target attitude tracking method based on inertial and geomagnetic sensors
CN106500695B (zh) * 2017-01-05 2019-02-01 大连理工大学 Human posture recognition method based on adaptive extended Kalman filtering
US10216265B1 (en) * 2017-08-07 2019-02-26 Rockwell Collins, Inc. System and method for hybrid optical/inertial headtracking via numerically stable Kalman filter
CN108759826B (zh) * 2018-04-12 2020-10-27 浙江工业大学 UAV motion tracking method based on multi-sensor parameter fusion of a mobile phone and a UAV
WO2020023524A1 (en) * 2018-07-23 2020-01-30 Magic Leap, Inc. Method and system for resolving hemisphere ambiguity using a position vector
CN109376785B (zh) * 2018-10-31 2021-09-24 东南大学 Navigation method fusing inertial and monocular vision based on iterated extended Kalman filtering
CN110081881B (zh) * 2019-04-19 2022-05-10 成都飞机工业(集团)有限责任公司 Carrier-landing guidance method based on UAV multi-sensor information fusion technology
CN110503667A (zh) * 2019-08-06 2019-11-26 北京超维度计算科技有限公司 Target tracking method based on extended Kalman filtering and an interacting multiple model
CN110530356B (zh) * 2019-09-04 2021-11-23 海信视像科技股份有限公司 Pose information processing method, apparatus, device and storage medium
CN110956653B (zh) * 2019-11-29 2021-05-04 中国科学院空间应用工程与技术中心 Satellite-video dynamic target tracking method fusing correlation filtering and motion estimation
CN111174683B (zh) * 2020-01-07 2021-11-30 青岛小鸟看看科技有限公司 Handle positioning method, head-mounted display device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160305784A1 (en) * 2015-04-17 2016-10-20 Regents Of The University Of Minnesota Iterative kalman smoother for robust 3d localization for vision-aided inertial navigation
US20190243472A1 (en) * 2018-02-02 2019-08-08 Sony Interactive Entertainment Inc. Head-mounted display to controller clock synchronization over em field
CN110782492A (zh) * 2019-10-08 2020-02-11 三星(中国)半导体有限公司 Pose tracking method and device
CN111949123A (zh) * 2020-07-01 2020-11-17 青岛小鸟看看科技有限公司 Multi-sensor handle controller hybrid tracking method and device

Also Published As

Publication number Publication date
US20230119687A1 (en) 2023-04-20
CN111949123B (zh) 2023-08-08
EP4155873A4 (en) 2023-11-08
CN111949123A (zh) 2020-11-17
EP4155873A1 (en) 2023-03-29

Similar Documents

Publication Publication Date Title
WO2022002132A1 (zh) Multi-sensor handle controller hybrid tracking method and device
Rambach et al. Learning to fuse: A deep learning approach to visual-inertial camera pose estimation
CN111156998B Mobile robot localization method based on fusion of RGB-D camera and IMU information
TWI397671B System and method for locating a carrier, estimating carrier attitude and building a map
TWI722280B Controller tracking for multiple degrees of freedom
CN106525074B Compensation method and device for gimbal drift, gimbal, and unmanned aerial vehicle
CN108846867A SLAM system based on multi-camera panoramic inertial navigation
CN111091587B Low-cost motion capture method based on visual markers
CN110533719B Augmented reality positioning method and device based on environmental visual feature point recognition
WO2019104571A1 Image processing method and device
CN112146678B Method for determining calibration parameters and electronic device
JP2006099109A System and method for detecting motion of an image capture device using two two-axis linear accelerometers
CN113551665B High-dynamic motion state sensing system and sensing method for moving carriers
CN108154533A Position and attitude determination method, apparatus and electronic device
JPH11306363A Image input device and image input method
Satoh et al. A head tracking method using bird's-eye view camera and gyroscope
CN112985450B Binocular visual-inertial odometry method with synchronization time error estimation
Satoh et al. Robot vision-based registration utilizing bird's-eye view with user's view
CN114972514A SLAM positioning method and apparatus, electronic device and readable storage medium
CN110415329B Three-dimensional modeling device and calibration method applied thereto
Jama et al. Parallel tracking and mapping for controlling vtol airframe
Blomster Orientation estimation combining vision and gyro measurements
JP3655065B2 Position and attitude detection device, position and attitude detection method, three-dimensional shape restoration device and three-dimensional shape restoration method
Lang et al. A new combination of vision-based and inertial tracking for fully mobile, wearable, and real-time operation
CN111553933A Optimization-based visual-inertial combined measurement method applied to real estate surveying

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21834056

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021834056

Country of ref document: EP

Effective date: 20221219

NENP Non-entry into the national phase

Ref country code: DE