CN115311353A - Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system - Google Patents


Info

Publication number
CN115311353A
CN115311353A (application CN202211036999.6A)
Authority
CN
China
Prior art keywords
handle
pose
determining
camera
helmet
Prior art date
Legal status
Granted
Application number
CN202211036999.6A
Other languages
Chinese (zh)
Other versions
CN115311353B (en)
Inventor
朱张豪
费越
Current Assignee
Shanghai Yuweia Technology Co ltd
Original Assignee
Shanghai Yuweia Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yuweia Technology Co ltd filed Critical Shanghai Yuweia Technology Co ltd
Priority to CN202211036999.6A priority Critical patent/CN115311353B/en
Publication of CN115311353A publication Critical patent/CN115311353A/en
Application granted granted Critical
Publication of CN115311353B publication Critical patent/CN115311353B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention relates to a multi-sensor multi-handle controller graph-optimization tightly coupled tracking method and system. The method comprises the following steps: acquiring helmet tracking data and handle tracking data; determining the camera pose of the multi-view camera coordinate system in the world coordinate system from the helmet inertial navigation data in a vision + IMU tightly coupled SLAM mode; determining the handle pose of the handle in the world coordinate system with reference to the camera pose of camera No. 0; constructing a system state quantity based on the handle pose, performing IMU pre-integration from the last system state quantity and the handle inertial navigation data, and determining the initial predicted pose of the handle; extracting and matching the 2D coordinates of the infrared light spots to the 3D model coordinates according to the initial predicted pose of the handle, and determining the initial value of the system state quantity; determining the current system state in a tightly coupled BA (bundle adjustment) graph optimization mode according to the initial value of the system state quantity; and enabling the handle to continuously output the 6DoF pose according to the current system state. The invention enables handle tracking to remain stable and not be lost even at high speed.

Description

Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system
Technical Field
The invention relates to the technical field of SLAM, in particular to a multi-sensor multi-handle controller graph-optimization tightly coupled tracking method and system.
Background
Visual Simultaneous Localization and Mapping (SLAM) refers to the process in which, in a completely unknown environment and with the position of the robot uncertain, a map is created from images acquired by a camera, while the map is simultaneously used for autonomous localization and navigation.
Meanwhile, in the VR/AR/MR fields, a multi-view camera is mostly used to observe marker points of special optical patterns built into the handle controller, such as infrared LED light spots. Combined with the Inertial Measurement Unit (IMU) built into the handle, and mainly through the SLAM-based computer vision techniques referred to above, the motion state of the handle controller in space is captured in real time, i.e. system state quantity information such as the position, attitude and velocity of the handle in space; the motion state, generally comprising position and attitude, is called 6 degrees of freedom (Degree of Freedom, DoF).
First, a problem of some prior art: electromagnetic data carried by the handle's built-in IMU unit can be fused. Although a more accurate attitude and gravity direction are then easier to obtain, objects such as ironware that interfere with the electromagnetic data may exist within the actual use scene, so the tracking result with fused electromagnetic data is unstable, and hardware cost and software computing load are increased instead.
Secondly, when the system state information of the handle is tracked by existing handle tracking methods, the following factors easily arise and greatly affect tracking performance. Because the camera is sensitive to ambient light, the complexity of the ambient light directly affects the imaging quality of the camera and hence the tracking performance of the handle. In actual operation, different angles of the hand-held handle cause several marker points to overlap or stick together on the image, which also affects tracking performance. If the hand-held handle is swung at extreme speed, the acceleration at a sudden stop can exceed one hundred meters per second squared, so that the 2D coordinate position uv of a marker point on the image deviates greatly from the predicted position, causing the handle tracking to fail and the picture to stutter. Therefore, existing pose tracking methods for handle controllers have many limitations, which cause phenomena such as drift, jitter and stutter in the virtual scene, greatly affect the user experience, and struggle with applications that place higher requirements on handle tracking performance, such as the expert mode of Beat Saber in VR.
The instability of such tracking methods has several possible causes: the loose-coupling techniques in the existing handle-tracking field may estimate the tracking pose inaccurately, the IMU may not be initialized accurately and quickly, or the tightly coupled filter approach may not provide a sufficiently accurate pose.
Disclosure of Invention
The invention aims to provide a multi-sensor multi-handle controller graph-optimization tightly coupled tracking method and system, so as to solve the problem of unstable tracking.
In order to achieve the purpose, the invention provides the following scheme:
a multi-sensor multi-handle controller graph optimization tight coupling tracking method comprises the following steps:
acquiring helmet tracking data and handle tracking data; the helmet tracking data comprises images shot by a multi-view camera and helmet inertial navigation data; the handle tracking data comprises images shot by the multi-view camera and handle inertial navigation data;
determining the camera pose of a multi-view camera coordinate system under a world coordinate system according to the helmet inertial navigation data in a vision + IMU tightly coupled SLAM mode;
referring to the camera pose of the camera No. 0, determining the handle pose of the handle in a world coordinate system;
constructing a system state quantity based on the handle pose, performing IMU pre-integration according to the last system state quantity and the handle inertial navigation data, and determining an initial predicted pose of the handle; the system state quantity comprises a 3D vector position of the handle, a 3D vector speed, a gyroscope deviation and an accelerometer deviation;
extracting and matching 2D coordinates of the infrared light spots to 3D model coordinates according to the initial predicted pose of the handle, and determining an initial value of a system state quantity;
determining the current system state by adopting a tightly coupled BA (bundle adjustment) graph optimization mode according to the initial value of the system state quantity; the current system state is a low-frequency handle state;
and enabling the handle to continuously output the 6DoF pose according to the current system state.
Optionally, the determining, in a SLAM manner of tight coupling of vision + IMU, the camera pose of the multi-view camera coordinate system in the world coordinate system according to the helmet inertial navigation data specifically includes:
calibrating internal parameters of the multi-view cameras and external parameters among the multi-view cameras;
acquiring external parameters between a camera No. 0 and a helmet IMU sensor, time delay of the helmet IMU sensor relative to the multi-view camera and internal parameters of the helmet IMU sensor;
and determining the camera pose of the multi-view camera coordinate system under a world coordinate system through a vision + IMU tightly coupled SLAM mode according to the internal parameters of the multi-view camera, the external parameters among the multi-view cameras, the external parameters among the No. 0 camera and the helmet IMU sensor, the delay of the helmet IMU sensor relative to the multi-view camera and the internal parameters of the helmet IMU sensor.
Optionally, the constructing a system state quantity based on the handle pose, performing IMU pre-integration according to the last system state quantity and the handle inertial navigation data, and determining an initial predicted handle pose specifically includes:
pre-integrating gyroscope 3d data and accelerometer 3d data of the handle IMU sensor, determining rotation amount, translation amount and speed variation of the current moment relative to the previous moment, and updating gyroscope deviation and accelerometer deviation;
and determining the initial predicted pose of the handle according to the updated gyroscope deviation and accelerometer deviation.
Optionally, the extracting and matching the 2D coordinate of the infrared light spot to the 3D model coordinate according to the initial predicted pose of the handle to determine the initial value of the system state quantity specifically includes:
and extracting and matching the 2D coordinates of the infrared light spots with the 3D model coordinates by using an N-point perspective algorithm according to the initial predicted pose of the handle, and determining the initial value of the system state quantity.
A multi-sensor multi-handle controller graph optimized tight-coupled tracking system, comprising:
the tracking data acquisition module is used for acquiring helmet tracking data and handle tracking data; the helmet tracking data comprises images shot by the multi-view camera and helmet inertial navigation data; the handle tracking data comprises images shot by the multi-view camera and handle inertial navigation data;
the camera pose determination module is used for determining the camera pose of the multi-view camera coordinate system in a world coordinate system according to the helmet inertial navigation data in a vision + IMU tightly coupled SLAM mode;
the handle pose determining module is used for determining the handle pose of the handle in a world coordinate system by referring to the camera pose of the camera No. 0;
the handle initial prediction pose determining module is used for constructing a system state quantity based on the handle pose, performing IMU pre-integration according to the last system state quantity and the handle inertial navigation data and determining a handle initial prediction pose; the system state quantity comprises a 3D vector position, a 3D vector speed, a gyroscope deviation and an accelerometer deviation of the handle;
the system state quantity initial value determining module is used for extracting and matching 2D coordinates of the infrared light spots to 3D model coordinates according to the initial predicted pose of the handle, and determining a system state quantity initial value;
the system current state determining module is used for determining the current system state by adopting a close-coupled BA diagram optimization mode according to the system state quantity initial value; the current state of the system is a low-frequency handle state;
and the 6DoF pose output module is used for enabling the handle to continuously output the 6DoF pose according to the current system state.
Optionally, the camera pose determination module specifically includes:
the calibration unit is used for calibrating internal parameters of the multi-view cameras and external parameters among the multi-view cameras;
the parameter acquisition unit is used for acquiring external parameters between the No. 0 camera and the helmet IMU sensor, delay of the helmet IMU sensor relative to the multi-view camera and internal parameters of the helmet IMU sensor;
and the camera pose determining unit is used for determining the camera pose of the multi-view camera coordinate system under a world coordinate system through a visual + IMU tightly coupled SLAM mode according to the internal parameters of the multi-view camera, the external parameters among the multi-view cameras, the external parameters among the No. 0 camera and the helmet IMU sensor, the delay of the helmet IMU sensor relative to the multi-view camera and the internal parameters of the helmet IMU sensor.
Optionally, the module for determining the initial predicted pose of the handle specifically includes:
the updating unit is used for pre-integrating the gyroscope 3d data and the accelerometer 3d data of the handle IMU sensor, determining the rotation amount, the translation amount and the speed variation of the current moment relative to the previous moment, and updating the gyroscope deviation and the accelerometer deviation;
and the handle initial prediction pose determining unit is used for determining the handle initial prediction pose according to the updated gyroscope deviation and accelerometer deviation.
Optionally, the system state quantity initial value determining module specifically includes:
and the system state quantity initial value determining unit is used for extracting and matching 2D coordinates of the infrared light spots to 3D model coordinates by using an N-point perspective algorithm according to the initial predicted pose of the handle, and determining the system state quantity initial value.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the invention provides a multi-sensor multi-handle controller graph-optimization tightly coupled tracking method and system. A tightly coupled BA graph optimization mode is used for multi-sensor multi-handle tracking to determine the current system state, and the system states at multiple moments are optimized so that the current system state quantity is more accurate. As a result, the light-spot coordinates can be stably and accurately extracted and matched every time, the computation of front-end point extraction is reduced while its robustness is increased, handle tracking remains stable and is not lost even at high speed, and the Root Mean Square Error (RMSE) accuracy can reach the millimeter level.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a flow chart of a method for optimizing tight coupling tracking of a multi-sensor multi-handle controller graph according to the present invention;
fig. 2 is a schematic diagram illustrating a connection relationship between a helmet, a multi-view camera and a helmet IMU sensor according to the present invention;
FIG. 3 is a schematic diagram of the relationship of the handle, optical sensor and handle IMU sensor provided by the present invention;
fig. 4 is a block diagram of a diagram optimizing a tightly coupled tracking system for a multi-sensor multi-handle controller according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a method and a device for optimizing tight coupling tracking of a multi-sensor multi-handle controller graph, wherein the handle tracking can be stable and cannot be lost even at high speed.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
In the technical field of SLAM, loose coupling means that, when the pose is optimized, the measurement error of only one sensor is optimized independently; tight coupling means that the optimization objective function contains the measurement errors of all the sensors. Typical approaches are the filter approach and the graph optimization approach. The filter approach is a two-stage optimization of the state: first, part of the data sources of the objective function, such as the data of the IMU sensor, are used to propagate (Propagate) and augment (Augment) the state, obtaining a predicted pose for visual matching; then the remaining data sources, such as the visual data, are used to update (Update) the state. Common algorithms include MSCKF, ESKF and the like. The graph optimization approach puts the optimization objective function and the states to be optimized together and optimizes them at the same time, commonly called bundle adjustment (BA) optimization. The states to be optimized are generally called nodes, and the optimization objective function, called the objective function for short, is composed of a plurality of error function blocks, commonly called edges. Common optimization libraries include Ceres, g2o and the like, and common algorithms include the Levenberg-Marquardt and Gauss-Newton methods. Another difference between the two approaches is that the filter optimizes only the state at the current moment, whereas graph optimization can involve the states at more moments and can process more data.
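As an informal illustration of this difference (not part of the disclosure; all names and numbers below are invented for the example), a graph-optimization-style objective stacks the error blocks of several states and solves for all of them at once:

```python
# Illustrative sketch only: a graph-optimization-style objective built from error
# blocks ("edges") over states at several moments ("nodes"), minimized jointly.
# A filter, by contrast, would update only the current state.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 1-D example: three states observed directly (unary edges) and
# constrained pairwise by relative measurements (binary edges).
z_abs = np.array([0.0, 1.1, 2.3])    # absolute observation of each node
z_rel = np.array([1.0, 1.0])         # relative measurement between consecutive nodes

def residuals(x):
    e_abs = x - z_abs                # one error block per absolute observation
    e_rel = (x[1:] - x[:-1]) - z_rel # one error block per relative measurement
    return np.concatenate([e_abs, e_rel])

x0 = np.zeros(3)                       # initial values of all nodes
result = least_squares(residuals, x0)  # all nodes optimized simultaneously
print(result.x)
```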
Fig. 1 is a flowchart of a method for optimizing tight coupling tracking of a multi-sensor multi-handle controller graph provided in the present invention, and as shown in fig. 1, the method for optimizing tight coupling tracking of a multi-sensor multi-handle controller graph includes:
step 101: acquiring helmet tracking data and handle tracking data; the helmet tracking data comprises images shot by the multi-view camera and helmet inertial navigation data; the handle tracking data includes images captured by the multi-view camera and handle inertial navigation data, fig. 2 is a schematic diagram of a connection relationship between the helmet, the multi-view camera and the helmet IMU sensor provided by the present invention, and fig. 3 is a schematic diagram of a relationship between the handle, the optical sensor and the handle IMU sensor provided by the present invention.
In practical application, the pose Twc is obtained as follows:
The internal parameters of the multi-view camera and the external parameters T_cic0 between its cameras (c_i denotes camera No. i) are calibrated by Kalibr-like open-source or self-developed calibration software; the external parameter Tbc_0 between the No. 0 camera and the helmet IMU sensor, the time delay t_d of the IMU relative to the camera, and the internal parameters of the IMU are also obtained.
Based on these internal and external parameters, the pose Twb_slam of the helmet IMU in the world coordinate system W is acquired through open-source or self-developed vision + IMU tightly coupled SLAM, where W is generally a static inertial coordinate system with gravity aligned with one of the xyz axes; then, using the calibrated transform Tb_slam_c0 from the helmet IMU to the No. 0 camera and T_cic0, the pose of each camera of the multi-view camera is obtained as Twc_i = Twb_slam * Tb_slam_c0 * T_cic0^(-1).
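A minimal numpy sketch of this pose chain is given below (variable and function names are assumptions for the example, not identifiers from the disclosure); each pose is a 4x4 homogeneous transform and T_ab maps coordinates in frame b into frame a:

```python
# Sketch of Twc_i = Twb_slam * Tb_slam_c0 * T_cic0^-1 with 4x4 homogeneous transforms.
import numpy as np

def inv_se3(T):
    """Invert a 4x4 rigid-body transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def camera_pose_in_world(T_wb_slam, T_bslam_c0, T_ci_c0):
    """Pose of camera i in the world frame from the helmet IMU pose and the
    calibrated extrinsics (T_ci_c0 maps camera-0 coordinates into camera i)."""
    return T_wb_slam @ T_bslam_c0 @ inv_se3(T_ci_c0)
```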
The main flow of the open-source or self-developed vision + IMU tightly coupled SLAM comprises the following steps:
Extracting key points and descriptors from the multi-view images.
Predicting the pose of the current frame using the helmet IMU data and the last pose Twb_slam, so as to perform multi-view matching between the extracted key points and the historical map points.
Generating new map points from the matches through a triangulation algorithm and projecting them together with the historical map points, so that pixel-level 2d reprojection errors exist between the projections and the extracted key points; meanwhile, a 15d error exists between the IMU integration and the actual relative pose; Twb_slam, vwb_slam, bg_slam and ba_slam are optimized with a nonlinear optimization method to reduce these errors.
This loop of step 101 is repeated in the thread dedicated to SLAM, and the Twc data is passed to the front-end thread of the next step.
Step 102: and determining the camera pose of the multi-view camera coordinate system under a world coordinate system according to the helmet inertial navigation data in a vision + IMU tightly-coupled SLAM mode.
The step 102 specifically includes: calibrating internal parameters of the multi-view cameras and external parameters among the multi-view cameras; acquiring external parameters between a camera No. 0 and a helmet IMU sensor, delay of the helmet IMU sensor relative to the multi-view camera and internal parameters of the helmet IMU sensor; and determining the camera pose of the multi-view camera coordinate system under a world coordinate system through a vision + IMU tightly coupled SLAM mode according to the internal parameters of the multi-view camera, the external parameters among the multi-view cameras, the external parameters among the No. 0 camera and the helmet IMU sensor, the delay of the helmet IMU sensor relative to the multi-view camera and the internal parameters of the helmet IMU sensor.
Step 103: and determining the handle pose of the handle in the world coordinate system by referring to the camera pose of the camera No. 0.
Step 104: constructing a system state quantity based on the handle pose, performing IMU pre-integration according to the last system state quantity and the handle inertial navigation data, and determining an initial predicted pose of the handle; the system state quantities comprise a 3D vector position of the handle, a 3D vector velocity, a gyroscope bias and an accelerometer bias.
The step 104 specifically includes: pre-integrating gyroscope 3d data and accelerometer 3d data of the handle IMU sensor, determining the rotation amount, translation amount and speed variation of the current moment relative to the previous moment, and updating the gyroscope deviation and accelerometer deviation; and determining the initial predicted pose of the handle according to the updated gyroscope deviation and accelerometer deviation.
In practical application, IMU pre-integration is carried out from the known last system state quantity and the handle IMU data to obtain a predicted current handle pose Twb_0. The system state quantity comprises the 3d vector position twb of the handle, its direction or rotation matrix Rwb (twb and Rwb together are called Twb), the 3d vector velocity vwb, the gyroscope deviation bg, the accelerometer deviation ba, the gravitational acceleration g, the calibrated transform Tbc, and the calibrated transform Tbh from the handle IMU coordinate system to the LED light-spot model coordinate system.
The last system state quantity is x_i = [Twb_i, vwb_i, bg_i, ba_i].
The IMU pre-integration is performed on the gyroscope 3d data gyr and the accelerometer 3d data acc of the handle IMU (the last system state quantity is not included in the integration, which is equivalent to integrating relative quantities in a non-inertial frame with gravity g and constant velocity vwb_i). This yields, for the current time t_j relative to the last system time t_i, the rotation amount ΔR_ij(bg_i), the translation amount Δp_ij(bg_i, ba_i) and the velocity variation Δv_ij(bg_i, ba_i), where ΔR_ij(bg_i) is the rotational part of the IMU pre-integration evaluated at the gyroscope bias bg_i, bg_i is the 3d gyroscope bias used for the pre-integration, g is the gravity vector, Δp_ij(bg_i, ba_i) is the translational part of the pre-integration evaluated at the IMU bias b_i = [bg_i, ba_i] (the 6d vector formed by the gyroscope bias bg_i and the 3d accelerometer bias ba_i), Δv_ij(bg_i, ba_i) is the velocity part of the pre-integration evaluated at that bias, i is the frame index of the last system time, and j is the frame index of the current time.
Because bg and ba at the last system time t_i, i.e. bg_i and ba_i, may be updated at the back end, assuming the update amounts are δbg_i and δba_i, more accurate predicted relative quantities can be obtained by a first-order Taylor expansion with respect to the biases:
ΔR_ij(bg_i + δbg_i) ≈ ΔR_ij(bg_i) * Exp((∂ΔR_ij/∂bg) * δbg_i)
Δp_ij(bg_i + δbg_i, ba_i + δba_i) ≈ Δp_ij + (∂Δp_ij/∂bg) * δbg_i + (∂Δp_ij/∂ba) * δba_i
Δv_ij(bg_i + δbg_i, ba_i + δba_i) ≈ Δv_ij + (∂Δv_ij/∂bg) * δbg_i + (∂Δv_ij/∂ba) * δba_i
where the left-hand sides are the first-order Taylor approximations of the pre-integrated quantities at the more accurate biases, Δp_ij and Δv_ij are the corresponding values at the old biases, and ∂ΔR_ij/∂bg, ∂Δp_ij/∂bg, ∂Δp_ij/∂ba, ∂Δv_ij/∂bg, ∂Δv_ij/∂ba are the coefficient matrices, i.e. the Jacobian matrices, corresponding to the first-order terms.
Thereby the initial value of the predicted current handle pose is obtained as Twb_0 = [Rwb_j, twb_j] = [Rwb_i * ΔR_ij, twb_i + Rwb_i * Δp_ij + vwb_i * Δt_ij + (g * Δt_ij^2)/2], where Rwb_j is the rotation of the handle IMU in the world coordinate system W at the current time t_j, twb_j is the corresponding translation at the current time t_j, Rwb_i is the rotation of the handle IMU in the world coordinate system W at the last system time t_i, twb_i is the corresponding translation at the last system time t_i, and vwb_i is the corresponding velocity at the last system time t_i. The initial value of the current handle velocity is updated as vwb_0 = vwb_i + Rwb_i * Δv_ij + g * Δt_ij and used for the next search and matching; meanwhile, these provide initial values for the front-end tightly coupled BA graph optimization.
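A minimal sketch of the bias correction and pose/velocity prediction written out above, assuming the pre-integrated quantities and their bias Jacobians have already been accumulated (all names here are assumptions for the example, not identifiers from the disclosure):

```python
# Sketch of the first-order bias correction and the prediction
# Twb_0 = [Rwb_i*dR_ij, twb_i + Rwb_i*dp_ij + vwb_i*dt + 0.5*g*dt^2],
# vwb_0 = vwb_i + Rwb_i*dv_ij + g*dt.
import numpy as np
from scipy.spatial.transform import Rotation

def correct_preintegration(dR, dp, dv, J_R_bg, J_p_bg, J_p_ba, J_v_bg, J_v_ba, dbg, dba):
    """First-order correction of the pre-integrated quantities for bias updates dbg, dba."""
    dR_new = dR @ Rotation.from_rotvec(J_R_bg @ dbg).as_matrix()
    dp_new = dp + J_p_bg @ dbg + J_p_ba @ dba
    dv_new = dv + J_v_bg @ dbg + J_v_ba @ dba
    return dR_new, dp_new, dv_new

def predict_handle_state(Rwb_i, twb_i, vwb_i, dR_ij, dp_ij, dv_ij, dt,
                         g=np.array([0.0, 0.0, -9.81])):
    """Predict the handle rotation, translation and velocity at the current time."""
    Rwb_j = Rwb_i @ dR_ij
    twb_j = twb_i + Rwb_i @ dp_ij + vwb_i * dt + 0.5 * g * dt**2
    vwb_j = vwb_i + Rwb_i @ dv_ij + g * dt
    return Rwb_j, twb_j, vwb_j
```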
Step 105: and extracting and matching the 2D coordinates of the infrared light spots to the 3D model coordinates according to the initial predicted pose of the handle, and determining the initial value of the system state quantity.
The step 105 specifically includes: and extracting and matching the 2D coordinates of the infrared light spot with the 3D model coordinates by using an N-point perspective algorithm according to the initial predicted pose of the handle, and determining the initial value of the system state quantity.
Step 106: determining the current system state by adopting a tightly coupled BA (bundle adjustment) graph optimization mode according to the initial value of the system state quantity; the current system state is a low-frequency handle state.
Step 107: and enabling the handle to continuously output the 6DoF pose according to the current system state.
In practical application, Twh_0 = Twb_0 * Tb_h is obtained from Twb_0 in the system state quantity and Tb_h; the 2d coordinates uv of the infrared LED light spots are extracted and matched to the 3d model coordinates P, with more accurate matches obtained mainly through the N-point perspective algorithm (PnP); the result then enters the front-end tightly coupled BA graph optimization, which yields the initial value of the system state at time t_i, x_i = [Twb_i, vwb_i, bg_i, ba_i]. The front end is a lightweight thread. At this point the low-frequency handle state x_i is output to the system, and the system performs smoothing filtering and prediction using the handle IMU data, then renders the handle state seen by the user in advance.
Steps 103 to 105 are carried out in an iterative loop in which the front-end thread processes each frame of the handle LED light-spot image data.
Whether the handle LED light-spot image frame is a key frame is then judged; the simplest criterion is that the frame is considered a key frame if more than a certain time, e.g. 0.2 s, has elapsed since the previous key frame. If it is a key frame, it is added to a back-end sliding-window BA graph optimization, the system state x_i at the current time is updated, and the last system state used by the next frame is updated to the system state of the current time.
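The simplest key-frame test mentioned above can be sketched as follows (function and parameter names are assumptions; the 0.2 s threshold is the one given in the text):

```python
# Sketch of the simplest key-frame criterion: a frame is a key frame if more than
# 0.2 s have elapsed since the previous key frame.
def is_keyframe(frame_time_s, last_keyframe_time_s, min_gap_s=0.2):
    return (frame_time_s - last_keyframe_time_s) > min_gap_s
```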
The back end runs the slower threads other than the front end, including but not limited to a BA optimization thread and an IMU initialization thread. The back-end IMU initialization thread initializes the relevant parameters of the IMU: under certain prior conditions, initial values of the IMU deviations bg and ba, the gravity g, the scale s of the light-spot 3d model coordinates and the velocities vwb of the key frames are calculated through mathematical derivation, and finally refined through BA graph optimization and provided to the front end for pose prediction.
Every time a new key frame is added, the back-end process repeatedly and cyclically executes these parts, namely the sliding-window BA optimization and the IMU initialization; once IMU initialization has been executed for all handles, only the sliding-window BA optimization is executed repeatedly.
The BA optimization is carried out to improve the precision of the current system state x_i, in particular making the velocity v and the IMU deviations bg and ba more accurate and smooth. The IMU initialization is performed to make the IMU pre-integration and prediction of step 104 more accurate, so that the matches obtained after the PnP algorithm of step 105 are more accurate and mismatches are reduced; the low-frequency handle state output to the system is therefore more accurate, and jitter is reduced.
The two optimizations work together to reduce handle tracking loss as much as possible. Even if loss occasionally happens, as long as it lasts less than a certain time threshold, e.g. 1 s, it can basically be covered by the predicted state: the state output to the system at that moment is directly the prediction result x_i0, and the last system state used by the next frame i+1 is this x_i0. If the time threshold is exceeded, the handle is judged to be lost, and a pure-rotation 3DoF mode using only the gyroscope data of the IMU is entered.
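This fallback logic can be sketched as follows (a simplified illustration; function and variable names are assumptions, and the 1 s threshold is the example threshold from the text):

```python
# Sketch of the tracking-loss handling: bridge short losses with the predicted state,
# fall back to a gyroscope-only 3DoF mode if the loss lasts longer than the threshold.
def handle_output(optical_track_ok, loss_duration_s, predicted_state,
                  gyro_only_orientation, loss_threshold_s=1.0):
    if optical_track_ok:
        return ("6dof", predicted_state)             # normal tightly coupled output
    if loss_duration_s < loss_threshold_s:
        return ("6dof_predicted", predicted_state)   # short loss covered by prediction
    return ("3dof", gyro_only_orientation)           # pure-rotation mode, gyroscope only
```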
The PnP algorithm is a method for solving the motion of 3D-to-2D point pairs and is used here to eliminate unreasonable 2D-to-3D matches. The main procedure is as follows:
The 3d model coordinate of the k-th LED light spot in the handle model coordinate system H is hPk, so given the pose Twh_0 = Twb_0 * Tb_h of the handle model coordinate system, the 3d coordinate wPk in the world coordinate system can be obtained.
Once several groups of matched 2d coordinates uv exist, the pose Twh_1 of the handle coordinate system can be obtained by the P3P or EPnP methods of the open-source algorithm library OpenCV and compared with the predicted pose Twh_0; if the error exceeds a threshold, the group of matches is discarded.
If the IMU-predicted pose is accurate, the continuously trackable matching relationship assumed initially can be used directly by PnP; otherwise wPk needs to be projected onto the image plane of camera c_i, and if a light spot z can be found within a certain circle around the predicted 2d position uv_0, it is added to the above groups of matches.
Once the PnP result contains a matching pair whose error does not exceed the threshold, then after a timeout or after all the candidate groups of matches have been examined, the group of matches with the smallest fused error of the PnP pose error and the reprojection error under that match is output and used for the reprojection error of the next BA.
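A sketch of the pose check described above using OpenCV's EPnP solver (the translation-error threshold and all names are assumptions for the example; the disclosure does not prescribe these values):

```python
# Sketch: verify a candidate set of 2d-3d matches by solving PnP (EPnP) and comparing
# the resulting handle pose with the predicted pose Twh_0.
import numpy as np
import cv2

def verify_matches(pts3d_h, pts2d, K, dist, Twc, Twh_pred, trans_err_thresh=0.05):
    """pts3d_h: Nx3 light-spot coordinates in the handle model frame H,
    pts2d: Nx2 extracted image coordinates, K/dist: camera intrinsics,
    Twc: camera pose in the world, Twh_pred: predicted 4x4 handle pose."""
    ok, rvec, tvec = cv2.solvePnP(pts3d_h.astype(np.float64), pts2d.astype(np.float64),
                                  K, dist, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return False
    R, _ = cv2.Rodrigues(rvec)
    Tch = np.eye(4)                       # handle model frame H -> camera frame C
    Tch[:3, :3], Tch[:3, 3] = R, tvec.ravel()
    Twh_pnp = Twc @ Tch                   # handle pose in the world frame
    # discard the match set if the PnP pose deviates too much from the prediction
    return np.linalg.norm(Twh_pnp[:3, 3] - Twh_pred[:3, 3]) < trans_err_thresh
```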
The BA graph optimization specifically refers to a nonlinear optimization method, implemented with well-known optimization libraries such as Ceres or g2o (but not limited to them). It is characterized in that the system state quantities, a given error function e and a covariance matrix Cov need to be determined, and the mathematical Jacobian matrix J needs to be calculated in order to accelerate the computation.
A core item to be compared in the error function e is the observation uv of the 3d model coordinate P projected onto the distorted image plane (which, for a fisheye camera, can be regarded as a curved image surface):
e_uv = π(Twc^(-1) * Twb * Tbh * P) - uv
where π() is the projection model of the camera, including the corresponding distortion model.
The 2N-dimensional reprojection error vector uv_all is composed of N such 2d vectors, where N is the number of observed multi-view LED light-spot matches; it is mainly used in the objective function of the BA optimization, whose essence is the Mahalanobis distance of the various error vectors. The main function of the uv error is to optimize the handle position Twb_i as well as possible, while the IMU error constrains the relative states x_{i-1} and x_i and reduces erroneous matches, so that the uv error better reflects the error of Twb_i.
Specifically, the input of each projection function π() in uv_all is obtained by transforming the 3D position P of a handle LED light spot in the model coordinate system H into the handle IMU coordinate system B through the calibrated handle IMU external parameter Tbh, then into the world coordinate system W through the variable Twb to be optimized, and finally into the corresponding camera coordinate system C through the Twc obtained in the first step. The projection function π() converts the 3D coordinates in C into 2D coordinates on the camera image plane with fisheye distortion; the 2D coordinates of the LED extracted at the front end are then subtracted to obtain the 2-dimensional error.
Tight coupling here specifically means that, when the system state quantity is optimized, the error function contains both the visual error, produced by the 2d coordinate positions uv obtained from the visual measurement data together with the influence of the system state quantity, and the IMU error, produced by the IMU pre-integration together with the influence of the system state quantity.
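A sketch of this per-spot error under a plain pinhole projection (the disclosure uses a fisheye/distortion model for π(); that simplification and the variable names are assumptions for the example):

```python
# Sketch of the reprojection error pi(Twc^-1 * Twb * Tbh * P) - uv for one light spot,
# with a plain pinhole projection standing in for the distorted camera model.
import numpy as np

def reprojection_error(uv, P_h, Twb, Tbh, Twc, K):
    """uv: observed 2d spot; P_h: 3d spot in the handle model frame H."""
    P_w = Twb @ Tbh @ np.append(P_h, 1.0)   # model frame H -> IMU frame B -> world W
    P_c = np.linalg.inv(Twc) @ P_w          # world W -> camera frame C
    x, y, z = P_c[:3]
    u = K[0, 0] * x / z + K[0, 2]           # pinhole projection
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v]) - uv            # 2d error block fed into the BA objective
```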
In the tightly coupled BA graph optimization, the core optimization equation is the Levenberg-Marquardt (LM) equation for solving the 15d increment Δx of the state quantity x = [Twb, vwb, bg, ba]:
(H + λI) Δx* = -b
H_ij = Σ ρ'(s) * Jr(x_i)^T * Cov^(-1) * Jr(x_j)
b_i = Σ ρ'(s) * Jr(x_i)^T * Cov^(-1) * r
where λ is the damping factor in the LM method (a smaller λ makes the LM method closer to the Gauss-Newton, GN, method); H is the information matrix of the increment Δx, i.e. the inverse of its covariance matrix, and is an MxM square matrix; I is the identity matrix; b is the total error term corresponding to the different sensors and is an Mx1 vector. H_ij denotes the NixNj block matrix in row i and column j, where Ni is the dimension of the increment Δx_i of the i-th state quantity x_i, and b_i denotes the Nix1 block vector of row i; r(·) denotes the observation error of a sensor, Jr(·)_xi denotes the Jacobian matrix of this error with respect to Δx_i, ρ(s) denotes a robust kernel function, and ρ'(s) denotes its first derivative with respect to the real number s. The concrete optimization therefore amounts to solving a linear equation of large dimension.
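A minimal dense sketch of one such LM step, with H and b accumulated from per-edge residuals, Jacobians and covariances as listed above (the robust kernel is omitted for brevity; a real implementation would rely on a library such as Ceres or g2o, and all names here are assumptions):

```python
# Sketch of one Levenberg-Marquardt step (H + lambda*I) dx = -b, with H and b
# accumulated from error blocks (r, J, Cov); robust weighting omitted for brevity.
import numpy as np

def lm_step(edges, state_dim, lam=1e-3):
    """edges: list of (r, J, Cov) with r: (k,) residual, J: (k, state_dim) Jacobian,
    Cov: (k, k) measurement covariance. Returns the increment dx."""
    H = np.zeros((state_dim, state_dim))
    b = np.zeros(state_dim)
    for r, J, Cov in edges:
        W = np.linalg.inv(Cov)    # information matrix of this error block
        H += J.T @ W @ J          # accumulate the normal-equation matrix
        b += J.T @ W @ r          # accumulate the total error term
    return np.linalg.solve(H + lam * np.eye(state_dim), -b)
```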
Fig. 4 is a structural diagram of the multi-sensor multi-handle controller graph-optimization tightly coupled tracking system provided by the present invention; as shown in fig. 4, the system includes:
a tracking data obtaining module 401, configured to obtain helmet tracking data and handle tracking data; the helmet tracking data comprises images shot by the multi-view camera and helmet inertial navigation data; the handle tracking data includes images taken by the multi-view camera and handle inertial navigation data.
And the camera pose determination module 402 is configured to determine, according to the helmet inertial navigation data, a camera pose of the multi-view camera coordinate system in a world coordinate system in a SLAM mode of visual + IMU tight coupling.
The camera pose determination module 402 specifically includes: the calibration unit is used for calibrating internal parameters of the multi-view camera and external parameters among the multi-view cameras; the parameter acquisition unit is used for acquiring external parameters between the No. 0 camera and the helmet IMU sensor, delay of the helmet IMU sensor relative to the multi-view camera and internal parameters of the helmet IMU sensor; and the camera pose determining unit is used for determining the camera pose of the multi-view camera coordinate system under a world coordinate system through a visual + IMU tightly coupled SLAM mode according to the internal parameters of the multi-view camera, the external parameters among the multi-view cameras, the external parameters among the No. 0 camera and the helmet IMU sensor, the delay of the helmet IMU sensor relative to the multi-view camera and the internal parameters of the helmet IMU sensor.
And a handle pose determining module 403, configured to determine a handle pose of the handle in the world coordinate system by referring to the camera pose of the camera No. 0.
A handle initial prediction pose determination module 404, configured to construct a system state quantity based on the handle pose, perform IMU pre-integration according to the last system state quantity and the handle inertial navigation data, and determine a handle initial prediction pose; the system state quantities comprise a 3D vector position of the handle, a 3D vector velocity, a gyroscope bias and an accelerometer bias.
The initial handle prediction pose determination module 404 specifically includes: the updating unit, used for pre-integrating the gyroscope 3d data and the accelerometer 3d data of the handle IMU sensor, determining the rotation amount, translation amount and speed variation of the current moment relative to the previous moment, and updating the gyroscope deviation and accelerometer deviation; and the handle initial prediction pose determining unit, used for determining the handle initial prediction pose according to the updated gyroscope deviation and accelerometer deviation.
And the system state quantity initial value determining module 405 is configured to extract and match the 2D coordinates of the infrared light spot with the 3D model coordinates according to the initial predicted pose of the handle, and determine a system state quantity initial value.
The system state quantity initial value determining module 405 specifically includes: and the system state quantity initial value determining unit is used for extracting and matching the 2D coordinates of the infrared light spot to the 3D model coordinates by using an N-point perspective algorithm according to the initial predicted pose of the handle, and determining the system state quantity initial value.
A system current state determining module 406, configured to determine a current system state by using a tightly-coupled BA diagram optimization manner according to the system state quantity initial value; the current state of the system is a low-frequency handle state.
And the 6DoF pose output module 407 is configured to enable the handle to continuously output the 6DoF pose according to the current system state.
The scheme of optimizing the system states at multiple moments for the multi-sensor multi-handle case is not limited to the graph optimization mode; the state variable of a filter mode can also be expanded, for example expanding the 15+6n-dimensional state variable of MSCKF to 15n dimensions, so that the effect of simultaneously optimizing the system states at multiple moments can be achieved to a certain extent.
The multithreading scheme for the multi-sensor multi-handle case is likewise not limited to the graph optimization mode; similar effects can also be achieved when the optimization respectively adopts a common filter scheme or the expanded filter scheme described above.
The projection model nodes for the multi-sensor multi-handle case can be expanded to include the internal parameters of the camera, namely the focal length parameters fx and fy, the optical center coordinate parameters cx and cy, the distortion parameters and the like; similar effects can be achieved whether these parameters are fixed or not.
The infrared LED light spot in the present invention, i.e. the aforementioned optical sensor, includes but is not limited to an infrared LED light spot with known 3d model coordinate P; it may also be a visible-light marker with known 3d model coordinate P, or the like.
For multi-sensor multi-handle tracking, the invention uses multiple threads, including but not limited to a front-end thread, a back-end BA optimization thread and an IMU initialization thread belonging to the back end, so that even when the optimization of the current system state or the IMU initialization cannot produce output quickly enough, handle tracking can still continuously and stably output the 6DoF pose.
For multi-sensor multi-handle tracking, the invention also uses a projection model of the observation on the distorted image plane whose number of nodes exceeds the 3 nodes of the conventional projection model, including but not limited to the handle pose Twb, the transformation Tbh from the model coordinate system to the handle coordinate system, the camera pose Twc and the light-spot model P, so that the tracking effect of the handle at different image positions is stable and accurate.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the foregoing, the description is not to be taken in a limiting sense.

Claims (8)

1. A multi-sensor multi-handle controller graph optimization tight coupling tracking method is characterized by comprising the following steps:
acquiring helmet tracking data and handle tracking data; the helmet tracking data comprises images shot by a multi-view camera and helmet inertial navigation data; the handle tracking data comprises images shot by the multi-view camera and handle inertial navigation data;
determining the camera pose of a multi-view camera coordinate system under a world coordinate system according to the helmet inertial navigation data in a vision + IMU tightly coupled SLAM mode;
referring to the camera pose of the camera No. 0, determining the handle pose of the handle in a world coordinate system;
constructing a system state quantity based on the handle pose, performing IMU pre-integration according to the last system state quantity and the handle inertial navigation data, and determining an initial predicted pose of the handle; the system state quantity comprises a 3D vector position, a 3D vector speed, a gyroscope deviation and an accelerometer deviation of the handle;
extracting and matching the 2D coordinates of the infrared light spot to the 3D model coordinates according to the initial predicted pose of the handle, and determining the initial value of the system state quantity;
determining the current system state by adopting a tightly coupled BA (bundle adjustment) graph optimization mode according to the initial value of the system state quantity; the current system state is a low-frequency handle state;
and enabling the handle to continuously output the 6DoF pose according to the current system state.
2. The method for optimizing the close-coupled tracking of the multi-sensor multi-handle controller graph according to claim 1, wherein the determining the camera pose of the multi-view camera coordinate system in the world coordinate system according to the helmet inertial navigation data in a SLAM mode of visual + IMU close coupling specifically comprises:
calibrating internal parameters of the multi-view cameras and external parameters among the multi-view cameras;
acquiring external parameters between a camera No. 0 and a helmet IMU sensor, delay of the helmet IMU sensor relative to the multi-view camera and internal parameters of the helmet IMU sensor;
and determining the camera pose of the multi-view camera coordinate system under a world coordinate system through a vision + IMU tightly coupled SLAM mode according to the internal parameters of the multi-view camera, the external parameters among the multi-view cameras, the external parameters among the No. 0 camera and the helmet IMU sensor, the delay of the helmet IMU sensor relative to the multi-view camera and the internal parameters of the helmet IMU sensor.
3. The multi-sensor multi-handle controller graph optimization close-coupled tracking method according to claim 1, wherein the step of constructing a system state quantity based on the handle pose, performing IMU pre-integration according to the last system state quantity and the handle inertial navigation data, and determining an initial predicted handle pose comprises the steps of:
pre-integrating gyroscope 3d data and accelerometer 3d data of the handle IMU sensor, determining rotation amount, translation amount and speed variation of the current moment relative to the previous moment, and updating gyroscope deviation and accelerometer deviation;
and determining the initial predicted pose of the handle according to the updated gyroscope deviation and accelerometer deviation.
4. The multi-sensor multi-handle controller graph optimization close-coupled tracking method according to claim 1, wherein the extracting and matching 2D coordinates of infrared light spots to 3D model coordinates according to the initial predicted handle pose to determine an initial system state quantity value specifically comprises:
and extracting and matching the 2D coordinates of the infrared light spots with the 3D model coordinates by using an N-point perspective algorithm according to the initial predicted pose of the handle, and determining the initial value of the system state quantity.
5. A multi-sensor multi-handle controller graph optimized tight-coupled tracking system, comprising:
the tracking data acquisition module is used for acquiring helmet tracking data and handle tracking data; the helmet tracking data comprises images shot by the multi-view camera and helmet inertial navigation data; the handle tracking data comprises images shot by the multi-view camera and handle inertial navigation data;
the camera pose determination module is used for determining the camera pose of the multi-camera coordinate system in a world coordinate system according to the helmet inertial navigation data in a vision + IMU tightly-coupled SLAM mode;
the handle pose determination module is used for determining the handle pose of the handle in a world coordinate system by referring to the camera pose of the camera No. 0;
the handle initial prediction pose determining module is used for constructing a system state quantity based on the handle pose, performing IMU pre-integration according to the last system state quantity and the handle inertial navigation data and determining the handle initial prediction pose; the system state quantity comprises a 3D vector position of the handle, a 3D vector speed, a gyroscope deviation and an accelerometer deviation;
the system state quantity initial value determining module is used for extracting and matching 2D coordinates of the infrared light spot to 3D model coordinates according to the initial predicted pose of the handle, and determining a system state quantity initial value;
the system current state determining module is used for determining the current system state by adopting a close-coupled BA diagram optimization mode according to the system state quantity initial value; the current state of the system is a low-frequency handle state;
and the 6DoF pose output module is used for enabling the handle to continuously output the 6DoF pose according to the current system state.
6. The multi-sensor multi-handle controller graph optimization tight-coupling tracking system of claim 5, wherein the camera pose determination module specifically comprises:
the calibration unit is used for calibrating internal parameters of the multi-view cameras and external parameters among the multi-view cameras;
the parameter acquisition unit is used for acquiring external parameters between the No. 0 camera and the helmet IMU sensor, delay of the helmet IMU sensor relative to the multi-view camera and internal parameters of the helmet IMU sensor;
and the camera pose determining unit is used for determining the camera pose of the multi-view camera coordinate system under a world coordinate system through a visual + IMU tightly coupled SLAM mode according to the internal parameters of the multi-view camera, the external parameters among the multi-view cameras, the external parameters among the No. 0 camera and the helmet IMU sensor, the delay of the helmet IMU sensor relative to the multi-view camera and the internal parameters of the helmet IMU sensor.
7. The multi-sensor multi-handle controller graph optimization tight-coupling tracking system of claim 5, wherein the handle initial prediction pose determination module specifically comprises:
the updating unit is used for pre-integrating the gyroscope 3d data and the accelerometer 3d data of the handle IMU sensor, determining the rotation amount, the translation amount and the speed variation of the current moment relative to the previous moment, and updating the gyroscope deviation and the accelerometer deviation;
and the handle initial prediction pose determining unit is used for determining the handle initial prediction pose according to the updated gyroscope deviation and accelerometer deviation.
8. The multi-sensor multi-handle controller graph optimization tight-coupling tracking system according to claim 5, wherein the system state quantity initial value determination module specifically comprises:
and the system state quantity initial value determining unit is used for extracting and matching the 2D coordinates of the infrared light spot to the 3D model coordinates by using an N-point perspective algorithm according to the initial predicted pose of the handle, and determining the system state quantity initial value.
CN202211036999.6A 2022-08-29 2022-08-29 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system Active CN115311353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211036999.6A CN115311353B (en) 2022-08-29 2022-08-29 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211036999.6A CN115311353B (en) 2022-08-29 2022-08-29 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system

Publications (2)

Publication Number Publication Date
CN115311353A true CN115311353A (en) 2022-11-08
CN115311353B CN115311353B (en) 2023-10-10

Family

ID=83864073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211036999.6A Active CN115311353B (en) 2022-08-29 2022-08-29 Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system

Country Status (1)

Country Link
CN (1) CN115311353B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110880189A (en) * 2018-09-06 2020-03-13 舜宇光学(浙江)研究院有限公司 Combined calibration method and combined calibration device thereof and electronic equipment
CN111949123A (en) * 2020-07-01 2020-11-17 青岛小鸟看看科技有限公司 Hybrid tracking method and device for multi-sensor handle controller
CN111983639A (en) * 2020-08-25 2020-11-24 浙江光珀智能科技有限公司 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN112085790A (en) * 2020-08-14 2020-12-15 香港理工大学深圳研究院 Point-line combined multi-camera visual SLAM method, equipment and storage medium
CN112179338A (en) * 2020-09-07 2021-01-05 西北工业大学 Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion
CN113838141A (en) * 2021-09-02 2021-12-24 中南大学 External parameter calibration method and system for single line laser radar and visible light camera
CN114170308A (en) * 2021-11-18 2022-03-11 上海鱼微阿科技有限公司 All-in-one machine pose true value calculating method and device, electronic equipment and storage medium
CN114295127A (en) * 2021-12-21 2022-04-08 上海鱼微阿科技有限公司 RONIN and 6DOF positioning fusion method and hardware system framework
CN114332423A (en) * 2021-12-30 2022-04-12 深圳创维新世界科技有限公司 Virtual reality handle tracking method, terminal and computer-readable storage medium
CN114935975A (en) * 2022-05-13 2022-08-23 歌尔股份有限公司 Multi-user interaction method for virtual reality, electronic equipment and readable storage medium
CN114943773A (en) * 2022-04-06 2022-08-26 阿里巴巴(中国)有限公司 Camera calibration method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王晨曦 (Wang Chenxi): "Research on pose estimation methods based on the fusion of IMU and monocular vision", China Master's Theses Full-text Database, Information Science and Technology, no. 2, pages 20-48 *

Also Published As

Publication number Publication date
CN115311353B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
US11668571B2 (en) Simultaneous localization and mapping (SLAM) using dual event cameras
Qin et al. Vins-mono: A robust and versatile monocular visual-inertial state estimator
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN107990899B (en) Positioning method and system based on SLAM
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN110880189B (en) Combined calibration method and combined calibration device thereof and electronic equipment
Rambach et al. Learning to fuse: A deep learning approach to visual-inertial camera pose estimation
EP2959315B1 (en) Generation of 3d models of an environment
Dong-Si et al. Estimator initialization in vision-aided inertial navigation with unknown camera-IMU calibration
KR100855657B1 (en) System for estimating self-position of the mobile robot using monocular zoom-camara and method therefor
CN112219087A (en) Pose prediction method, map construction method, movable platform and storage medium
JP2011175477A (en) Three-dimensional measurement apparatus, processing method and program
CN111707261A (en) High-speed sensing and positioning method for micro unmanned aerial vehicle
CN111932674A (en) Optimization method of line laser vision inertial system
CN112767546B (en) Binocular image-based visual map generation method for mobile robot
Wang et al. LF-VIO: A visual-inertial-odometry framework for large field-of-view cameras with negative plane
JP6922348B2 (en) Information processing equipment, methods, and programs
Huai et al. Real-time large scale 3D reconstruction by fusing Kinect and IMU data
JP5698815B2 (en) Information processing apparatus, information processing apparatus control method, and program
JP5267100B2 (en) Motion estimation apparatus and program
CN115410233B (en) Gesture attitude estimation method based on Kalman filtering and deep learning
Ling et al. RGB-D inertial odometry for indoor robot via keyframe-based nonlinear optimization
CN111145267A (en) IMU (inertial measurement unit) assistance-based 360-degree panoramic view multi-camera calibration method
KR102456872B1 (en) System and method for tracking hand motion using strong coupling fusion of image sensor and inertial sensor
CN115311353B (en) Multi-sensor multi-handle controller graph optimization tight coupling tracking method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 501, Building 3, No. 1 Jiusong Road, Xinqiao Town, Songjiang District, Shanghai, 2016

Applicant after: Play Out Dreams (Shanghai) Technology Co.,Ltd.

Address before: 201600 Room 501, Building 3, No. 1 Caosung Road, Xinqiao Town, Songjiang District, Shanghai

Applicant before: Shanghai yuweia Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant