CN105931275A - Monocular and IMU fused stable motion tracking method and device based on mobile terminal - Google Patents


Info

Publication number
CN105931275A
Authority
CN
China
Prior art keywords
pose
camera
imu
tracking
image
Prior art date
Legal status
Pending
Application number
CN201610346191.6A
Other languages
Chinese (zh)
Inventor
邓欢军
方维
乔羽
李�根
古鉴
Current Assignee
Beijing Storm Mirror Technology Co Ltd
Original Assignee
Beijing Storm Mirror Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Storm Mirror Technology Co Ltd filed Critical Beijing Storm Mirror Technology Co Ltd
Priority to CN201610346191.6A priority Critical patent/CN105931275A/en
Publication of CN105931275A publication Critical patent/CN105931275A/en


Abstract

The invention discloses a stable motion tracking method and device based on fusion of monocular vision and an IMU on a mobile terminal, belonging to the technical field of AR/VR motion tracking. The method comprises the following steps: judging whether the number of tracked feature points in the current frame of an image is greater than a preset threshold; if so, performing feature point tracking with an optical flow method to obtain the current pose of the camera; if not, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera; performing Kalman filtering on the current pose of the camera to obtain a visual pose; obtaining the acceleration and angular velocity values produced by the IMU in three-dimensional space and integrating them to obtain the IMU pose; and fusing the visual pose and the IMU pose with a Kalman filter and performing motion tracking. Compared with the prior art, more stable and faster motion tracking is obtained on mobile terminal devices.

Description

Stable motion tracking method and device based on fusion of monocular vision and an IMU on a mobile terminal
Technical field
The present invention relates to the technical field of AR/VR motion tracking, and in particular to a stable motion tracking method and device based on fusion of monocular vision and an IMU on a mobile terminal.
Background art
Motion tracking technology aims to measure, track, and record the trajectory of an object in three-dimensional space. It mainly acquires information about the moving scene through sensors and computes the pose of the tracked object in space in real time. It is chiefly applied in AR (Augmented Reality)/VR (Virtual Reality), wearable devices, robotics, and autonomous driving and navigation. At present, mobile AR/VR interaction mainly relies on handheld controllers, and during interaction only the phone's gyroscope is used for rotational tracking. Since Nister first proposed the concept of visual odometry in 2004, methods based on visual odometry have become the mainstream for real-time pose estimation and motion tracking: by estimating the camera's incremental motion in space, the camera's trajectory in time and space is determined.
The mainstream motion tracking methods at present are visual tracking methods based on monocular or binocular cameras. Monocular visual tracking, owing to its low equipment cost, is widely available on current mobile platforms such as phones and tablets and has therefore received increasing attention. However, due to cost constraints, the camera frame rate of current mainstream phones is relatively low and the image sensor noise is relatively large, so adaptability to the environment during motion tracking is poor. Monocular motion tracking on mobile terminals currently suffers from deficiencies both at the sensor level and in principle. At the sensor level, limited by the image quality and frame rate of mobile terminals, tracking tends to fail when the scene contains few feature points, and fast device motion causes motion blur, which likewise leads to tracking failure. In principle, monocular motion tracking can only describe the relative motion trend of the camera incrementally and carries no absolute scale information. These two aspects greatly constrain practical applications on mobile terminals.
Summary of the invention
The technical problem to be solved by the present invention is to provide a stable motion tracking method and device, based on fusion of monocular vision and an IMU on a mobile terminal, that achieve more stable and faster motion tracking on mobile terminal devices.
To solve the above technical problem, the present invention provides the following technical solutions:
A stable motion tracking method based on fusion of monocular vision and an IMU on a mobile terminal, comprising:
Acquiring an image;
Judging whether the number of tracked feature points in the current frame of the image is greater than a preset threshold; if so, performing feature point tracking with an optical flow method to obtain the current pose of the camera; if not, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera;
Performing Kalman filtering on the current pose of the camera to obtain a visual pose;
Obtaining the acceleration and angular velocity values produced by the IMU in three-dimensional space, and integrating them to obtain an IMU pose prediction;
Fusing the visual pose and the IMU pose prediction with a Kalman filter, and performing motion tracking according to the fused pose information.
Further, performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Computing, from the set of corresponding matched feature points in two adjacent frames of the image, the fundamental matrix between the two frames;
Computing the essential matrix from the fundamental matrix and the camera intrinsics;
Recovering the relative pose between adjacent frames from the essential matrix by SVD;
Multiplying the relative pose by the absolute pose of the camera at the previous frame to obtain the current pose of the camera.
Further, performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Taking a patch of a certain size around each successfully tracked feature point, applying the SSD image correlation measure, and removing the feature points that do not meet the threshold.
Further, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera comprises:
Obtaining feature points in the current frame with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, matching them against the initial frame, and directly computing the transformation matrix from the initial frame to the current frame;
Multiplying the transformation matrix by the absolute pose of the camera at the initial frame to obtain the current pose of the camera.
Further, performing Kalman filtering on the current pose of the camera to obtain the visual pose comprises:
Step 1: for each frame of the image, performing Kalman filtering on the accumulated optical-flow result and the direct feature-matching result to obtain a more accurate current camera pose, and iterating continuously;
Step 2: computing the current covariance estimate $P_k^-$ with the time-update equations of the discrete Kalman filter:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}$$
$$P_k^- = AP_{k-1}A^T + Q$$

where $\hat{x}_{k-1}$ is the camera pose computed by the optical flow method, $A$ is the state transition matrix, $B$ is the control gain, $P_{k-1}$ is the covariance estimate of the previous frame, and $Q$ is the process noise covariance matrix;
Step 3: computing the Kalman gain $K_k$ from the observation equations:

$$z_k = Hx_k + v_k$$
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$

where $z_k$ is the camera pose computed by the FAST feature point matching algorithm, $H$ is the observation matrix, $v_k$ denotes the observation noise, and $R$ is the covariance matrix of the observation noise;
Step 4: updating the system according to the state-update equations of the discrete Kalman filter:

$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H) P_k^-$$
A stable motion tracking device based on fusion of monocular vision and an IMU on a mobile terminal, comprising:
An acquisition module, for acquiring an image;
A visual tracking module, for judging whether the number of tracked feature points in the current frame of the image is greater than a preset threshold; if so, performing feature point tracking with an optical flow method to obtain the current pose of the camera; if not, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera;
A filtering module, for performing Kalman filtering on the current pose of the camera to obtain a visual pose;
An IMU pose computing module, for obtaining the acceleration and angular velocity values produced by the IMU in three-dimensional space and integrating them to obtain an IMU pose prediction;
A fusion module, for fusing the visual pose and the IMU pose prediction with a Kalman filter and performing motion tracking according to the fused pose information.
Further, performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Computing, from the set of corresponding matched feature points in two adjacent frames of the image, the fundamental matrix between the two frames;
Computing the essential matrix from the fundamental matrix and the camera intrinsics;
Recovering the relative pose between adjacent frames from the essential matrix by SVD;
Multiplying the relative pose by the absolute pose of the camera at the previous frame to obtain the current pose of the camera.
Further, performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Taking a patch of a certain size around each successfully tracked feature point, applying the SSD image correlation measure, and removing the feature points that do not meet the threshold.
Further, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera comprises:
Obtaining feature points in the current frame with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, matching them against the initial frame, and directly computing the transformation matrix from the initial frame to the current frame;
Multiplying the transformation matrix by the absolute pose of the camera at the initial frame to obtain the current pose of the camera.
Further, the filtering module further comprises:
A Kalman filtering module, for performing, for each frame of the image, Kalman filtering on the accumulated optical-flow result and the direct feature-matching result to obtain a more accurate current camera pose, and iterating continuously;
A first computing module, for computing the current covariance estimate $P_k^-$ with the time-update equations of the discrete Kalman filter:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}$$
$$P_k^- = AP_{k-1}A^T + Q$$

where $\hat{x}_{k-1}$ is the camera pose computed by the optical flow method, $A$ is the state transition matrix, $B$ is the control gain, $P_{k-1}$ is the covariance estimate of the previous frame, and $Q$ is the process noise covariance matrix;
A second computing module, for computing the Kalman gain $K_k$ from the observation equations:

$$z_k = Hx_k + v_k$$
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$

where $z_k$ is the camera pose computed by the FAST feature point matching algorithm, $H$ is the observation matrix, $v_k$ denotes the observation noise, and $R$ is the covariance matrix of the observation noise;
An update system module, for updating the system according to the state-update equations of the discrete Kalman filter:

$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H) P_k^-$$
The present invention has the following beneficial effects:
In the present invention, to address the problems of current mobile-terminal tracking, the monocular pipeline is built mainly on the fast FAST algorithm and the optical flow method, while the IMU (Inertial Measurement Unit) hardware already present in mobile terminals is fused in, so that stable motion tracking based on monocular-IMU fusion is achieved without resorting to external equipment. The fast FAST algorithm and the optical flow method process quickly and enable real-time tracking; fusing feature point matching with optical flow tracking gives visual tracking results more accurate than either traditional method alone; fusing vision and IMU data under an EKF framework combines the respective advantages of the camera and the inertial sensor to achieve fast and accurate pose estimation. Furthermore, by exploiting the stability and high frame rate of the IMU data, the present invention effectively overcomes tracking failures caused by insufficient image feature points, motion blur, and the like. While vision provides stable tracking, Kalman filtering of the visually obtained pose yields a smoother and more accurate spatial pose of the mobile terminal. Meanwhile, the accurate camera pose obtained by Kalman filtering is used to correct the IMU data, reducing the impact of IMU drift on accuracy. Finally, Kalman filtering fuses the poses obtained from the IMU and the monocular camera, achieving stable motion tracking while also estimating the scale of the monocular reconstruction. Compared with the prior art, the present invention achieves more stable and faster motion tracking on mobile terminal devices.
Brief description of the drawings
Fig. 1 is a flow diagram of the stable motion tracking method of the present invention based on fusion of monocular vision and an IMU on a mobile terminal;
Fig. 2 is a schematic diagram of the visual pose computation and the Kalman filtering principle of the method of the present invention;
Fig. 3 is a schematic diagram of the Kalman fusion principle of the visual pose and the IMU pose in the method of the present invention;
Fig. 4 is a schematic diagram of the coordinate systems used by the method of the present invention;
Fig. 5 is a schematic diagram of the monocular vision and IMU system of the method of the present invention;
Fig. 6 is the overall flow block diagram of the technical solution of the present invention;
Fig. 7 is a schematic structural diagram of the stable motion tracking device of the present invention based on fusion of monocular vision and an IMU on a mobile terminal.
Detailed description of the invention
To make the technical problem to be solved, the technical solution, and the advantages of the present invention clearer, they are described in detail below in conjunction with the accompanying drawings and specific embodiments.
In one aspect, the present invention provides a stable motion tracking method based on fusion of monocular vision and an IMU on a mobile terminal, as shown in Fig. 1, comprising:
Step S101: acquiring an image;
Step S102: judging whether the number of tracked feature points in the current frame of the image is greater than a preset threshold; if so, performing feature point tracking with an optical flow method to obtain the current pose of the camera; if not, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera;
This step is the monocular visual motion tracking process. Before motion tracking on the monocular visual image, considering the relatively weak computing capability of mobile terminals for image feature detection, the fast FAST feature detection operator is used, combined with optical flow tracking. For the image frame sequence $I_0, \ldots, I_k, I_{k+1}, \ldots, I_{k+n}, \ldots$: frames $I_0, \ldots, I_k$ use the FAST feature detection operator, descriptors are computed with the BRIEF algorithm and matched, until frame $I_k$ is successfully matched with the number of matched point pairs exceeding the threshold; initialization is then successful.
In this step, the detailed process of monocular motion tracking on the visual image can be:
The position corresponding to the first frame is taken as the origin, with the camera pose of the first frame being $[I\,|\,0]$ and the absolute camera pose of frame $I_k$ being $[R_{(0,k)}\,|\,t_{(0,k)}]$. During tracking of subsequent frames, to improve processing efficiency, feature points are tracked between consecutive frames with the optical flow method. When, after $n$ frames, the number of tracked feature points in image $I_{k+n}$ falls below the threshold, to guarantee robust tracking, the FAST feature detection operator is reapplied to frame $I_{k+n}$ to obtain more feature points for continued tracking, and descriptors are computed with the BRIEF algorithm and matched against frame $I_k$ to obtain a more accurate pose. This tracking scheme balances, to a certain extent, the efficiency and the robustness of monocular visual tracking.
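For illustration only, the hybrid decision described above (optical flow while enough points survive, FAST+BRIEF re-detection otherwise) can be sketched in Python with OpenCV. This is a minimal sketch, not the claimed implementation: the threshold value MIN_TRACKED and the FAST threshold are assumptions, and the BRIEF extractor requires the opencv-contrib build.

```python
import cv2
import numpy as np

MIN_TRACKED = 100  # assumed value of the preset feature-point threshold

fast = cv2.FastFeatureDetector_create(threshold=20)  # FAST threshold is illustrative
# BRIEF lives in the opencv-contrib build (xfeatures2d module)
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track_step(prev_gray, gray, prev_pts, key_desc):
    """Track frame-to-frame with pyramidal LK optical flow while enough points
    survive; otherwise re-detect FAST corners and match BRIEF descriptors
    against the reference keyframe. Returns points to carry into the next frame."""
    if prev_pts is not None and len(prev_pts) > MIN_TRACKED:
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        return nxt[status.ravel() == 1].reshape(-1, 1, 2)
    # Too few tracked points: FAST detection + BRIEF matching to the keyframe
    kp = fast.detect(gray, None)
    kp, desc = brief.compute(gray, kp)
    matches = matcher.match(key_desc, desc)
    return np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
```

Optical flow avoids descriptor computation on most frames, which is why the embodiment prefers it whenever enough feature points survive.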
Step S103: performing Kalman filtering on the current pose of the camera to obtain the visual pose;
In this step, Kalman filtering is an algorithm that uses the state equation of a linear system to optimally estimate the system state from input and output observations. Kalman filtering effectively filters out the influence of noise and disturbances in the system, thereby improving the stability of camera tracking.
Step S104: obtaining the acceleration and angular velocity values produced by the IMU in three-dimensional space, and integrating them to obtain an IMU pose prediction;
In this step, the IMU (Inertial Measurement Unit) is a device that measures the three-axis attitude angular velocity (or angular rate) and acceleration of an object. In general, an IMU contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration signals of the object along the three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system. In this step, the IMU produces acceleration and angular velocity values along three perpendicular axes, which are integrated to predict the pose, while the monocular vision sensor of the mobile device provides scale-free 3D position and pose measurements. Between consecutive frames, the IMU data are used for pose prediction, and the visual pose estimate of the later frame serves as the measurement for the update.
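As a sketch of what "integrating the acceleration and angular velocity" means here, the following dead-reckoning step is illustrative only; it assumes a gravity-aligned world frame, [x, y, z, w] quaternion storage, and a simple Euler integrator, none of which is specified by the patent.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (assumed convention)

def quat_mul(q, r):
    """Hamilton product of quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = r
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def quat_to_rot(q):
    """Rotation matrix (body -> world) from a unit quaternion [x, y, z, w]."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def integrate_imu(p, v, q, gyro, accel, dt):
    """One Euler step: rotate the body-frame acceleration into the world frame,
    compensate gravity, integrate velocity and position, and apply a
    small-angle quaternion update for the gyroscope reading."""
    a_world = quat_to_rot(q) @ accel + GRAVITY
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt**2
    dq = np.concatenate([0.5 * gyro * dt, [1.0]])  # small-angle increment
    q_new = quat_mul(q, dq)
    return p_new, v_new, q_new / np.linalg.norm(q_new)
```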
Step S105: fusing the visual pose and the IMU pose prediction with a Kalman filter, and performing motion tracking according to the fused pose information;
In this step, to obtain a stable tracking pose and make full use of the information acquired by the vision sensor and the IMU, the present invention fuses, by a Kalman fusion method, the visual pose obtained from the visual image with the pose prediction obtained by integrating the IMU, realizing information complementarity and target state estimation across the two heterogeneous sensors and thus obtaining a more accurate and reliable pose after fusion. Motion tracking is then performed according to the fused pose information.
In the present invention, to address the problems of current mobile-terminal tracking, the monocular pipeline is built mainly on the fast FAST algorithm and the optical flow method, while the IMU (Inertial Measurement Unit) hardware already present in mobile terminals is fused in, so that stable motion tracking based on monocular-IMU fusion is achieved without resorting to external equipment. The fast FAST algorithm and the optical flow method process quickly and enable real-time tracking; fusing feature point matching with optical flow tracking gives visual tracking results more accurate than either traditional method alone; fusing vision and IMU data under an EKF framework combines the respective advantages of the camera and the inertial sensor to achieve fast and accurate pose estimation. Furthermore, by exploiting the stability and high frame rate of the IMU data, the present invention effectively overcomes tracking failures caused by insufficient image feature points, motion blur, and the like. While vision provides stable tracking, Kalman filtering of the visually obtained pose yields a smoother and more accurate spatial pose of the mobile terminal. Meanwhile, the accurate camera pose obtained by Kalman filtering is used to correct the IMU data, reducing the impact of IMU drift on accuracy. Finally, Kalman filtering fuses the poses obtained from the IMU and the monocular camera, achieving stable motion tracking while also estimating the scale of the monocular reconstruction. Compared with the prior art, the present invention achieves more stable and faster motion tracking on mobile terminal devices.
As an improvement of the present invention, performing feature point tracking with the optical flow method in step S102 to obtain the current pose of the camera comprises:
Computing, from the set of corresponding matched feature points in two adjacent frames of the image, the fundamental matrix between the two frames;
Computing the essential matrix from the fundamental matrix and the camera intrinsics;
Recovering the relative pose between adjacent frames from the essential matrix by SVD;
Multiplying the relative pose by the absolute pose of the camera at the previous frame to obtain the current pose of the camera.
For the above improvement of the optical-flow feature point tracking method, the present invention provides a more concrete embodiment as follows:
From the set of corresponding matched feature points $(X_L, X_R)$ in the two consecutive frames during tracking, the basic methods of computer vision give the correspondence relation $X_L^T F X_R = 0$, from which the fundamental matrix $F$ between the two images can be computed.
The fundamental matrix $F$ and the essential matrix $E$ are related by $E = K_L^T F K_R$, where $(K_L, K_R)$ are the camera intrinsic parameters; in the monocular system of this mobile terminal, the intrinsics can be calibrated in advance and $K_L = K_R$.
From the obtained essential matrix $E$, SVD recovers the relative pose $[R_{(k,k+1)}\,|\,t_{(k,k+1)}]$ between adjacent frames.
With the position of the first frame as the origin, multiplying this relative pose $[R_{(k,k+1)}\,|\,t_{(k,k+1)}]$ by the absolute camera pose $[R_{(0,k)}\,|\,t_{(0,k)}]$ of the previous frame yields the absolute pose $[R_{(0,k+1)}\,|\,t_{(0,k+1)}]$ of the current camera.
In this way, as the camera moves, the present invention obtains in turn the relative pose corresponding to each frame, and thereby the absolute pose.
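One way to realize this epipolar chain with OpenCV is sketched below; cv2.findEssentialMat estimates E directly from the matched points and the calibrated intrinsics K (for identical left/right intrinsics this is equivalent to computing F and applying $E = K^T F K$), and cv2.recoverPose performs the SVD-based decomposition and cheirality check internally. The homogeneous 4x4 composition is our own convention, not the patent's notation.

```python
import cv2
import numpy as np

def relative_pose(pts_prev, pts_cur, K):
    """Recover [R|t] between adjacent frames from matched points.
    t is known only up to scale, as is inherent to monocular vision."""
    E, inliers = cv2.findEssentialMat(pts_prev, pts_cur, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_cur, K, mask=inliers)
    return R, t

def compose(T_abs_prev, R_rel, t_rel):
    """Chain the relative pose onto the previous absolute pose (4x4 matrices)."""
    T_rel = np.eye(4)
    T_rel[:3, :3] = R_rel
    T_rel[:3, 3] = t_rel.ravel()
    return T_abs_prev @ T_rel
```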
As another improvement of the present invention, performing feature point tracking with the optical flow method in step S102 to obtain the current pose of the camera comprises:
Taking a patch of a certain size around each successfully tracked feature point, applying the SSD image correlation measure, and removing the feature points that do not meet the threshold.
In the present invention, during optical flow tracking, once the feature points and descriptors of frame $I_k$ have been computed, frame $I_{k+1}$ tracks the feature points with the optical flow method. To guarantee the correctness of the optical-flow-tracked feature points, an 8×8 patch is taken around each successfully tracked feature point, the SSD image correlation measure is applied, and the feature points that do not meet the threshold are removed, improving the accuracy of optical flow tracking.
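Illustratively, the 8×8 SSD check can be sketched as below; the numerical rejection threshold SSD_MAX is an assumption, since the embodiment does not specify one.

```python
import numpy as np

PATCH = 8          # patch size used in the embodiment
SSD_MAX = 1500.0   # assumed rejection threshold (not specified numerically)

def extract_patch(img, pt):
    """8x8 pixel block centered on the (rounded) point, or None near borders."""
    x, y = int(round(pt[0])), int(round(pt[1]))
    h = PATCH // 2
    if x - h < 0 or y - h < 0 or y + h > img.shape[0] or x + h > img.shape[1]:
        return None
    return img[y - h:y + h, x - h:x + h].astype(np.float32)

def ssd_filter(img_prev, img_cur, pts_prev, pts_cur):
    """Keep only the tracked points whose patches agree under SSD."""
    keep = []
    for i, (p0, p1) in enumerate(zip(pts_prev, pts_cur)):
        a, b = extract_patch(img_prev, p0), extract_patch(img_cur, p1)
        if a is not None and b is not None and np.sum((a - b) ** 2) <= SSD_MAX:
            keep.append(i)
    return keep
```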
As a further improvement of the present invention, obtaining feature points with the FAST feature detection operator in step S102, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera comprises:
Obtaining feature points in the current frame with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, matching them against the initial frame, and directly computing the transformation matrix from the initial frame to the current frame;
Multiplying the transformation matrix by the absolute pose of the camera at the initial frame to obtain the current pose of the camera.
In the present invention, to guarantee the accuracy and smoothness of the real-time camera pose obtained by the above tracking and to facilitate the application of motion tracking in the AR/VR field, when the number of feature points successfully tracked by the optical flow method after $n$ frames falls below the threshold, frame $I_{k+n}$ reapplies the FAST feature detection operator, computes BRIEF descriptors, and matches against frame $I_k$; the transformation matrix $[R_{(k,k+n)}\,|\,t_{(k,k+n)}]$ from frame $I_k$ to frame $I_{k+n}$ is computed directly and multiplied by the absolute camera pose $[R_{(0,k)}\,|\,t_{(0,k)}]'$ of frame $I_k$, yielding an accurate absolute camera pose $[R_{(0,k+n)}\,|\,t_{(0,k+n)}]'$ of frame $I_{k+n}$.
As a further improvement of the present invention, with reference to the schematic diagram of Fig. 2, step S103 comprises:
Step 1: for each frame of the image, performing Kalman filtering on the accumulated optical-flow result and the direct feature-matching result to obtain a more accurate current camera pose, and iterating continuously;
Step 2: computing the current covariance estimate $P_k^-$ with the time-update equations of the discrete Kalman filter:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}$$
$$P_k^- = AP_{k-1}A^T + Q$$

where $\hat{x}_{k-1}$ is the camera pose computed by the optical flow method, $A$ is the state transition matrix, $B$ is the control gain, $P_{k-1}$ is the covariance estimate of the previous frame, and $Q$ is the process noise covariance matrix;
Step 3: computing the Kalman gain $K_k$ from the observation equations:

$$z_k = Hx_k + v_k$$
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$

where $z_k$ is the camera pose computed by the FAST feature point matching algorithm, $H$ is the observation matrix, $v_k$ denotes the observation noise, and $R$ is the covariance matrix of the observation noise;
Step 4: updating the system according to the state-update equations of the discrete Kalman filter:

$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H) P_k^-$$
In the present invention, Kalman filtering of the monocular camera pose improves camera tracking stability and also provides a more accurate and smoother pose measurement $[R_{(0,k)}\,|\,t_{(0,k)}]''$ for the subsequent fusion of the IMU and the camera.
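The four steps above form one cycle of a standard discrete Kalman filter; a minimal numpy sketch follows, in which the optical-flow pose drives the prediction and the FAST-matched pose enters as the measurement $z_k$. The state layout, dimensions, and control input are placeholders, not the patent's parameterization.

```python
import numpy as np

class PoseCameraKF:
    """Discrete Kalman filter over the camera pose, following steps 1-4 above."""
    def __init__(self, A, B, H, Q, R, x0, P0):
        self.A, self.B, self.H, self.Q, self.R = A, B, H, Q, R
        self.x, self.P = x0, P0

    def predict(self, u):
        # Time update: x_k^- = A x_{k-1} + B u_{k-1};  P_k^- = A P_{k-1} A^T + Q
        self.x = self.A @ self.x + self.B @ u
        self.P = self.A @ self.P @ self.A.T + self.Q

    def update(self, z):
        # Gain: K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        # State update: x_k = x_k^- + K_k (z_k - H x_k^-);  P_k = (I - K_k H) P_k^-
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x
```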
In the present invention, the Kalman fusion of the visual pose and the IMU pose can be implemented in many ways known to those skilled in the art; preferably, the present invention can refer to the following embodiment:
A schematic flow diagram of the vision-IMU fusion is shown in Fig. 3. For convenience of description, the subscripts w, i, v, c denote the world coordinate system, the IMU coordinate system, the visual coordinate system, and the camera coordinate system, respectively. The coordinate systems are defined as shown in Fig. 4.
1) Assume the inertial measurements contain a specific bias $b$ and white Gaussian noise $n$; the true angular velocity $\omega$ and true acceleration $a$ are then:

$$\omega = \omega_m - b_\omega - n_\omega, \qquad a = a_m - b_a - n_a$$

where the subscript $m$ denotes a measured value, and the dynamic bias $b$ is modeled as a random process:

$$\dot{b}_\omega = n_{b_\omega}, \qquad \dot{b}_a = n_{b_a}$$

The filter state includes the position $p_{wi}$ of the IMU in the world coordinate system, the velocity $v_{wi}$ and attitude quaternion $q_{wi}$ of the world coordinate system relative to the IMU coordinate system, the gyroscope and accelerometer biases $b_\omega, b_a$, the visual scale factor $\lambda$, and the calibrated rotation $q_{ic}$ and translation $p_{ic}$ between the IMU and the camera. This yields a 24-element state vector $X$:

$$X = \{\, p_{wi}^T \;\; v_{wi}^T \;\; q_{wi}^T \;\; b_\omega^T \;\; b_a^T \;\; \lambda \;\; p_{ic}^T \;\; q_{ic}^T \,\}$$
2) In the above state representation, the attitude is described by quaternions. In this case, quaternion errors are used to represent the error and its covariance, which improves numerical stability and gives a minimal representation. A 22-element error state vector is therefore defined:

$$\tilde{x} = \{\, \Delta p_{wi}^T \;\; \Delta v_{wi}^T \;\; \delta\theta_{wi}^T \;\; \Delta b_\omega^T \;\; \Delta b_a^T \;\; \Delta\lambda \;\; \Delta p_{ic}^T \;\; \delta\theta_{ic}^T \,\}$$

Considering an estimate $\hat{x}$ and its true value $x$, e.g. $\tilde{x} = x - \hat{x}$, this convention is applied to all state variables except the quaternions, whose errors are defined as:

$$\delta q_{wi} = q_{wi} \otimes \hat{q}_{wi}^{-1} \approx \begin{bmatrix} \tfrac{1}{2}\delta\theta_{wi}^T & 1 \end{bmatrix}^T, \qquad \delta q_{ic} = q_{ic} \otimes \hat{q}_{ic}^{-1} \approx \begin{bmatrix} \tfrac{1}{2}\delta\theta_{ic}^T & 1 \end{bmatrix}^T$$
The linearized continuous-time error-state equation is thus obtained:

$$\dot{\tilde{x}} = F_c \tilde{x} + G_c n$$

where $n$ is the noise vector. In the present solution particular attention is paid to the speed of the algorithm; to this end, $F_c$ and $G_c$ are assumed constant over the integration interval between two adjacent states, and the system is discretized as:

$$F_d = \exp(F_c \Delta t) = I_d + F_c \Delta t + \tfrac{1}{2} F_c^2 \Delta t^2 + \ldots$$

Meanwhile, the discrete-time noise covariance matrix $Q_d$ is obtained by integration:

$$Q_d = \int_{\Delta t} F_d(\tau)\, G_c Q_c G_c^T\, F_d(\tau)^T \, d\tau$$
With the computed $F_d$ and $Q_d$, the state covariance matrix is propagated according to the Kalman filter:

$$P_{k+1|k} = F_d P_{k|k} F_d^T + Q_d$$
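A short sketch of this discretized propagation, truncating the matrix exponential at second order and approximating the $Q_d$ integral by the trapezoidal rule (both simplifications are ours, not the patent's), is:

```python
import numpy as np

def propagate_covariance(P, Fc, Gc, Qc, dt):
    """Discretize the continuous error-state model and propagate P.
    Fd = I + Fc*dt + 0.5*(Fc*dt)^2        (series truncated at 2nd order)
    Qd ~ trapezoidal approximation of the integral of Fd Gc Qc Gc^T Fd^T."""
    n = P.shape[0]
    Fd = np.eye(n) + Fc * dt + 0.5 * (Fc @ Fc) * dt**2
    GQG = Gc @ Qc @ Gc.T
    Qd = 0.5 * dt * (Fd @ GQG @ Fd.T + GQG)
    return Fd @ P @ Fd.T + Qd
```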
3) For the camera position measurement $p_{vc}$, the absolute pose obtained from the monocular visual tracking provides the corresponding measured position, giving the measurement model:

$$z_p = p_{vc} = C(q_{vw})^T \left( p_{wi} + C(q_{wi})^T p_{ic} \right) \lambda + n_p$$

where $C(q_{wi})$ is the rotation matrix of the IMU attitude in the world coordinate system, and $C(q_{vw})$ is the rotation matrix of the visual coordinate system relative to the world coordinate system.
4) The position measurement error model is defined as:

$$\tilde{z}_p = z_p - \hat{z}_p = C(q_{vw})^T \left( p_{wi} + C(q_{wi})^T p_{ic} \right) \lambda + n_p - C(q_{vw})^T \left( \hat{p}_{wi} + C(\hat{q}_{wi})^T \hat{p}_{ic} \right) \hat{\lambda}$$

and the rotation measurement error model as:

$$\tilde{z}_q = z_q - \hat{z}_q = H_{q_{wi}}\, \delta q_{wi} = H_{q_{ic}}\, \delta q_{ic}$$

where $H_{q_{wi}}$ and $H_{q_{ic}}$ are the measurement matrices of the error states $\delta q_{wi}$ and $\delta q_{ic}$, respectively. Finally, the full measurement matrix can be assembled as:

$$\begin{bmatrix} \tilde{z}_p \\ \tilde{z}_q \end{bmatrix} = \begin{bmatrix} H_p & 0_{3\times 6} & \tilde{H}_{q_{wi}} & 0_{3\times 10} & \tilde{H}_{q_{ic}} \end{bmatrix} \tilde{x}$$
5) Once the measurement matrix $H$ is obtained, the update proceeds according to the standard steps of the Kalman filter.
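For illustration, one such error-state measurement update can be sketched as below; the `inject` callback, which folds the estimated error state back into the nominal state (multiplicatively for the quaternions, additively for the rest), is left abstract, and the Joseph-form covariance update is our choice for numerical symmetry, not mandated by the patent.

```python
import numpy as np

def ekf_update(x_nominal, P, H, z_residual, R_meas, inject):
    """One error-state EKF measurement update (step 5).
    H          : assembled measurement matrix over the 22-element error state
    z_residual : stacked position/rotation residuals [z_p~ ; z_q~]
    inject     : callback applying the error-state corrections to the
                 nominal state (quaternions multiplicatively, rest additively)
    """
    S = H @ P @ H.T + R_meas                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    dx = K @ z_residual                      # estimated error state
    x_nominal = inject(x_nominal, dx)        # apply corrections
    IKH = np.eye(P.shape[0]) - K @ H
    P = IKH @ P @ IKH.T + K @ R_meas @ K.T   # Joseph form, keeps P symmetric
    return x_nominal, P
```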
Fig. 5 shows a schematic diagram of the monocular vision and IMU fusion. Through the above Kalman-filter-based fusion of monocular vision and IMU data, a stable attitude output of the mobile terminal is obtained, realizing stable motion tracking; the overall flow block diagram of the technical solution of the present invention is shown in Fig. 6.
The above embodiment is only one example of the Kalman fusion of the visual pose and the IMU pose of the present invention; besides this embodiment, other methods known to those skilled in the art can also be used and can likewise achieve the technical effect of the present invention.
In another aspect, a stable motion tracking device of the present invention based on fusion of monocular vision and an IMU on a mobile terminal, as shown in Fig. 7, comprises:
An acquisition module 11, for acquiring an image;
A visual tracking module 12, for judging whether the number of tracked feature points in each frame of the image is greater than a preset threshold; if so, performing feature point tracking with the optical flow method to obtain the current pose of the camera; if not, obtaining feature points from the image with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera;
A filtering module 13, for performing Kalman filtering on the current pose of the camera to obtain a visual pose;
An IMU pose computing module 14, for obtaining the acceleration and angular velocity values produced by the IMU in three-dimensional space and integrating them to obtain an IMU pose prediction;
A fusion module 15, for fusing the visual pose and the IMU pose prediction with a Kalman filter and performing motion tracking according to the fused pose information.
Corresponding to the above method, and compared with the prior art, the present invention likewise achieves more stable and faster motion tracking on mobile terminal devices.
As an improvement of the present invention, the feature point tracking performed by the visual tracking module 12 with the optical flow method to obtain the current pose of the camera comprises:
Computing, from the set of corresponding matched feature points in two adjacent frames of the image, the fundamental matrix between the two frames;
Computing the essential matrix from the fundamental matrix and the camera intrinsics;
Recovering the relative pose between adjacent frames from the essential matrix by SVD;
Multiplying the relative pose by the absolute pose of the camera at the previous frame to obtain the current pose of the camera.
In this way, as the camera moves, the present invention obtains in turn the relative pose corresponding to each frame, and thereby the absolute pose.
As another improvement of the present invention, the feature point tracking performed by the visual tracking module 12 with the optical flow method to obtain the current pose of the camera comprises:
Taking a patch of a certain size around each successfully tracked feature point, applying the SSD image correlation measure, and removing the feature points that do not meet the threshold.
In the present invention, removing the feature points that do not meet the threshold improves the accuracy of optical flow tracking.
As a further improvement of the present invention, the visual tracking module 12 obtaining feature points from the image with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera comprises:
Obtaining feature points in the current frame with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, matching them against the initial frame, and directly computing the transformation matrix from the initial frame to the current frame;
Multiplying the transformation matrix by the absolute pose of the camera at the initial frame to obtain the current pose of the camera.
In the present invention, this guarantees the accuracy and smoothness of the real-time camera pose obtained by the above tracking, facilitating the application of motion tracking in the AR/VR field.
As a further improvement of the present invention, the filtering module 13 further comprises:
A Kalman filtering module, for performing, for each frame of the image, Kalman filtering on the accumulated optical-flow result and the direct feature-matching result to obtain a more accurate current camera pose, and iterating continuously;
A first computing module, for computing the current covariance estimate $P_k^-$ with the time-update equations of the discrete Kalman filter:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}$$
$$P_k^- = AP_{k-1}A^T + Q$$

where $\hat{x}_{k-1}$ is the camera pose computed by the optical flow method, $A$ is the state transition matrix, $B$ is the control gain, $P_{k-1}$ is the covariance estimate of the previous frame, and $Q$ is the process noise covariance matrix;
A second computing module, for computing the Kalman gain $K_k$ from the observation equations:

$$z_k = Hx_k + v_k$$
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$

where $z_k$ is the camera pose computed by the FAST feature point matching algorithm, $H$ is the observation matrix, $v_k$ denotes the observation noise, and $R$ is the covariance matrix of the observation noise;
An update system module, for updating the system according to the state-update equations of the discrete Kalman filter:

$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H) P_k^-$$
In the present invention, Kalman filtering of the monocular camera pose improves camera tracking stability and also provides a more accurate and smoother pose measurement $[R_{(0,k)}\,|\,t_{(0,k)}]''$ for the subsequent fusion of the IMU and the camera.
The above are preferred embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A stable motion tracking method based on fusion of monocular vision and an IMU on a mobile terminal, characterized by comprising:
Acquiring an image;
Judging whether the number of tracked feature points in the current frame of the image is greater than a preset threshold; if so, performing feature point tracking with an optical flow method to obtain the current pose of the camera; if not, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera;
Performing Kalman filtering on the current pose of the camera to obtain a visual pose;
Obtaining the acceleration and angular velocity values produced by the IMU in three-dimensional space, and integrating them to obtain an IMU pose prediction;
Fusing the visual pose and the IMU pose prediction with a Kalman filter, and performing motion tracking according to the fused pose information.
2. The stable motion tracking method based on fusion of monocular vision and an IMU on a mobile terminal according to claim 1, characterized in that performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Computing, from the set of corresponding matched feature points in two adjacent frames of the image, the fundamental matrix between the two frames;
Computing the essential matrix from the fundamental matrix and the camera intrinsics;
Recovering the relative pose between adjacent frames from the essential matrix by SVD;
Multiplying the relative pose by the absolute pose of the camera at the previous frame to obtain the current pose of the camera.
3. The stable motion tracking method based on fusion of monocular vision and an IMU on a mobile terminal according to claim 1, characterized in that performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Taking a patch of a certain size around each successfully tracked feature point, applying the SSD image correlation measure, and removing the feature points that do not meet the threshold.
4. The stable motion tracking method based on fusion of monocular vision and an IMU on a mobile terminal according to any one of claims 1-3, characterized in that obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera comprises:
Obtaining feature points in the current frame with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, matching them against the initial frame, and directly computing the transformation matrix from the initial frame to the current frame;
Multiplying the transformation matrix by the absolute pose of the camera at the initial frame to obtain the current pose of the camera.
5. The stable motion tracking method based on fusion of monocular vision and an IMU on a mobile terminal according to claim 4, characterized in that performing Kalman filtering on the current pose of the camera to obtain the visual pose comprises:
Step 1: for each frame of the image, performing Kalman filtering on the accumulated optical-flow result and the direct feature-matching result to obtain a more accurate current camera pose, and iterating continuously;
Step 2: computing the current covariance estimate $P_k^-$ with the time-update equations of the discrete Kalman filter:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}$$
$$P_k^- = AP_{k-1}A^T + Q$$

where $\hat{x}_{k-1}$ is the camera pose computed by the optical flow method, $A$ is the state transition matrix, $B$ is the control gain, $P_{k-1}$ is the covariance estimate of the previous frame, and $Q$ is the process noise covariance matrix;
Step 3: computing the Kalman gain $K_k$ from the observation equations:

$$z_k = Hx_k + v_k$$
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$

where $z_k$ is the camera pose computed by the FAST feature point matching algorithm, $H$ is the observation matrix, $v_k$ denotes the observation noise, and $R$ is the covariance matrix of the observation noise;
Step 4: updating the system according to the state-update equations of the discrete Kalman filter:

$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H) P_k^-$$
6. A stable motion tracking device based on fusion of monocular vision and an IMU on a mobile terminal, characterized by comprising:
An acquisition module, for acquiring an image;
A visual tracking module, for judging whether the number of tracked feature points in the current frame of the image is greater than a preset threshold; if so, performing feature point tracking with an optical flow method to obtain the current pose of the camera; if not, obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera;
A filtering module, for performing Kalman filtering on the current pose of the camera to obtain a visual pose;
An IMU pose computing module, for obtaining the acceleration and angular velocity values produced by the IMU in three-dimensional space and integrating them to obtain an IMU pose prediction;
A fusion module, for fusing the visual pose and the IMU pose prediction with a Kalman filter and performing motion tracking according to the fused pose information.
7. The stable motion tracking device based on fusion of monocular vision and an IMU on a mobile terminal according to claim 6, characterized in that performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Computing, from the set of corresponding matched feature points in two adjacent frames of the image, the fundamental matrix between the two frames;
Computing the essential matrix from the fundamental matrix and the camera intrinsics;
Recovering the relative pose between adjacent frames from the essential matrix by SVD;
Multiplying the relative pose by the absolute pose of the camera at the previous frame to obtain the current pose of the camera.
8. The stable motion tracking device based on fusion of monocular vision and an IMU on a mobile terminal according to claim 6, characterized in that performing feature point tracking with the optical flow method to obtain the current pose of the camera comprises:
Taking a patch of a certain size around each successfully tracked feature point, applying the SSD image correlation measure, and removing the feature points that do not meet the threshold.
9. The stable motion tracking device based on fusion of monocular vision and an IMU on a mobile terminal according to any one of claims 6-8, characterized in that obtaining feature points with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, and performing feature matching on the image to obtain the current pose of the camera comprises:
Obtaining feature points in the current frame with the FAST feature detection operator, computing descriptors with the BRIEF algorithm, matching them against the initial frame, and directly computing the transformation matrix from the initial frame to the current frame;
Multiplying the transformation matrix by the absolute pose of the camera at the initial frame to obtain the current pose of the camera.
10. The stable motion tracking device based on fusion of monocular vision and an IMU on a mobile terminal according to claim 9, characterized in that the filtering module further comprises:
A Kalman filtering module, for performing, for each frame of the image, Kalman filtering on the accumulated optical-flow result and the direct feature-matching result to obtain a more accurate current camera pose, and iterating continuously;
A first computing module, for computing the current covariance estimate $P_k^-$ with the time-update equations of the discrete Kalman filter:

$$\hat{x}_k^- = A\hat{x}_{k-1} + Bu_{k-1}$$
$$P_k^- = AP_{k-1}A^T + Q$$

where $\hat{x}_{k-1}$ is the camera pose computed by the optical flow method, $A$ is the state transition matrix, $B$ is the control gain, $P_{k-1}$ is the covariance estimate of the previous frame, and $Q$ is the process noise covariance matrix;
A second computing module, for computing the Kalman gain $K_k$ from the observation equations:

$$z_k = Hx_k + v_k$$
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$

where $z_k$ is the camera pose computed by the FAST feature point matching algorithm, $H$ is the observation matrix, $v_k$ denotes the observation noise, and $R$ is the covariance matrix of the observation noise;
An update system module, for updating the system according to the state-update equations of the discrete Kalman filter:

$$\hat{x}_k = \hat{x}_k^- + K_k(z_k - H\hat{x}_k^-)$$
$$P_k = (I - K_k H) P_k^-$$
CN201610346191.6A 2016-05-23 2016-05-23 Monocular and IMU fused stable motion tracking method and device based on mobile terminal Pending CN105931275A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610346191.6A CN105931275A (en) 2016-05-23 2016-05-23 Monocular and IMU fused stable motion tracking method and device based on mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610346191.6A CN105931275A (en) 2016-05-23 2016-05-23 Monocular and IMU fused stable motion tracking method and device based on mobile terminal

Publications (1)

Publication Number Publication Date
CN105931275A true CN105931275A (en) 2016-09-07

Family

ID=56841104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610346191.6A Pending CN105931275A (en) 2016-05-23 2016-05-23 Monocular and IMU fused stable motion tracking method and device based on mobile terminal

Country Status (1)

Country Link
CN (1) CN105931275A (en)

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106546238A (en) * 2016-10-26 2017-03-29 北京小鸟看看科技有限公司 Wearable device and the method that user's displacement is determined in wearable device
CN106705965A (en) * 2017-01-12 2017-05-24 苏州中德睿博智能科技有限公司 Scene three-dimensional data registration method and navigation system error correction method
CN106780608A (en) * 2016-11-23 2017-05-31 北京地平线机器人技术研发有限公司 Posture information method of estimation, device and movable equipment
CN106842625A (en) * 2017-03-03 2017-06-13 西南交通大学 A kind of Consensus target tracking glasses of feature based and method
CN106898022A (en) * 2017-01-17 2017-06-27 徐渊 A kind of hand-held quick three-dimensional scanning system and method
CN107194968A (en) * 2017-05-18 2017-09-22 腾讯科技(上海)有限公司 Recognition and tracking method, device, intelligent terminal and the readable storage medium storing program for executing of image
CN107255476A (en) * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 A kind of indoor orientation method and device based on inertial data and visual signature
CN107341831A (en) * 2017-07-06 2017-11-10 青岛海通胜行智能科技有限公司 A kind of the visual signature robust tracking method and device of IMU auxiliary
CN107491099A (en) * 2017-08-30 2017-12-19 浙江华飞智能科技有限公司 A kind of cloud platform control method and device of view-based access control model and gyroscope
CN107516327A (en) * 2017-08-21 2017-12-26 腾讯科技(上海)有限公司 Method and device, the equipment of camera attitude matrix are determined based on multi-layer filtering
CN107862704A (en) * 2017-11-06 2018-03-30 广东工业大学 A kind of method for tracking target, system and its head camera used
CN108090921A (en) * 2016-11-23 2018-05-29 中国科学院沈阳自动化研究所 Monocular vision and the adaptive indoor orientation method of IMU fusions
CN108259709A (en) * 2018-01-19 2018-07-06 长沙全度影像科技有限公司 A kind of video image anti-fluttering method and system for the shooting of bullet time
CN108364319A (en) * 2018-02-12 2018-08-03 腾讯科技(深圳)有限公司 Scale determines method, apparatus, storage medium and equipment
CN108648215A (en) * 2018-06-22 2018-10-12 南京邮电大学 SLAM motion blur posture tracking algorithms based on IMU
Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325108A (en) * 2013-05-27 2013-09-25 浙江大学 Design method for a monocular visual odometer integrating the optical flow method and feature point matching

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Meng Lin: "Research on relative navigation motion estimation based on computer vision", China Master's Theses Full-text Database, Engineering Science and Technology II *
Bing Zhigang et al.: "Research on a tracking robot based on vision and IMU", Journal of Tianjin University of Technology and Education *
Zheng Chi et al.: "Monocular visual odometry fusing optical flow and feature point matching", Journal of Zhejiang University (Engineering Science) *

Cited By (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018077176A1 (en) * 2016-10-26 2018-05-03 北京小鸟看看科技有限公司 Wearable device and method for determining user displacement in wearable device
CN106546238A (en) * 2016-10-26 2017-03-29 北京小鸟看看科技有限公司 Wearable device and method for determining user displacement in a wearable device
CN106780608A (en) * 2016-11-23 2017-05-31 北京地平线机器人技术研发有限公司 Pose information estimation method and device, and movable device
CN108090921A (en) * 2016-11-23 2018-05-29 中国科学院沈阳自动化研究所 Adaptive indoor positioning method fusing monocular vision and IMU
CN106705965A (en) * 2017-01-12 2017-05-24 苏州中德睿博智能科技有限公司 Scene three-dimensional data registration method and navigation system error correction method
CN106898022A (en) * 2017-01-17 2017-06-27 徐渊 Hand-held rapid three-dimensional scanning system and method
CN106842625A (en) * 2017-03-03 2017-06-13 西南交通大学 Feature-based consensus target tracking glasses and method
CN106842625B (en) * 2017-03-03 2020-03-17 西南交通大学 Target tracking method based on feature consensus
CN108694348A (en) * 2017-04-07 2018-10-23 中山大学 Tracking registration method and device based on physical features
CN108803861A (en) * 2017-04-28 2018-11-13 广东虚拟现实科技有限公司 Interaction method, device and system
CN107194968B (en) * 2017-05-18 2024-01-16 腾讯科技(上海)有限公司 Image recognition and tracking method and device, intelligent terminal, and readable storage medium
CN107194968A (en) * 2017-05-18 2017-09-22 腾讯科技(上海)有限公司 Image recognition and tracking method and device, intelligent terminal, and readable storage medium
CN107316319B (en) * 2017-05-27 2020-07-10 北京小鸟看看科技有限公司 Rigid body tracking method, device and system
CN107255476A (en) * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 Indoor positioning method and device based on inertial data and visual features
CN107341831B (en) * 2017-07-06 2020-10-27 青岛海通胜行智能科技有限公司 IMU (inertial measurement unit)-assisted robust visual feature tracking method and device
CN107341831A (en) * 2017-07-06 2017-11-10 青岛海通胜行智能科技有限公司 IMU (inertial measurement unit)-assisted robust visual feature tracking method and device
CN107516327A (en) * 2017-08-21 2017-12-26 腾讯科技(上海)有限公司 Method, device and equipment for determining a camera attitude matrix based on multilayer filtering
CN107516327B (en) * 2017-08-21 2023-05-16 腾讯科技(上海)有限公司 Method, device and equipment for determining camera attitude matrix based on multilayer filtering
CN107491099A (en) * 2017-08-30 2017-12-19 浙江华飞智能科技有限公司 Gimbal control method and device based on vision and gyroscope
CN109559330A (en) * 2017-09-25 2019-04-02 北京金山云网络技术有限公司 Visual tracking method and device for a moving target, electronic device, and storage medium
CN109074664A (en) * 2017-10-26 2018-12-21 深圳市大疆创新科技有限公司 Attitude calibration method, device and unmanned aerial vehicle
CN110520694A (en) * 2017-10-31 2019-11-29 深圳市大疆创新科技有限公司 Visual odometer and implementation method thereof
CN109753841A (en) * 2017-11-01 2019-05-14 比亚迪股份有限公司 Lane detection method and apparatus
CN109753841B (en) * 2017-11-01 2023-12-12 比亚迪股份有限公司 Lane line identification method and device
CN107862704B (en) * 2017-11-06 2021-05-11 广东工业大学 Target tracking method and system, and gimbal camera used thereby
CN107862704A (en) * 2017-11-06 2018-03-30 广东工业大学 Target tracking method and system, and gimbal camera used thereby
CN109196556A (en) * 2017-12-29 2019-01-11 深圳市大疆创新科技有限公司 Obstacle avoidance method, device and movable platform
CN108259709A (en) * 2018-01-19 2018-07-06 长沙全度影像科技有限公司 Video image stabilization method and system for bullet-time shooting
CN110622213B (en) * 2018-02-09 2022-11-15 百度时代网络技术(北京)有限公司 System and method for depth localization and segmentation using 3D semantic maps
CN110622213A (en) * 2018-02-09 2019-12-27 百度时代网络技术(北京)有限公司 System and method for depth localization and segmentation using 3D semantic maps
CN108364319A (en) * 2018-02-12 2018-08-03 腾讯科技(深圳)有限公司 Scale determination method and device, storage medium and equipment
CN108364319B (en) * 2018-02-12 2022-02-01 腾讯科技(深圳)有限公司 Scale determination method and device, storage medium and equipment
CN108981693A (en) * 2018-03-22 2018-12-11 东南大学 Fast joint initialization method for VIO based on a monocular camera
CN110555882B (en) * 2018-04-27 2022-11-15 腾讯科技(深圳)有限公司 Interface display method, device and storage medium
WO2019205850A1 (en) * 2018-04-27 2019-10-31 腾讯科技(深圳)有限公司 Pose determination method and device, intelligent apparatus, and storage medium
CN110555882A (en) * 2018-04-27 2019-12-10 腾讯科技(深圳)有限公司 Interface display method, device and storage medium
US11158083B2 (en) 2018-04-27 2021-10-26 Tencent Technology (Shenzhen) Company Limited Position and attitude determining method and apparatus, smart device, and storage medium
CN108805987A (en) * 2018-05-21 2018-11-13 中国科学院自动化研究所 Combined tracking method and device based on deep learning
CN108648215A (en) * 2018-06-22 2018-10-12 南京邮电大学 SLAM motion blur pose tracking algorithm based on IMU
CN108648215B (en) * 2018-06-22 2022-04-15 南京邮电大学 SLAM motion blur pose tracking algorithm based on IMU
CN108921898A (en) * 2018-06-28 2018-11-30 北京旷视科技有限公司 Camera pose determination method and apparatus, electronic device, and computer-readable medium
CN109035303A (en) * 2018-08-03 2018-12-18 百度在线网络技术(北京)有限公司 Camera tracking method and device for a SLAM system, and computer-readable storage medium
CN109175832A (en) * 2018-09-20 2019-01-11 上海理工大学 Monocular-based 3D welding positioning system and control method thereof
CN109175832B (en) * 2018-09-20 2020-11-10 上海理工大学 Control method of 3D welding positioning system based on monocular measurement
CN109376785A (en) * 2018-10-31 2019-02-22 东南大学 Navigation method fusing inertial and monocular vision measurements based on iterative extended Kalman filtering
CN109376785B (en) * 2018-10-31 2021-09-24 东南大学 Navigation method fusing inertial and monocular vision measurements based on iterative extended Kalman filtering
CN109584299A (en) * 2018-11-13 2019-04-05 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, terminal and storage medium
CN109584299B (en) * 2018-11-13 2021-01-05 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, terminal and storage medium
CN111354042A (en) * 2018-12-24 2020-06-30 深圳市优必选科技有限公司 Method and device for extracting features of robot visual image, robot and medium
CN111354042B (en) * 2018-12-24 2023-12-01 深圳市优必选科技有限公司 Feature extraction method and device of robot visual image, robot and medium
WO2020155616A1 (en) * 2019-01-29 2020-08-06 浙江省北大信息技术高等研究院 Digital retina-based photographing device positioning method
CN110009681A (en) * 2019-03-25 2019-07-12 中国计量大学 IMU-assisted monocular visual odometry pose processing method
CN109993802A (en) * 2019-04-03 2019-07-09 浙江工业大学 Hybrid camera calibration method in urban environments
CN109993802B (en) * 2019-04-03 2020-12-25 浙江工业大学 Hybrid camera calibration method in urban environment
WO2020221307A1 (en) * 2019-04-29 2020-11-05 华为技术有限公司 Method and device for tracking moving object
CN113632135A (en) * 2019-04-30 2021-11-09 三星电子株式会社 System and method for low latency, high performance pose fusion
CN110147164A (en) * 2019-05-22 2019-08-20 京东方科技集团股份有限公司 Head movement tracking method, device, system and storage medium
WO2020238790A1 (en) * 2019-05-27 2020-12-03 浙江商汤科技开发有限公司 Camera positioning
CN110796010A (en) * 2019-09-29 2020-02-14 湖北工业大学 Video image stabilization method combining optical flow method and Kalman filtering
CN110648354A (en) * 2019-09-29 2020-01-03 电子科技大学 SLAM method in dynamic environments
CN110648354B (en) * 2019-09-29 2022-02-01 电子科技大学 SLAM method in dynamic environments
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular visual-inertial combined positioning and navigation method
CN110842918A (en) * 2019-10-24 2020-02-28 华中科技大学 Autonomous localization method for robotic mobile machining based on point cloud servoing
CN112734797A (en) * 2019-10-29 2021-04-30 浙江商汤科技开发有限公司 Image feature tracking method and device and electronic equipment
CN110910423A (en) * 2019-11-15 2020-03-24 小狗电器互联网科技(北京)股份有限公司 Target tracking method and storage medium
CN110910423B (en) * 2019-11-15 2022-08-23 小狗电器互联网科技(北京)股份有限公司 Target tracking method and storage medium
CN113673283A (en) * 2020-05-14 2021-11-19 惟亚(上海)数字科技有限公司 Smooth tracking method based on augmented reality
CN111595362A (en) * 2020-06-05 2020-08-28 联想(北京)有限公司 Parameter calibration method and device for inertial measurement unit and electronic equipment
CN111862150A (en) * 2020-06-19 2020-10-30 杭州易现先进科技有限公司 Image tracking method and device, AR device and computer device
CN111811421A (en) * 2020-07-17 2020-10-23 中国人民解放军国防科技大学 High-speed real-time deformation monitoring method and system
CN112037261A (en) * 2020-09-03 2020-12-04 北京华捷艾米科技有限公司 Method and device for removing dynamic features of image
CN112017229A (en) * 2020-09-06 2020-12-01 桂林电子科技大学 Method for solving relative attitude of camera
CN112017229B (en) * 2020-09-06 2023-06-27 桂林电子科技大学 Camera relative pose solving method
CN112396634A (en) * 2020-11-27 2021-02-23 苏州欧菲光科技有限公司 Moving object detection method, moving object detection device, vehicle and storage medium
CN112837374B (en) * 2021-03-09 2023-11-03 中国矿业大学 Space positioning method and system
CN112837374A (en) * 2021-03-09 2021-05-25 中国矿业大学 Space positioning method and system
CN113672608B (en) * 2021-08-25 2023-07-25 东北大学 Internet of things perception data reduction system and method based on self-adaptive reduction threshold
CN113672608A (en) * 2021-08-25 2021-11-19 东北大学 Internet of things perception data reduction system and method based on self-adaptive reduction threshold
CN113920194B (en) * 2021-10-08 2023-04-21 电子科技大学 Quadrotor positioning method based on visual-inertial fusion
CN113920194A (en) * 2021-10-08 2022-01-11 电子科技大学 Quadrotor positioning method based on visual-inertial fusion
CN114167979A (en) * 2021-11-18 2022-03-11 上海鱼微阿科技有限公司 Handle tracking algorithm for an augmented reality all-in-one device
CN114167979B (en) * 2021-11-18 2024-04-26 玩出梦想(上海)科技有限公司 Handle tracking algorithm for an augmented reality all-in-one device
CN115115707A (en) * 2022-06-30 2022-09-27 小米汽车科技有限公司 Vehicle water-entry detection method, vehicle, computer-readable storage medium and chip
CN115115707B (en) * 2022-06-30 2023-10-10 小米汽车科技有限公司 Vehicle water-entry detection method, vehicle, computer-readable storage medium and chip

Similar Documents

Publication Publication Date Title
CN105931275A (en) Monocular and IMU fused stable motion tracking method and device based on mobile terminal
US11519729B2 (en) Vision-aided inertial navigation
CN105953796A (en) Stable motion tracking method and device based on fusion of a smartphone camera and IMU (inertial measurement unit)
Lupton et al. Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions
CN109885080B (en) Autonomous control system and autonomous control method
CN110402368B (en) Integrated vision-based inertial sensor system for use in vehicle navigation
EP2209091B1 (en) System and method for object motion detection based on multiple 3D warping and vehicle equipped with such system
CN109141433A (en) Robot indoor positioning system and positioning method
Zhang et al. IMU data processing for inertial aided navigation: A recurrent neural network based approach
CN110865650B (en) Unmanned aerial vehicle pose self-adaptive estimation method based on active vision
CN109443348A (en) Underground garage parking space tracking method based on surround-view vision and inertial navigation fusion
Kang et al. Vins-vehicle: A tightly-coupled vehicle dynamics extension to visual-inertial state estimator
CN112907678B (en) Vehicle-mounted camera external parameter attitude dynamic estimation method and device and computer equipment
CN114001733B (en) Map-based consistent efficient visual inertial positioning algorithm
CN111649739A (en) Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN109238277B (en) Positioning method and device for visual inertial data depth fusion
Zhang et al. Vision-aided localization for ground robots
CN113503873A (en) Multi-sensor fusion visual positioning method
Spaenlehauer et al. A loosely-coupled approach for metric scale estimation in monocular vision-inertial systems
CN109387198A (en) Inertial/visual odometry combined navigation method based on sequential detection
EP3227634A1 (en) Method and system for estimating relative angle between headings
Irmisch et al. Simulation framework for a visual-inertial navigation system
Park et al. A novel line of sight control system for a robot vision tracking system, using vision feedback and motion-disturbance feedforward compensation
CN115540854A (en) Active positioning method, equipment and medium based on UWB assistance
Kundra et al. Non-deceiving features in fused optical flow gyroscopes

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20160907)