CN105953796A - Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone - Google Patents
- Publication number
- CN105953796A (application no. CN201610346493.3A)
- Authority
- CN
- China
- Prior art keywords
- frame
- pose
- imu
- point
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
Abstract
The invention discloses a stable motion tracking method and device based on the fusion of a smartphone's monocular camera and IMU (inertial measurement unit), belonging to the technical field of AR (augmented reality)/VR (virtual reality) motion tracking. The method includes: processing an acquired image with the ORB (Oriented FAST and Rotated BRIEF) algorithm and performing 3D reconstruction to obtain initial map points, thereby completing map initialization; performing visual tracking through real-time ORB matching and parallel local keyframe mapping to obtain a visual pose; acquiring the acceleration and angular velocity produced by the IMU in three-dimensional space and integrating them to obtain an IMU pose prediction; and performing Kalman fusion on the visual pose and the IMU pose prediction, then carrying out motion tracking according to the fused pose information. Compared with the prior art, the method and device can achieve a more stable motion tracking mode and real-time online scale estimation.
Description
Technical field
The present invention relates to the field of mobile communication, and in particular to a stable motion tracking method and device based on the fusion of a smartphone monocular camera and an IMU.
Background technology
With the development of VR technology, advanced motion tracking has become one of the prerequisites for its application: on this technical foundation, better interaction and a stronger sense of immersion can be achieved. Current mobile VR mainly uses a handheld controller for interaction, and during interaction simply uses the phone's gyroscope for rotation tracking. Owing to the bias and noise of the phone's gyroscope itself, the rotation estimate is inaccurate and its repeatability is poor. When a seated user stands up and moves forward without controller interaction, the virtual scene remains stationary as if nothing had happened, and the interactive experience is poor; likewise, when a user immersed in the virtual environment while seated subconsciously stands up and tries to move, the virtual scene does not change at all, and the sense of immersion is lost.
Motion tracking technology aims to measure, track, and record the trajectory of an object in three-dimensional space. It mainly acquires information about the moving scene through sensors and computes the attitude of the tracked object in space in real time; it is widely used in fields such as robot navigation, UAV navigation, and autonomous vehicle navigation. Since Nister first proposed the concept of visual odometry (Visual Odometry, VO) in 2004, methods based on visual odometry have become the mainstream of real-time attitude estimation and motion tracking. By estimating the incremental motion of the camera in space, VO determines the camera's trajectory over time and space. Visual-inertial odometry (Visual IMU Odometry, VIO) fuses the information of the camera and of the inertial sensors, mainly the gyroscope and the accelerometer, providing a scheme with complementary advantages. For example, a single camera can estimate relative position but cannot provide absolute scale: it cannot obtain the size of an object or the actual distance between two objects. Moreover, the camera's sampling frame rate is relatively low and the image sensor's noise is relatively large, so its adaptability to the environment during motion tracking is poor. An inertial sensor, by contrast, can provide absolute scale and measures at a higher sampling frequency, thereby improving robustness when the device moves quickly. However, the low-cost inertial sensors carried by phones drift much more than camera-based position estimation and cannot, on their own, achieve stable motion tracking.
Summary of the invention
The technical problem to be solved by the present invention is to provide a stable motion tracking method and device, based on the fusion of a smartphone monocular camera and IMU, that achieve a more stable motion tracking mode and real-time online estimation of scale.
To solve the above technical problem, the present invention provides the following technical solution:
A stable motion tracking method based on the fusion of a smartphone monocular camera and IMU, including:
processing the acquired image with the ORB algorithm, then performing 3D reconstruction to obtain initial map points and complete map initialization;
performing visual tracking by real-time ORB matching together with parallel local keyframe mapping, to obtain the visual pose;
acquiring the acceleration and angular velocity produced by the IMU in three-dimensional space, and integrating them to obtain the IMU pose prediction;
performing Kalman fusion on the visual pose and the IMU pose prediction, and carrying out motion tracking according to the fused pose information.
Further, processing the acquired image with the ORB algorithm, then performing 3D reconstruction to obtain initial map points and complete map initialization, includes:
extracting feature points and computing descriptors from the first acquired image frame with the ORB algorithm, marking the first frame as a keyframe, and recording the absolute pose of the camera;
after the camera has translated some distance, again extracting feature points and computing descriptors from the acquired image with the ORB algorithm, matching them against the feature points of the first frame, marking the second frame as a keyframe, and computing the relative pose of the camera at the second frame with respect to the first frame;
performing 3D reconstruction on the successfully matched feature-point set to obtain the initial map points.
Further, computing the relative pose of the camera at the second frame with respect to the first frame includes:
computing the fundamental matrix between the two image frames from the set of corresponding matched feature points on the first and second frames;
computing the essential matrix from the fundamental matrix and the camera intrinsics;
applying singular value decomposition to the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
Further, performing visual tracking by real-time ORB matching together with parallel local keyframe mapping, to obtain the visual pose, includes:
extracting feature points from the current image frame over a grid with the ORB algorithm and computing their descriptors;
using a constant-velocity motion model to estimate the camera pose corresponding to the current frame, projecting all map points of the previous image frame into the current image frame, performing feature-point matching, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the current-frame pose and current-frame map points with the LM algorithm and the Huber estimator;
according to the updated pose, projecting all map points of the local keyframes into the current image frame and performing feature-point matching; after matching succeeds, assigning all successfully matched map points to the corresponding feature points of the current frame, and again updating the current-frame pose and current-frame map points with the LM algorithm and the Huber estimator.
Further, performing visual tracking by real-time ORB matching together with parallel local keyframe mapping, to obtain the visual pose, also includes:
judging whether a new keyframe is needed according to the elapsed time and/or the number of current-frame map points: if a certain time has passed since the last keyframe was added, or the number of map points in the current frame is below a threshold, adding a new keyframe;
if the current frame is a new keyframe, adding new map points: matching all feature points of the new keyframe that lack map points against all feature points in the local keyframes, and after matching succeeds, obtaining new map points by 3D reconstruction;
performing local bundle adjustment to correct accumulated error, obtaining the optimized pose and map points.
A stable motion tracking device based on the fusion of a smartphone monocular camera and IMU, including:
a map initialization module, for processing the acquired image with the ORB algorithm, then performing 3D reconstruction to obtain initial map points and complete map initialization;
a visual tracking module, for performing visual tracking by real-time ORB matching together with parallel local keyframe mapping, to obtain the visual pose;
an IMU pose computation module, for acquiring the acceleration and angular velocity produced by the IMU in three-dimensional space, and integrating them to obtain the IMU pose prediction;
a fusion module, for performing Kalman fusion on the visual pose and the IMU pose prediction, and carrying out motion tracking according to the fused pose information.
Further, the map initialization module is also used for:
extracting feature points and computing descriptors from the first acquired image frame with the ORB algorithm, marking the first frame as a keyframe, and recording the absolute pose of the camera;
after the camera has translated some distance, again extracting feature points and computing descriptors from the acquired image with the ORB algorithm, matching them against the feature points of the first frame, marking the second frame as a keyframe, and computing the relative pose of the camera at the second frame with respect to the first frame;
performing 3D reconstruction on the successfully matched feature-point set to obtain the initial map points.
Further, computing the relative pose of the camera at the second frame with respect to the first frame includes:
computing the fundamental matrix between the two image frames from the set of corresponding matched feature points on the first and second frames;
computing the essential matrix from the fundamental matrix and the camera intrinsics;
applying singular value decomposition to the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
Further, the visual tracking module is also used for:
extracting feature points from the current image frame over a grid with the ORB algorithm and computing their descriptors;
using a constant-velocity motion model to estimate the camera pose corresponding to the current frame, projecting all map points of the previous image frame into the current image frame, performing feature-point matching, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the current-frame pose and current-frame map points with the LM algorithm and the Huber estimator;
according to the updated pose, projecting all map points of the local keyframes into the current image frame and performing feature-point matching; after matching succeeds, assigning all successfully matched map points to the corresponding feature points of the current frame, and again updating the current-frame pose and current-frame map points with the LM algorithm and the Huber estimator.
Further, the visual tracking module is also used for:
judging whether a new keyframe is needed according to the elapsed time and/or the number of current-frame map points: if a certain time has passed since the last keyframe was added, or the number of map points in the current frame is below a threshold, adding a new keyframe;
if the current frame is a new keyframe, adding new map points: matching all feature points of the new keyframe that lack map points against all feature points in the local keyframes, and after matching succeeds, obtaining new map points by 3D reconstruction;
performing local bundle adjustment to correct accumulated error, obtaining the optimized pose and map points.
The present invention has the following beneficial effects:
In the present invention, the map is first initialized; after successful initialization, images are continuously acquired and tracked, and pose estimation is performed. Meanwhile, IMU data are acquired and integrated to predict the pose, and data fusion is performed under the extended Kalman filter (Extended Kalman Filter, EKF) framework to obtain a stable pose estimate. Addressing the motion tracking problem of current mobile VR, the present invention uses the camera and IMU carried by the mobile device and, by combining visual measurements and inertial measurements with VIO under the EKF framework, can accurately estimate pose and absolute scale, realizing a fast and stable motion tracking method for mobile VR. Compared with the prior art, the present invention can obtain a more stable motion tracking mode and realize real-time online estimation of scale.
Brief description of the drawings
Fig. 1 is a flow diagram of the stable motion tracking method based on smartphone monocular camera and IMU fusion of the present invention;
Fig. 2 is a flow diagram of the visual pose estimation in the stable motion tracking method of the present invention;
Fig. 3 is a schematic diagram of the Kalman fusion of the visual pose and the IMU pose in the stable motion tracking method of the present invention;
Fig. 4 is a schematic diagram of the coordinate systems used in the stable motion tracking method of the present invention;
Fig. 5 is a schematic diagram of the monocular vision and IMU system in the stable motion tracking method of the present invention;
Fig. 6 is an overall flow block diagram of the technical solution of the stable motion tracking method of the present invention;
Fig. 7 is a structural schematic diagram of the stable motion tracking device based on smartphone monocular camera and IMU fusion of the present invention.
Detailed description of the invention
To make the technical problem to be solved, the technical solution, and the advantages of the present invention clearer, they are described in detail below with reference to the drawings and specific embodiments.
In one aspect, the present invention provides a stable motion tracking method based on the fusion of a smartphone monocular camera and IMU, as shown in Fig. 1, including:
Step S101: processing the acquired image with the ORB algorithm, then performing 3D reconstruction to obtain initial map points and complete map initialization;
In this step, the purpose of map initialization is to build the initial three-dimensional point cloud. Since depth information cannot be obtained from a single frame alone, two or more frames must be chosen from the image sequence to estimate the camera attitude and reconstruct the initial three-dimensional point cloud. This step uses two keyframes: one is the initial keyframe (initial frame), the other is the keyframe after the camera has moved through a certain angle (end frame). Key points are matched between the initial frame and the end frame, 3D reconstruction is then performed on the successfully matched feature-point set, and map initialization is finally completed.
Step S102: performing visual tracking by real-time ORB matching together with parallel local keyframe mapping, to obtain the visual pose;
In this step, after map initialization succeeds, motion tracking proceeds based on vision. Considering the weaker computing power of mobile devices, visual tracking combines real-time ORB matching and pose estimation with the parallel maintenance and mapping of local keyframes, thereby obtaining the visual pose. ORB matching and pose estimation run in the tracking thread, while local keyframe maintenance and mapping run in the local keyframe thread.
Step S103: acquiring the acceleration and angular velocity produced by the IMU in three-dimensional space, and integrating them to obtain the IMU pose prediction;
In this step, the IMU involved (Inertial Measurement Unit) is a device that measures an object's three-axis attitude angular rate (or angular velocity) and acceleration. In general, an IMU contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration signals of the object along the three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system. In this step, the IMU produces accelerations and angular velocities along three perpendicular axes, which are integrated to predict the pose, while the monocular vision sensor of the mobile device provides 3D position and pose measurements that lack scale. Between consecutive frames, IMU data are acquired to predict the pose, and the visual pose estimate of the later frame is used as the measurement for the update.
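The integration of IMU samples between two image frames can be sketched as a simple dead-reckoning loop. This is a minimal illustration that assumes bias and gravity have already been removed and, for brevity, omits the rotation update.

```python
import numpy as np

def integrate_imu(p, v, acc_samples, dt):
    """Predict position p and velocity v by Euler-integrating accelerometer
    samples taken between two camera frames (bias and gravity assumed
    already compensated; orientation handling omitted for brevity)."""
    p = np.asarray(p, dtype=float).copy()
    v = np.asarray(v, dtype=float).copy()
    for a in acc_samples:
        v = v + np.asarray(a, dtype=float) * dt   # integrate acceleration
        p = p + v * dt                            # integrate velocity
    return p, v
```

In the full filter the same loop would also integrate the gyroscope angular velocity into the attitude quaternion, and the predicted pose would then be corrected by the visual measurement.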
Step S104: performing Kalman fusion on the visual pose and the IMU pose prediction, and carrying out motion tracking according to the fused pose information.
In this step, to obtain a stable tracking pose, the information acquired by both the vision sensor and the IMU is fully exploited. The present invention uses a Kalman fusion method to fuse the visual pose obtained from the image with the pose prediction obtained by integrating the IMU, so as to realize information complementarity and target state estimation between the two dissimilar sensors, thereby obtaining a more accurate and reliable fused pose. Motion tracking is then carried out according to the fused pose information.
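The fusion step can be illustrated in one dimension: the IMU prediction and the visual measurement are blended in inverse proportion to their variances, which is the core of the (extended) Kalman measurement update. The function and values below are an illustrative sketch, not the patent's filter.

```python
def kalman_fuse(x_pred, var_pred, z_meas, var_meas):
    """Scalar Kalman measurement update: fuse a predicted state (from IMU
    integration) with a measurement (the visual pose estimate)."""
    k = var_pred / (var_pred + var_meas)   # Kalman gain
    x_fused = x_pred + k * (z_meas - x_pred)
    var_fused = (1.0 - k) * var_pred       # fused uncertainty shrinks
    return x_fused, var_fused
```

The fused variance is always smaller than either input variance, which is why the combined pose is more stable than vision or IMU alone.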
In the present invention, the map is first initialized; after successful initialization, images are continuously acquired and tracked, and pose estimation is performed. Meanwhile, IMU data are acquired and integrated to predict the pose, and data fusion is performed under the extended Kalman filter (Extended Kalman Filter, EKF) framework to obtain a stable pose estimate. Addressing the motion tracking problem of current mobile VR, the present invention uses the camera and IMU carried by the mobile device and, by combining visual measurements and inertial measurements with VIO under the EKF framework, can accurately estimate pose and absolute scale, realizing a fast and stable motion tracking method for mobile VR. Compared with the prior art, the present invention can obtain a more stable motion tracking mode and realize real-time online estimation of scale.
As an improvement of the present invention, processing the acquired image with the ORB algorithm, then performing 3D reconstruction to obtain initial map points and complete map initialization, includes:
extracting feature points and computing descriptors from the first acquired image frame with the ORB algorithm, marking the first frame as a keyframe, and recording the absolute pose of the camera;
after the camera has translated some distance, again extracting feature points and computing descriptors from the acquired image with the ORB algorithm, matching them against the feature points of the first frame, marking the second frame as a keyframe, and computing the relative pose of the camera at the second frame with respect to the first frame;
performing 3D reconstruction on the successfully matched feature-point set to obtain the initial map points.
For this improvement, the present invention provides a complete specific embodiment as follows:
1. Acquire the first image frame and use the locally invariant feature algorithm (Oriented FAST and Rotated BRIEF, ORB) to extract feature points and compute descriptors. The first frame is a keyframe; the absolute pose of the camera is recorded as [R(0,k) | t(0,k)], where the subscript (0, k) denotes the absolute pose of the k-th frame, so [R(0,0) | t(0,0)] = [I | 0];
2. After translating some distance, acquire another image and use the ORB algorithm to extract feature points and compute descriptors. After matching successfully against the feature points of the first frame, this frame is also marked as a keyframe, and the relative pose of the camera at the second frame with respect to the first frame is computed as [R(0,1) | t(0,1)] = [R | t];
3. Perform 3D reconstruction on the successfully matched feature-point set to obtain the initial map points.
In this embodiment, the ORB algorithm is used to extract features and match them directly to estimate the pose. ORB is an improved algorithm combining FAST corner detection with the BRIEF feature descriptor, balancing efficiency and precision during monocular visual tracking.
As a further improvement on the present invention, relative relative to the first frame of camera under the second frame is calculated
Pose includes:
According to the Corresponding matching feature point set on the first frame and the second two field picture, calculate between two two field pictures
Basis matrix;
According to basis matrix and the intrinsic parameter of camera, it is calculated essential matrix;
Essential matrix is used singular value decomposition, obtains relative relative to the first frame of camera under the second frame
Pose.
For this further improvement, the present invention provides a complete specific embodiment as follows:
1. After translating some distance, use the ORB algorithm to extract feature points and compute descriptors for the second image frame. After matching successfully against the feature points of the first frame, record the set of corresponding matched feature points on the two keyframes as (X_L, X_R);
2. From the epipolar constraint X_L^T F X_R = 0, compute the fundamental matrix F;
3. From the relation between the fundamental matrix F and the essential matrix E, namely E = K_L^T F K_R, where K_L and K_R are the camera intrinsics (which can be calibrated in advance, with K_L = K_R), obtain the essential matrix E. The essential matrix depends only on the camera extrinsics and is independent of the camera intrinsics;
4. From E = [t]× R, where [t]× is the antisymmetric (skew-symmetric) matrix of the translation t = (t_x, t_y, t_z)^T and R is the rotation matrix, apply singular value decomposition (Singular Value Decomposition, SVD) to E to compute R and t. The relative pose of the camera at the second frame with respect to the first frame is then [R(0,1) | t(0,1)] = [R | t].
In this embodiment, as the camera moves, a series of relative poses corresponding to each frame can be obtained in turn.
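The decomposition in step 4 relies on the structure of the essential matrix: E = [t]× R has two equal singular values and one zero singular value, which the SVD step exploits to recover R and t. The rotation and translation below are illustrative values, not from the patent.

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    tx, ty, tz = t
    return np.array([[0.0, -tz,  ty],
                     [ tz, 0.0, -tx],
                     [-ty,  tx, 0.0]])

# Illustrative relative motion: a rotation about z and a small translation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.5, 0.2])

E = skew(t) @ R                              # essential matrix E = [t]x R
s = np.linalg.svd(E, compute_uv=False)       # singular values (s, s, 0)
```

Since R is orthogonal, the singular values of E equal those of [t]×, namely (||t||, ||t||, 0); this is also why the monocular translation is recovered only up to scale.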
As a further improvement of the present invention, performing visual tracking by real-time ORB matching together with parallel local keyframe mapping, to obtain the visual pose, includes:
extracting feature points from the current image frame over a grid with the ORB algorithm and computing their descriptors;
using a constant-velocity motion model to estimate the camera pose corresponding to the current frame, projecting all map points of the previous image frame into the current image frame, performing feature-point matching, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the current-frame pose and current-frame map points with the LM algorithm and the Huber estimator;
according to the updated pose, projecting all map points of the local keyframes into the current image frame and performing feature-point matching; after matching succeeds, assigning all successfully matched map points to the corresponding feature points of the current frame, and again updating the current-frame pose and current-frame map points with the LM algorithm and the Huber estimator.
For this further improvement, as shown in Fig. 2, the specific embodiment of the tracking step for the current image frame (frame I_k) is as follows:
(1) Use the ORB algorithm over a grid (the image is divided into a series of equally sized cells) to extract image feature points region by region and compute descriptors. Grid-based extraction ensures that the extracted feature points are evenly distributed over the image, improving the stability and precision of subsequent tracking;
(2) Use a constant-velocity motion model to estimate the camera pose corresponding to the current frame. Project all map points of the previous image frame I_{k-1} into the current image frame, perform feature-point matching, and assign the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
(3) Use the LM (Levenberg-Marquardt) algorithm and the Huber estimator to update the current-frame pose and current-frame map points;
(4) According to the updated pose, project all map points of the local keyframes (excluding the map points already used in (2)) into the current image frame and perform feature-point matching. After matching succeeds, assign all successfully matched map points to the corresponding feature points of the current frame, and use the LM algorithm and the Huber estimator to again update the current-frame pose [R(0,k) | t(0,k)] and the current-frame map points.
In this embodiment, visual tracking is performed by real-time ORB matching together with parallel local keyframe mapping, thereby obtaining the visual pose. ORB matching and pose estimation run in the tracking thread, while local keyframe maintenance and mapping run in the local keyframe thread. The two threads are processed in parallel, which is efficient enough to realize real-time tracking.
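The constant-velocity motion model of step (2) can be sketched with 4×4 homogeneous pose matrices: the last frame-to-frame motion is assumed to repeat, giving the initial pose guess that projection and matching then refine. The composition convention below is an illustrative assumption, not prescribed by the patent.

```python
import numpy as np

def predict_pose(T_prev2, T_prev1):
    """Constant-velocity prediction: reuse the last inter-frame motion,
    T_k ~= T_{k-1} @ (inv(T_{k-2}) @ T_{k-1})."""
    delta = np.linalg.inv(T_prev2) @ T_prev1   # last frame-to-frame motion
    return T_prev1 @ delta                     # extrapolate one more step

# Illustrative use: two previous poses differing by a 1-unit x-translation,
# so the prediction continues 1 more unit along x.
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 1.0
T2 = predict_pose(T0, T1)
```

The predicted pose only seeds the feature search; the LM/Huber optimization of steps (3) and (4) corrects it from the actual matches.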
As an improvement of the present invention, performing visual tracking by real-time ORB matching together with parallel local keyframe mapping, to obtain the visual pose, can also include:
judging whether a new keyframe is needed according to the elapsed time and/or the number of current-frame map points: if a certain time has passed since the last keyframe was added, or the number of map points in the current frame is below a threshold, adding a new keyframe;
if the current frame is a new keyframe, adding new map points: matching all feature points of the new keyframe that lack map points against all feature points in the local keyframes, and after matching succeeds, obtaining new map points by 3D reconstruction;
performing local bundle adjustment to correct accumulated error, obtaining the optimized pose and map points.
For this improvement, the complete specific embodiment that the present invention provides is as follows:
1) increase new key frame, judge whether to need from time dimension and present frame point map number
Key frame to be strengthened.Exceed after certain time then or the map of present frame when increasing key frame from last time
New key frame is increased when counting less than threshold value;
2) if present frame is new key frame, new point map is increased.By new key frame without ground
All characteristic points of figure point carry out Feature Points Matching with all characteristic points in the key frame of local, mate into
After merit, 3D reconstruct obtains new point map;
3) in order to ensure efficiency and the seriality of tracking followed the tracks of, the quantity of local key frame is controlled,
When key frame quantity is more than threshold value, delete the key frame added the earliest in the key frame of local;
4) locally bundle adjustment (Bundle Adjustment) optimizes, and revises cumulative error.Obtain excellent
Pose after change and point map.
In the present embodiment, steps 1) to 4) can be placed in the local-keyframe thread (thread (4) in the above embodiment) for parallel processing, improving efficiency. Repeating (1) to (4) of the above embodiment together with 1) to 4) achieves continuous tracking.
The present embodiment both guarantees tracking continuity and reduces the number of keyframes that need to be processed, shortening the processing time and improving the efficiency of motion tracking.
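The keyframe policy of steps 1) and 3) above can be sketched as follows. This is a minimal illustration only: the class name and the specific threshold values (window size, map-point count, time interval) are assumptions, since the patent leaves the thresholds unspecified.

```python
class LocalKeyframeWindow:
    """Sketch of the keyframe policy above: add a keyframe when enough
    time has passed or too few map points remain, and bound the local
    window by dropping the earliest-added keyframe."""

    def __init__(self, max_keyframes=10, min_map_points=50, min_interval=0.5):
        self.keyframes = []                 # local keyframes, oldest first
        self.max_keyframes = max_keyframes
        self.min_map_points = min_map_points
        self.min_interval = min_interval    # seconds since last keyframe
        self.last_kf_time = None

    def need_keyframe(self, now, n_map_points):
        # Step 1): enough time elapsed OR too few map points in the frame.
        if self.last_kf_time is None:
            return True
        return (now - self.last_kf_time > self.min_interval
                or n_map_points < self.min_map_points)

    def add_keyframe(self, frame, now):
        self.keyframes.append(frame)
        self.last_kf_time = now
        # Step 3): bound the window for tracking efficiency.
        while len(self.keyframes) > self.max_keyframes:
            self.keyframes.pop(0)           # delete the earliest-added keyframe
```

Running this logic in the local-keyframe thread, as the text describes, keeps the tracking thread free of the bookkeeping above.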
In the present invention, the Kalman fusion of the vision pose and the IMU pose can be implemented in many ways known to those skilled in the art; preferably, it is carried out with reference to the following embodiment.
The flow of the vision/IMU fusion is shown schematically in Fig. 3. For convenience of description, the subscripts w, i, v, c are defined to denote the world coordinate system, the IMU coordinate system, the visual coordinate system and the camera coordinate system, respectively. The definitions of the coordinate systems are shown in Fig. 4.
Step 1: Assume that the inertial measurements contain a slowly varying bias b and white Gaussian noise n; the true angular velocity ω and true acceleration a are then
$\omega = \omega_m - b_\omega - n_\omega, \qquad a = a_m - b_a - n_a$,
where the subscript m denotes a measured value, and the dynamic bias b is modeled as a random process driven by white noise: $\dot b_\omega = n_{b_\omega}$, $\dot b_a = n_{b_a}$.
The filter state includes the position $p_w^i$ of the IMU in the world coordinate system, the velocity $v_w^i$ of the IMU coordinate system relative to the world coordinate system, and the attitude quaternion $q_w^i$. It also includes the gyroscope and accelerometer biases $b_\omega$, $b_a$ and the visual scale factor $\lambda$, as well as the calibrated rotation $q_i^c$ and translation $p_i^c$ between the IMU and the camera. This yields a state vector X of 24 elements,
$X = [\,p_w^{i\,T}\; v_w^{i\,T}\; q_w^{i\,T}\; b_\omega^T\; b_a^T\; \lambda\; p_i^{c\,T}\; q_i^{c\,T}\,]^T$,
as shown in the prediction module of Fig. 5.
Step 2: In the state representation described above, a quaternion is used to describe the attitude. In this case the attitude error and its covariance are represented with an error quaternion, which increases numerical stability and gives a minimal representation; accordingly, an error-state vector of 22 elements is defined. Considering an estimate $\hat x$ and its true value $x$, the error is defined as $\tilde x = x - \hat x$ for all state variables except the quaternions, whose error quaternion is defined as
$\delta q = q \otimes \hat q^{-1} \approx [\,1 \;\; \tfrac{1}{2}\delta\theta^T\,]^T$.
From this, the linearized continuous-time error-state equation can be obtained:
$\dot{\tilde x} = F_c \tilde x + G_c n$,
where $n = [\,n_\omega^T\; n_{b_\omega}^T\; n_a^T\; n_{b_a}^T\,]^T$ is the noise vector. In the present solution, particular attention is paid to the speed of the algorithm; to this end, $F_c$ and $G_c$ are assumed to be constant over the integration interval between two adjacent states, which allows the discretized expression
$F_d = \exp(F_c \Delta t) \approx I_d + F_c \Delta t + \tfrac{1}{2} F_c^2 \Delta t^2$.
Meanwhile, the discrete-time covariance matrix $Q_d$ is obtained by integration:
$Q_d = \int_{\Delta t} F_d(\tau)\, G_c Q_c G_c^T F_d(\tau)^T \, d\tau$.
With the computed $F_d$ and $Q_d$, the state covariance matrix is propagated according to the Kalman filter:
$P_{k+1|k} = F_d P_{k|k} F_d^T + Q_d$.
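The prediction step described by the equations above can be sketched in NumPy. This is an illustrative sketch: the second-order truncation of $\exp(F_c\Delta t)$ matches the constant-$F_c$ assumption in the text, while the zero-order-hold approximation of $Q_d$ is an additional simplification assumed here.

```python
import numpy as np

def propagate_covariance(P, F_c, G_c, Q_c, dt):
    """EKF prediction: discretize the continuous error-state dynamics
    (F_c, G_c assumed constant over dt) and propagate the covariance
    P_{k+1|k} = F_d P F_d^T + Q_d."""
    n = F_c.shape[0]
    # 2nd-order truncation of the matrix exponential exp(F_c * dt)
    F_d = np.eye(n) + F_c * dt + 0.5 * (F_c @ F_c) * dt**2
    # zero-order-hold approximation of the Q_d integral
    Q_d = F_d @ G_c @ Q_c @ G_c.T @ F_d.T * dt
    return F_d @ P @ F_d.T + Q_d, F_d
```

For the full 22-element error state, `F_c` and `G_c` would be assembled from the IMU kinematics; here they are left generic.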
Step 3: For the position measurement of the camera, the pose $[R_{(0,k)} \mid t_{(0,k)}]$ obtained from the monocular visual tracking yields the position vector $p_v^c$ and the quaternion representation $q_v^c$ of the camera pose, from which the corresponding measured position is obtained. This gives the following measurement model:
$z_p = p_v^c = C(q_v^w)\,\big(p_w^i + C(q_w^i)^T p_i^c\big)\,\lambda + n_p$,
where $q_w^i$ is the attitude of the IMU in the world coordinate system and $q_v^w$ is the rotation of the visual coordinate system relative to the world coordinate system.
Step 4: A position measurement error model $\tilde z_p = H_p \tilde x$ and a rotation measurement error model $\tilde z_q = H_q \tilde x$ are defined, where $H_p$ and $H_q$ are the measurement matrices of the error states $\tilde z_p$ and $\tilde z_q$, respectively. Finally, the full measurement matrix is stacked as
$H = [\,H_p^T \;\; H_q^T\,]^T$.
Step 5: Once the measurement matrix H is obtained, the update is performed according to the Kalman filter steps, as shown in the update module of Fig. 5:
Compute the residual vector: $\tilde z = z - \hat z$;
Compute the innovation: $S = H P H^T + R$;
Compute the Kalman gain: $K = P H^T S^{-1}$;
Compute the correction: $\hat{\tilde x} = K \tilde z$. From the correction $\hat{\tilde x}$ the update of the state X is computed; the error-state quaternion is updated accordingly, and the covariance is updated as
$P_{k+1|k+1} = (I_d - KH)\, P_{k+1|k}\, (I_d - KH)^T + K R K^T$.
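The update steps of Step 5 can be sketched as follows. The Joseph-form covariance update matches the final equation above; the error-quaternion handling of the full filter is omitted, and the function signature is an assumption for illustration.

```python
import numpy as np

def kalman_update(P, x, H, z, z_pred, R):
    """Kalman measurement update: residual, innovation, gain,
    correction, and Joseph-form covariance update."""
    r = z - z_pred                              # residual vector
    S = H @ P @ H.T + R                         # innovation
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    dx = K @ r                                  # correction of the state
    I_KH = np.eye(P.shape[0]) - K @ H
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T     # Joseph form (numerically robust)
    return x + dx, P_new
```

The Joseph form keeps `P_new` symmetric and positive semi-definite even with a suboptimal gain, which is why it is preferred over the shorter `(I - KH) P` form.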
Through the above monocular tracking and IMU fusion, a stable attitude output of the mobile terminal is obtained, and stable motion tracking is thereby achieved. The overall flow block diagram of the technical solution of the embodiment of the present invention is shown in Fig. 6.
The above embodiment is only one example of the Kalman fusion of the vision pose and the IMU pose of the present invention; besides this embodiment, other methods known to those skilled in the art may also be used and can likewise achieve the technical effect of the present invention.
In each method embodiment of the present invention, the numbering of the steps does not limit their order; for those of ordinary skill in the art, changing the order of the steps without creative effort also falls within the protection scope of the present invention.
On the other hand, corresponding to the above method, the present invention also provides a stable motion tracking device fusing a smartphone monocular camera and an IMU, as shown in Fig. 7, comprising:
a map initialization module 11, configured to process the acquired images with the ORB algorithm and then perform 3D reconstruction to obtain the initial map points, completing the map initialization;
a visual tracking module 12, configured to perform visual tracking by ORB-based real-time matching with parallel local-keyframe mapping, obtaining the vision pose;
an IMU pose calculation module 13, configured to acquire the acceleration and angular velocity values produced by the IMU in three-dimensional space, and to integrate the acceleration and angular velocity values to obtain the IMU pose prediction;
a fusion module 14, configured to perform Kalman fusion of the vision pose and the IMU pose prediction, and to carry out motion tracking according to the fused pose information.
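The integration performed by the IMU pose calculation module 13 can be sketched as one strapdown prediction step. The quaternion convention ([x, y, z, w], body-to-world), the first-order quaternion update, and all names are illustrative assumptions, not fixed by the present text.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2,
                     w1*w2 - x1*x2 - y1*y2 - z1*z2])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion [x, y, z, w]."""
    x, y, z, w = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def imu_predict(p, v, q, a_m, w_m, b_a, b_w, g, dt):
    """One strapdown step: bias-correct the measurements, rotate the
    acceleration into the world frame, compensate gravity, and
    integrate position, velocity and attitude."""
    a = a_m - b_a                       # bias-corrected acceleration (body)
    w = w_m - b_w                       # bias-corrected angular rate (body)
    a_world = quat_to_rot(q) @ a + g    # gravity-compensated acceleration
    p = p + v * dt + 0.5 * a_world * dt**2
    v = v + a_world * dt
    # first-order quaternion update: q <- q * [0.5*w*dt, 1]
    dq = np.concatenate([0.5 * w * dt, [1.0]])
    q = quat_mul(q, dq)
    return p, v, q / np.linalg.norm(q)
```

In the fusion module, the pose predicted this way serves as the EKF prior that the vision pose then corrects.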
Compared with the prior art, the present invention can obtain a more stable motion tracking and has the feature of realizing real-time online estimation of the scale.
As an improvement of the present invention, the map initialization module 11 is further configured to:
extract feature points from the first acquired image frame with the ORB algorithm and compute their descriptors, record the first frame as a keyframe, and mark the absolute pose of the camera;
after the camera has translated a certain distance, extract feature points from the newly acquired image with the ORB algorithm and compute their descriptors, match them against the feature points of the first frame, record the second frame as a keyframe, and compute the relative pose of the camera at the second frame with respect to the first frame;
perform 3D reconstruction on the successfully matched feature point set to obtain the initial map points.
In the present invention, the ORB algorithm is used to extract features and directly match them for pose estimation. ORB is an algorithmic improvement combining FAST corner detection with the BRIEF feature descriptor, balancing efficiency and accuracy in monocular visual tracking.
As a further improvement of the present invention, computing the relative pose of the camera at the second frame with respect to the first frame includes:
computing the fundamental matrix between the two frames from the set of corresponding matched feature points on the first and second frame images;
computing the essential matrix from the fundamental matrix and the camera intrinsic parameters;
applying singular value decomposition to the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
In the present invention, as the camera moves, a series of relative poses corresponding to each image frame can thus be obtained in turn.
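The SVD step above can be sketched as the standard decomposition of the essential matrix. This is a sketch, not the patent's exact procedure: choosing among the four candidate (R, t) combinations by triangulating points in front of both cameras is omitted.

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix by SVD into the two candidate
    rotations and the translation direction (up to scale and sign)."""
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (det = +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt          # first candidate rotation
    R2 = U @ W.T @ Vt        # second candidate rotation
    t = U[:, 2]              # translation direction (left null space of E)
    return R1, R2, t
```

In practice the same pipeline (fundamental matrix from matches, $E = K^T F K$, then decomposition with the cheirality check) is available via OpenCV's `cv2.findFundamentalMat` and `cv2.recoverPose`.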
As a further improvement of the present invention, the visual tracking module 12 is further configured to:
extract feature points from the current image frame using a gridded ORB extraction and compute their descriptors;
using a constant-velocity motion model, estimate the camera pose corresponding to the current frame, project all map points of the previous frame into the current image frame, perform feature point matching, and assign the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
update the current-frame pose and the current-frame map points using the LM algorithm with a Huber estimator;
according to the updated pose, project all map points of the local keyframes into the current image frame and perform feature point matching; after matching succeeds, assign all successfully matched map points to the corresponding feature points of the current frame, and again use the LM algorithm with the Huber estimator to update the current-frame pose and the current-frame map points.
In the present invention, visual tracking is performed by ORB-based real-time matching with parallel local-keyframe mapping, thereby obtaining the vision pose. The ORB real-time matching and pose estimation form the tracking thread, while the maintenance of the local keyframes and the local mapping form the local-keyframe thread. In the present invention the tracking thread and the local-keyframe thread run in parallel, which is efficient and enables real-time tracking.
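The constant-velocity prediction and the map-point projection used for guided matching can be sketched as follows. The 4x4 world-to-camera pose representation and the function names are assumptions for illustration.

```python
import numpy as np

def constant_velocity_predict(T_prev, T_prev2):
    """Constant-velocity motion model: predict the current camera pose
    by replaying the last inter-frame motion. Poses are 4x4 homogeneous
    world-to-camera transforms of the previous two frames."""
    velocity = T_prev @ np.linalg.inv(T_prev2)   # last frame-to-frame motion
    return velocity @ T_prev                     # predicted current pose

def project(T_cw, K, X_w):
    """Project a 3D map point X_w (world frame) into the image using the
    predicted pose T_cw and intrinsics K, to seed feature matching."""
    X_c = T_cw[:3, :3] @ X_w + T_cw[:3, 3]       # point in camera frame
    u = K @ (X_c / X_c[2])                       # perspective division
    return u[:2]
```

Each projected location restricts the descriptor search to a small window, which is what makes the per-frame ORB matching real-time.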
As an improvement of the present invention, the visual tracking module 12 is further configured to:
judge whether a new keyframe is needed according to the elapsed time interval and/or the number of map points in the current frame: if more than a certain time has passed since the last keyframe was added, or the number of map points in the current frame is below a threshold, add a new keyframe;
judge whether the current frame is a new keyframe, and if so, add new map points: match all feature points of the new keyframe without associated map points against all feature points in the local keyframes, and after matching succeeds, obtain new map points by 3D reconstruction;
perform local bundle adjustment to correct the accumulated error, obtaining the optimized poses and map points.
In the present invention, this both guarantees tracking continuity and reduces the number of keyframes that need to be processed, shortening the processing time and improving the efficiency of motion tracking.
The above are preferred embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can also be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A stable motion tracking method fusing a smartphone monocular camera and an IMU, characterized by comprising:
processing the acquired images with the ORB algorithm and then performing 3D reconstruction to obtain the initial map points, completing the map initialization;
performing visual tracking by ORB-based real-time matching with parallel local-keyframe mapping, obtaining the vision pose;
acquiring the acceleration and angular velocity values produced by the IMU in three-dimensional space, and integrating the acceleration and angular velocity values to obtain the IMU pose prediction;
performing Kalman fusion of the vision pose and the IMU pose prediction, and carrying out motion tracking according to the fused pose information.
2. The stable motion tracking method fusing a smartphone monocular camera and an IMU according to claim 1, characterized in that processing the acquired images with the ORB algorithm and then performing 3D reconstruction to obtain the initial map points, completing the map initialization, comprises:
extracting feature points from the first acquired image frame with the ORB algorithm and computing their descriptors, recording the first frame as a keyframe, and marking the absolute pose of the camera;
after the camera has translated a certain distance, extracting feature points from the newly acquired image with the ORB algorithm and computing their descriptors, matching them against the feature points of the first frame, recording the second frame as a keyframe, and computing the relative pose of the camera at the second frame with respect to the first frame;
performing 3D reconstruction on the successfully matched feature point set to obtain the initial map points.
3. The stable motion tracking method fusing a smartphone monocular camera and an IMU according to claim 2, characterized in that computing the relative pose of the camera at the second frame with respect to the first frame comprises:
computing the fundamental matrix between the two frames from the set of corresponding matched feature points on the first and second frame images;
computing the essential matrix from the fundamental matrix and the camera intrinsic parameters;
applying singular value decomposition to the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
4. The stable motion tracking method fusing a smartphone monocular camera and an IMU according to any one of claims 1-3, characterized in that performing visual tracking by ORB-based real-time matching with parallel local-keyframe mapping, obtaining the vision pose, comprises:
extracting feature points from the current image frame using a gridded ORB extraction and computing their descriptors;
using a constant-velocity motion model, estimating the camera pose corresponding to the current frame, projecting all map points of the previous frame into the current image frame, performing feature point matching, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the current-frame pose and the current-frame map points using the LM algorithm with a Huber estimator;
according to the updated pose, projecting all map points of the local keyframes into the current image frame and performing feature point matching; after matching succeeds, assigning all successfully matched map points to the corresponding feature points of the current frame, and again using the LM algorithm with the Huber estimator to update the current-frame pose and the current-frame map points.
5. The stable motion tracking method fusing a smartphone monocular camera and an IMU according to claim 4, characterized in that performing visual tracking by ORB-based real-time matching with parallel local-keyframe mapping, obtaining the vision pose, further comprises:
judging whether a new keyframe is needed according to the elapsed time interval and/or the number of map points in the current frame: if more than a certain time has passed since the last keyframe was added, or the number of map points in the current frame is below a threshold, adding a new keyframe;
judging whether the current frame is a new keyframe, and if so, adding new map points: matching all feature points of the new keyframe without associated map points against all feature points in the local keyframes, and after matching succeeds, obtaining new map points by 3D reconstruction;
performing local bundle adjustment to correct the accumulated error, obtaining the optimized poses and map points.
6. A stable motion tracking device fusing a smartphone monocular camera and an IMU, characterized by comprising:
a map initialization module, configured to process the acquired images with the ORB algorithm and then perform 3D reconstruction to obtain the initial map points, completing the map initialization;
a visual tracking module, configured to perform visual tracking by ORB-based real-time matching with parallel local-keyframe mapping, obtaining the vision pose;
an IMU pose calculation module, configured to acquire the acceleration and angular velocity values produced by the IMU in three-dimensional space, and to integrate the acceleration and angular velocity values to obtain the IMU pose prediction;
a fusion module, configured to perform Kalman fusion of the vision pose and the IMU pose prediction, and to carry out motion tracking according to the fused pose information.
7. The stable motion tracking device fusing a smartphone monocular camera and an IMU according to claim 6, characterized in that the map initialization module is further configured to:
extract feature points from the first acquired image frame with the ORB algorithm and compute their descriptors, record the first frame as a keyframe, and mark the absolute pose of the camera;
after the camera has translated a certain distance, extract feature points from the newly acquired image with the ORB algorithm and compute their descriptors, match them against the feature points of the first frame, record the second frame as a keyframe, and compute the relative pose of the camera at the second frame with respect to the first frame;
perform 3D reconstruction on the successfully matched feature point set to obtain the initial map points.
8. The stable motion tracking device fusing a smartphone monocular camera and an IMU according to claim 7, characterized in that computing the relative pose of the camera at the second frame with respect to the first frame comprises:
computing the fundamental matrix between the two frames from the set of corresponding matched feature points on the first and second frame images;
computing the essential matrix from the fundamental matrix and the camera intrinsic parameters;
applying singular value decomposition to the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
9. The stable motion tracking device fusing a smartphone monocular camera and an IMU according to any one of claims 6-8, characterized in that the visual tracking module is further configured to:
extract feature points from the current image frame using a gridded ORB extraction and compute their descriptors;
using a constant-velocity motion model, estimate the camera pose corresponding to the current frame, project all map points of the previous frame into the current image frame, perform feature point matching, and assign the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
update the current-frame pose and the current-frame map points using the LM algorithm with a Huber estimator;
according to the updated pose, project all map points of the local keyframes into the current image frame and perform feature point matching; after matching succeeds, assign all successfully matched map points to the corresponding feature points of the current frame, and again use the LM algorithm with the Huber estimator to update the current-frame pose and the current-frame map points.
10. The stable motion tracking device fusing a smartphone monocular camera and an IMU according to claim 9, characterized in that the visual tracking module is further configured to:
judge whether a new keyframe is needed according to the elapsed time interval and/or the number of map points in the current frame: if more than a certain time has passed since the last keyframe was added, or the number of map points in the current frame is below a threshold, add a new keyframe;
judge whether the current frame is a new keyframe, and if so, add new map points: match all feature points of the new keyframe without associated map points against all feature points in the local keyframes, and after matching succeeds, obtain new map points by 3D reconstruction;
perform local bundle adjustment to correct the accumulated error, obtaining the optimized poses and map points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610346493.3A CN105953796A (en) | 2016-05-23 | 2016-05-23 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105953796A true CN105953796A (en) | 2016-09-21 |
Family
ID=56909351
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610346493.3A Pending CN105953796A (en) | 2016-05-23 | 2016-05-23 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105953796A (en) |
Cited By (75)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548486A (en) * | 2016-11-01 | 2017-03-29 | 浙江大学 | A kind of unmanned vehicle location tracking method based on sparse visual signature map |
CN106546238A (en) * | 2016-10-26 | 2017-03-29 | 北京小鸟看看科技有限公司 | Wearable device and the method that user's displacement is determined in wearable device |
CN106556391A (en) * | 2016-11-25 | 2017-04-05 | 上海航天控制技术研究所 | A kind of fast vision measuring method based on multi-core DSP |
CN106570820A (en) * | 2016-10-18 | 2017-04-19 | 浙江工业大学 | Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV) |
CN106595570A (en) * | 2016-12-16 | 2017-04-26 | 杭州奥腾电子股份有限公司 | Vehicle single camera and six-axis sensor combination range finding system and range finding method thereof |
CN106767785A (en) * | 2016-12-23 | 2017-05-31 | 成都通甲优博科技有限责任公司 | The air navigation aid and device of a kind of double loop unmanned plane |
CN106885574A (en) * | 2017-02-15 | 2017-06-23 | 北京大学深圳研究生院 | A kind of monocular vision robot synchronous superposition method based on weight tracking strategy |
CN107194968A (en) * | 2017-05-18 | 2017-09-22 | 腾讯科技(上海)有限公司 | Recognition and tracking method, device, intelligent terminal and the readable storage medium storing program for executing of image |
CN107246868A (en) * | 2017-07-26 | 2017-10-13 | 上海舵敏智能科技有限公司 | A kind of collaborative navigation alignment system and navigation locating method |
CN107687850A (en) * | 2017-07-26 | 2018-02-13 | 哈尔滨工业大学深圳研究生院 | A kind of unmanned vehicle position and orientation estimation method of view-based access control model and Inertial Measurement Unit |
CN107748569A (en) * | 2017-09-04 | 2018-03-02 | 中国兵器工业计算机应用技术研究所 | Motion control method, device and UAS for unmanned plane |
CN107909614A (en) * | 2017-11-13 | 2018-04-13 | 中国矿业大学 | Crusing robot localization method under a kind of GPS failures environment |
CN108022302A (en) * | 2017-12-01 | 2018-05-11 | 深圳市天界幻境科技有限公司 | A kind of sterically defined AR 3 d display devices of Inside-Out |
CN106920279B (en) * | 2017-03-07 | 2018-06-19 | 百度在线网络技术(北京)有限公司 | Three-dimensional map construction method and device |
CN108225345A (en) * | 2016-12-22 | 2018-06-29 | 乐视汽车(北京)有限公司 | The pose of movable equipment determines method, environmental modeling method and device |
CN108364319A (en) * | 2018-02-12 | 2018-08-03 | 腾讯科技(深圳)有限公司 | Scale determines method, apparatus, storage medium and equipment |
CN108648235A (en) * | 2018-04-27 | 2018-10-12 | 腾讯科技(深圳)有限公司 | Method for relocating, device and the storage medium of camera posture tracing process |
CN108759826A (en) * | 2018-04-12 | 2018-11-06 | 浙江工业大学 | A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane |
CN108829116A (en) * | 2018-10-09 | 2018-11-16 | 上海岚豹智能科技有限公司 | Barrier-avoiding method and equipment based on monocular cam |
CN108871311A (en) * | 2018-05-31 | 2018-11-23 | 北京字节跳动网络技术有限公司 | Pose determines method and apparatus |
CN109079799A (en) * | 2018-10-23 | 2018-12-25 | 哈尔滨工业大学(深圳) | It is a kind of based on bionical robot perception control system and control method |
CN109089100A (en) * | 2018-08-13 | 2018-12-25 | 西安理工大学 | A kind of synthetic method of binocular tri-dimensional video |
CN109307508A (en) * | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | A kind of panorama inertial navigation SLAM method based on more key frames |
CN109376785A (en) * | 2018-10-31 | 2019-02-22 | 东南大学 | Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision |
CN109631894A (en) * | 2018-12-11 | 2019-04-16 | 智灵飞(北京)科技有限公司 | A kind of monocular vision inertia close coupling method based on sliding window |
CN109671120A (en) * | 2018-11-08 | 2019-04-23 | 南京华捷艾米软件科技有限公司 | A kind of monocular SLAM initial method and system based on wheel type encoder |
CN109712170A (en) * | 2018-12-27 | 2019-05-03 | 广东省智能制造研究所 | Environmental objects method for tracing, device, computer equipment and storage medium |
CN109727287A (en) * | 2018-12-27 | 2019-05-07 | 江南大学 | A kind of improvement register method and its system suitable for augmented reality |
CN109739079A (en) * | 2018-12-25 | 2019-05-10 | 广东工业大学 | A method of improving VSLAM system accuracy |
CN109752717A (en) * | 2017-11-07 | 2019-05-14 | 现代自动车株式会社 | Device and method for the sensing data in associated vehicle |
CN109887029A (en) * | 2019-01-17 | 2019-06-14 | 江苏大学 | A kind of monocular vision mileage measurement method based on color of image feature |
CN109900294A (en) * | 2019-05-13 | 2019-06-18 | 奥特酷智能科技(南京)有限公司 | Vision inertia odometer based on hardware accelerator |
CN109978943A (en) * | 2017-12-28 | 2019-07-05 | 深圳市优必选科技有限公司 | Move camera lens working method and system, the device with store function |
CN110006423A (en) * | 2019-04-04 | 2019-07-12 | 北京理工大学 | A kind of adaptive inertial navigation and visual combination air navigation aid |
CN110009739A (en) * | 2019-01-29 | 2019-07-12 | 浙江省北大信息技术高等研究院 | The extraction and coding method of the motion feature of the digital retina of mobile camera |
CN110095752A (en) * | 2019-05-07 | 2019-08-06 | 百度在线网络技术(北京)有限公司 | Localization method, device, equipment and medium |
CN110140100A (en) * | 2017-01-02 | 2019-08-16 | 摩致实验室有限公司 | Three-dimensional enhanced reality object user's interface function |
CN110147164A (en) * | 2019-05-22 | 2019-08-20 | 京东方科技集团股份有限公司 | Head movement tracking, equipment, system and storage medium |
WO2019157925A1 (en) * | 2018-02-13 | 2019-08-22 | 视辰信息科技(上海)有限公司 | Visual-inertial odometry implementation method and system |
CN110196047A (en) * | 2019-06-20 | 2019-09-03 | 东北大学 | Robot autonomous localization method of closing a position based on TOF depth camera and IMU |
CN110211239A (en) * | 2019-05-30 | 2019-09-06 | 杭州远传新业科技有限公司 | Augmented reality method, apparatus, equipment and medium based on unmarked identification |
CN110243381A (en) * | 2019-07-11 | 2019-09-17 | 北京理工大学 | A kind of Lu Kong robot collaborative perception monitoring method |
CN110319772A (en) * | 2019-07-12 | 2019-10-11 | 上海电力大学 | Vision large span distance measuring method based on unmanned plane |
CN110361005A (en) * | 2019-06-26 | 2019-10-22 | 深圳前海达闼云端智能科技有限公司 | Positioning method, positioning device, readable storage medium and electronic equipment |
CN110490900A (en) * | 2019-07-12 | 2019-11-22 | 中国科学技术大学 | Binocular visual positioning method and system under dynamic environment |
CN110520694A (en) * | 2017-10-31 | 2019-11-29 | 深圳市大疆创新科技有限公司 | A kind of visual odometry and its implementation |
CN110517319A (en) * | 2017-07-07 | 2019-11-29 | 腾讯科技(深圳)有限公司 | A kind of method and relevant apparatus that camera posture information is determining |
WO2020024182A1 (en) * | 2018-08-01 | 2020-02-06 | 深圳市大疆创新科技有限公司 | Parameter processing method and apparatus, camera device and aircraft |
CN110874100A (en) * | 2018-08-13 | 2020-03-10 | 北京京东尚科信息技术有限公司 | System and method for autonomous navigation using visual sparse maps |
CN111052183A (en) * | 2017-09-04 | 2020-04-21 | 苏黎世大学 | Visual inertial odometer using event camera |
CN111076733A (en) * | 2019-12-10 | 2020-04-28 | 亿嘉和科技股份有限公司 | Robot indoor map building method and system based on vision and laser slam |
CN111307165A (en) * | 2020-03-06 | 2020-06-19 | 新石器慧通(北京)科技有限公司 | Vehicle positioning method and system and unmanned vehicle |
CN111462179A (en) * | 2020-03-26 | 2020-07-28 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method and device and electronic equipment |
CN111652933A (en) * | 2020-05-06 | 2020-09-11 | Oppo广东移动通信有限公司 | Monocular camera-based repositioning method and device, storage medium and electronic equipment |
CN111780764A (en) * | 2020-06-30 | 2020-10-16 | 杭州海康机器人技术有限公司 | Visual positioning method and device based on visual map |
CN111879306A (en) * | 2020-06-17 | 2020-11-03 | 杭州易现先进科技有限公司 | Visual inertial positioning method, device, system and computer equipment |
CN112074705A (en) * | 2017-12-18 | 2020-12-11 | Alt有限责任公司 | Method and system for optical inertial tracking of moving object |
CN112393721A (en) * | 2020-09-30 | 2021-02-23 | 苏州大学应用技术学院 | Camera pose estimation method |
CN112489176A (en) * | 2020-11-26 | 2021-03-12 | 香港理工大学深圳研究院 | Tightly-coupled graph building method fusing ESKF, g2o and point cloud matching |
CN112581514A (en) * | 2019-09-30 | 2021-03-30 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium |
CN112639883A (en) * | 2020-03-17 | 2021-04-09 | 华为技术有限公司 | Relative attitude calibration method and related device |
CN112907742A (en) * | 2021-02-18 | 2021-06-04 | 湖南国科微电子股份有限公司 | Visual synchronous positioning and mapping method, device, equipment and medium |
CN112936269A (en) * | 2021-02-04 | 2021-06-11 | 珠海市一微半导体有限公司 | Robot control method based on intelligent terminal |
CN113091738A (en) * | 2021-04-09 | 2021-07-09 | 安徽工程大学 | Mobile robot map construction method based on visual inertial navigation fusion and related equipment |
CN113139456A (en) * | 2018-02-05 | 2021-07-20 | 浙江商汤科技开发有限公司 | Electronic equipment state tracking method and device, electronic equipment and control system |
CN113298692A (en) * | 2021-05-21 | 2021-08-24 | 北京索为云网科技有限公司 | Terminal pose tracking method, AR rendering method, terminal pose tracking device and storage medium |
CN113409368A (en) * | 2020-03-16 | 2021-09-17 | 北京京东乾石科技有限公司 | Drawing method and device, computer readable storage medium and electronic equipment |
CN113447014A (en) * | 2021-08-30 | 2021-09-28 | 深圳市大道智创科技有限公司 | Indoor mobile robot, mapping method, positioning method, and mapping positioning device |
CN113632135A (en) * | 2019-04-30 | 2021-11-09 | 三星电子株式会社 | System and method for low latency, high performance pose fusion |
CN114494825A (en) * | 2021-12-31 | 2022-05-13 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device |
CN114663822A (en) * | 2022-05-18 | 2022-06-24 | 广州市影擎电子科技有限公司 | Servo motion trajectory generation method and device |
CN115115707A (en) * | 2022-06-30 | 2022-09-27 | 小米汽车科技有限公司 | Vehicle drowning detection method, vehicle, computer readable storage medium and chip |
CN116645400A (en) * | 2023-07-21 | 2023-08-25 | 江西红声技术有限公司 | Vision and inertia mixed pose tracking method, system, helmet and storage medium |
CN117392518A (en) * | 2023-12-13 | 2024-01-12 | 南京耀宇视芯科技有限公司 | Low-power-consumption visual positioning and mapping chip and method thereof |
CN111052183B (en) * | 2017-09-04 | 2024-05-03 | 苏黎世大学 | Vision inertial odometer using event camera |
- 2016-05-23: Application CN201610346493.3A filed (CN); published as CN105953796A; legal status: active, Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090081968A (en) * | 2008-01-25 | 2009-07-29 | 성균관대학교산학협력단 | System and method for simultaneous recognition and pose estimation of object using in-situ monitoring |
CN102435188A (en) * | 2011-09-15 | 2012-05-02 | 南京航空航天大学 | Monocular vision/inertia autonomous navigation method for indoor environment |
CN102768042A (en) * | 2012-07-11 | 2012-11-07 | 清华大学 | Visual-inertial combined navigation method |
CN102967297A (en) * | 2012-11-23 | 2013-03-13 | 浙江大学 | Space-movable visual sensor array system and image information fusion method |
CN103646391A (en) * | 2013-09-30 | 2014-03-19 | 浙江大学 | Real-time camera tracking method for dynamically-changed scene |
CN103954283A (en) * | 2014-04-01 | 2014-07-30 | 西北工业大学 | Scene matching/visual odometry-based inertial integrated navigation method |
CN104501814A (en) * | 2014-12-12 | 2015-04-08 | 浙江大学 | Attitude and position estimation method based on vision and inertia information |
CN104680522A (en) * | 2015-02-09 | 2015-06-03 | 浙江大学 | Visual positioning method based on synchronous working of front and back cameras of smart phone |
Non-Patent Citations (2)
Title |
---|
李仁厚: "Introduction to Autonomous Mobile Robots, 2nd Edition", 31 May 2013, Xi'an Jiaotong University Press *
邹建成, 牛少彰: "Mathematics and Its Applications in Image Processing", 31 July 2015 *
Cited By (117)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106570820B (en) * | 2016-10-18 | 2019-12-03 | 浙江工业大学 | A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone |
CN106570820A (en) * | 2016-10-18 | 2017-04-19 | 浙江工业大学 | Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV) |
CN106546238A (en) * | 2016-10-26 | 2017-03-29 | 北京小鸟看看科技有限公司 | Wearable device and method for determining user displacement in wearable device |
WO2018077176A1 (en) * | 2016-10-26 | 2018-05-03 | 北京小鸟看看科技有限公司 | Wearable device and method for determining user displacement in wearable device |
CN106548486B (en) * | 2016-11-01 | 2024-02-27 | 浙江大学 | Unmanned vehicle position tracking method based on sparse visual feature map |
CN106548486A (en) * | 2016-11-01 | 2017-03-29 | 浙江大学 | Unmanned vehicle position tracking method based on sparse visual feature map |
CN106556391A (en) * | 2016-11-25 | 2017-04-05 | 上海航天控制技术研究所 | Fast vision measurement method based on multi-core DSP |
CN106595570A (en) * | 2016-12-16 | 2017-04-26 | 杭州奥腾电子股份有限公司 | Vehicle single camera and six-axis sensor combination range finding system and range finding method thereof |
CN108225345A (en) * | 2016-12-22 | 2018-06-29 | 乐视汽车(北京)有限公司 | Pose determination method for movable equipment, environment modeling method and device |
CN106767785B (en) * | 2016-12-23 | 2020-04-07 | 成都通甲优博科技有限责任公司 | Navigation method and device of double-loop unmanned aerial vehicle |
CN106767785A (en) * | 2016-12-23 | 2017-05-31 | 成都通甲优博科技有限责任公司 | Navigation method and device of double-loop unmanned aerial vehicle |
CN110140100B (en) * | 2017-01-02 | 2020-02-28 | 摩致实验室有限公司 | Three-dimensional augmented reality object user interface functionality |
CN110140100A (en) * | 2017-01-02 | 2019-08-16 | 摩致实验室有限公司 | Three-dimensional augmented reality object user interface functionality |
CN106885574B (en) * | 2017-02-15 | 2020-02-07 | 北京大学深圳研究生院 | Monocular vision robot synchronous positioning and map construction method based on re-tracking strategy |
CN106885574A (en) * | 2017-02-15 | 2017-06-23 | 北京大学深圳研究生院 | Monocular vision robot synchronous positioning and map construction method based on re-tracking strategy |
CN106920279B (en) * | 2017-03-07 | 2018-06-19 | 百度在线网络技术(北京)有限公司 | Three-dimensional map construction method and device |
CN107194968B (en) * | 2017-05-18 | 2024-01-16 | 腾讯科技(上海)有限公司 | Image identification tracking method and device, intelligent terminal and readable storage medium |
CN107194968A (en) * | 2017-05-18 | 2017-09-22 | 腾讯科技(上海)有限公司 | Image recognition and tracking method and device, intelligent terminal and readable storage medium |
CN110517319A (en) * | 2017-07-07 | 2019-11-29 | 腾讯科技(深圳)有限公司 | Method and related apparatus for determining camera pose information |
CN107246868B (en) * | 2017-07-26 | 2021-11-02 | 上海舵敏智能科技有限公司 | Collaborative navigation positioning system and navigation positioning method |
CN107687850A (en) * | 2017-07-26 | 2018-02-13 | 哈尔滨工业大学深圳研究生院 | Unmanned aerial vehicle pose estimation method based on vision and inertial measurement unit |
CN107687850B (en) * | 2017-07-26 | 2021-04-23 | 哈尔滨工业大学深圳研究生院 | Unmanned aerial vehicle pose estimation method based on vision and inertia measurement unit |
CN107246868A (en) * | 2017-07-26 | 2017-10-13 | 上海舵敏智能科技有限公司 | Collaborative navigation positioning system and navigation positioning method |
CN111052183B (en) * | 2017-09-04 | 2024-05-03 | 苏黎世大学 | Vision inertial odometer using event camera |
CN111052183A (en) * | 2017-09-04 | 2020-04-21 | 苏黎世大学 | Visual inertial odometer using event camera |
CN107748569A (en) * | 2017-09-04 | 2018-03-02 | 中国兵器工业计算机应用技术研究所 | Motion control method and device for unmanned aerial vehicle, and unmanned aerial vehicle system |
CN107748569B (en) * | 2017-09-04 | 2021-02-19 | 中国兵器工业计算机应用技术研究所 | Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system |
CN110520694A (en) * | 2017-10-31 | 2019-11-29 | 深圳市大疆创新科技有限公司 | Visual odometry and implementation method thereof |
CN109752717A (en) * | 2017-11-07 | 2019-05-14 | 现代自动车株式会社 | Device and method for the sensing data in associated vehicle |
CN109752717B (en) * | 2017-11-07 | 2023-10-17 | 现代自动车株式会社 | Apparatus and method for correlating sensor data in a vehicle |
CN107909614B (en) * | 2017-11-13 | 2021-02-26 | 中国矿业大学 | Positioning method of inspection robot in GPS failure environment |
CN107909614A (en) * | 2017-11-13 | 2018-04-13 | 中国矿业大学 | Positioning method of inspection robot in GPS failure environment |
CN108022302A (en) * | 2017-12-01 | 2018-05-11 | 深圳市天界幻境科技有限公司 | AR stereoscopic display device with Inside-Out spatial positioning |
CN108022302B (en) * | 2017-12-01 | 2021-06-29 | 深圳市天界幻境科技有限公司 | AR stereoscopic display device with Inside-Out spatial positioning |
CN112074705A (en) * | 2017-12-18 | 2020-12-11 | Alt有限责任公司 | Method and system for optical inertial tracking of moving object |
CN109978943A (en) * | 2017-12-28 | 2019-07-05 | 深圳市优必选科技有限公司 | Working method and system of moving lens, and device with storage function |
CN109978943B (en) * | 2017-12-28 | 2021-06-04 | 深圳市优必选科技有限公司 | Working method and system of moving lens and device with storage function |
CN113139456A (en) * | 2018-02-05 | 2021-07-20 | 浙江商汤科技开发有限公司 | Electronic equipment state tracking method and device, electronic equipment and control system |
CN108364319A (en) * | 2018-02-12 | 2018-08-03 | 腾讯科技(深圳)有限公司 | Scale determination method and apparatus, storage medium and device |
CN108364319B (en) * | 2018-02-12 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Dimension determination method and device, storage medium and equipment |
WO2019157925A1 (en) * | 2018-02-13 | 2019-08-22 | 视辰信息科技(上海)有限公司 | Visual-inertial odometry implementation method and system |
CN108759826A (en) * | 2018-04-12 | 2018-11-06 | 浙江工业大学 | Unmanned aerial vehicle motion tracking method based on multi-sensing parameter fusion of mobile phone and unmanned aerial vehicle |
CN108759826B (en) * | 2018-04-12 | 2020-10-27 | 浙江工业大学 | Unmanned aerial vehicle motion tracking method based on multi-sensing parameter fusion of mobile phone and unmanned aerial vehicle |
US11205282B2 (en) | 2018-04-27 | 2021-12-21 | Tencent Technology (Shenzhen) Company Limited | Relocalization method and apparatus in camera pose tracking process and storage medium |
CN108648235B (en) * | 2018-04-27 | 2022-05-17 | 腾讯科技(深圳)有限公司 | Repositioning method and device for camera attitude tracking process and storage medium |
CN108648235A (en) * | 2018-04-27 | 2018-10-12 | 腾讯科技(深圳)有限公司 | Relocalization method and apparatus in camera pose tracking process and storage medium |
CN108871311B (en) * | 2018-05-31 | 2021-01-19 | 北京字节跳动网络技术有限公司 | Pose determination method and device |
CN108871311A (en) * | 2018-05-31 | 2018-11-23 | 北京字节跳动网络技术有限公司 | Pose determination method and apparatus |
WO2020024182A1 (en) * | 2018-08-01 | 2020-02-06 | 深圳市大疆创新科技有限公司 | Parameter processing method and apparatus, camera device and aircraft |
CN109089100A (en) * | 2018-08-13 | 2018-12-25 | 西安理工大学 | Method for synthesizing binocular stereo video |
CN110874100A (en) * | 2018-08-13 | 2020-03-10 | 北京京东尚科信息技术有限公司 | System and method for autonomous navigation using visual sparse maps |
CN109089100B (en) * | 2018-08-13 | 2020-10-23 | 西安理工大学 | Method for synthesizing binocular stereo video |
CN109307508A (en) * | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | Panoramic inertial navigation SLAM method based on multiple key frames |
CN109307508B (en) * | 2018-08-29 | 2022-04-08 | 中国科学院合肥物质科学研究院 | Panoramic inertial navigation SLAM method based on multiple key frames |
CN108829116A (en) * | 2018-10-09 | 2018-11-16 | 上海岚豹智能科技有限公司 | Obstacle avoidance method and device based on monocular camera |
CN109079799A (en) * | 2018-10-23 | 2018-12-25 | 哈尔滨工业大学(深圳) | Robot perception control system and control method based on bionics |
CN109079799B (en) * | 2018-10-23 | 2021-11-12 | 哈尔滨工业大学(深圳) | Robot perception control system and control method based on bionics |
CN109376785A (en) * | 2018-10-31 | 2019-02-22 | 东南大学 | Navigation method based on iterative extended Kalman filter fusion of inertia and monocular vision |
CN109376785B (en) * | 2018-10-31 | 2021-09-24 | 东南大学 | Navigation method based on iterative extended Kalman filtering fusion inertia and monocular vision |
CN109671120A (en) * | 2018-11-08 | 2019-04-23 | 南京华捷艾米软件科技有限公司 | Monocular SLAM initialization method and system based on wheel encoder |
CN109631894A (en) * | 2018-12-11 | 2019-04-16 | 智灵飞(北京)科技有限公司 | Monocular visual-inertial tight coupling method based on sliding window |
CN109739079A (en) * | 2018-12-25 | 2019-05-10 | 广东工业大学 | A method of improving VSLAM system accuracy |
CN109739079B (en) * | 2018-12-25 | 2022-05-10 | 九天创新(广东)智能科技有限公司 | Method for improving VSLAM system precision |
CN109727287B (en) * | 2018-12-27 | 2023-08-08 | 江南大学 | Improved registration method and system suitable for augmented reality |
CN109712170A (en) * | 2018-12-27 | 2019-05-03 | 广东省智能制造研究所 | Environmental object tracking method and device, computer equipment and storage medium |
CN109727287A (en) * | 2018-12-27 | 2019-05-07 | 江南大学 | Improved registration method and system suitable for augmented reality |
CN109712170B (en) * | 2018-12-27 | 2021-09-07 | 广东省智能制造研究所 | Environmental object tracking method and device based on visual inertial odometer |
CN109887029A (en) * | 2019-01-17 | 2019-06-14 | 江苏大学 | A kind of monocular vision mileage measurement method based on color of image feature |
CN110009739A (en) * | 2019-01-29 | 2019-07-12 | 浙江省北大信息技术高等研究院 | Motion feature extraction and coding method for digital retina of mobile camera |
CN110006423A (en) * | 2019-04-04 | 2019-07-12 | 北京理工大学 | Adaptive inertial navigation and vision combined navigation method |
CN110006423B (en) * | 2019-04-04 | 2020-11-06 | 北京理工大学 | Self-adaptive inertial navigation and visual combined navigation method |
CN113632135A (en) * | 2019-04-30 | 2021-11-09 | 三星电子株式会社 | System and method for low latency, high performance pose fusion |
CN110095752A (en) * | 2019-05-07 | 2019-08-06 | 百度在线网络技术(北京)有限公司 | Localization method, device, equipment and medium |
CN109900294A (en) * | 2019-05-13 | 2019-06-18 | 奥特酷智能科技(南京)有限公司 | Vision inertia odometer based on hardware accelerator |
CN110147164A (en) * | 2019-05-22 | 2019-08-20 | 京东方科技集团股份有限公司 | Head movement tracking, equipment, system and storage medium |
CN110211239A (en) * | 2019-05-30 | 2019-09-06 | 杭州远传新业科技有限公司 | Augmented reality method, apparatus, device and medium based on markerless recognition |
CN110211239B (en) * | 2019-05-30 | 2022-11-08 | 杭州远传新业科技股份有限公司 | Augmented reality method, apparatus, device and medium based on label-free recognition |
CN110196047A (en) * | 2019-06-20 | 2019-09-03 | 东北大学 | Autonomous robot localization and mapping method based on TOF depth camera and IMU |
CN110361005A (en) * | 2019-06-26 | 2019-10-22 | 深圳前海达闼云端智能科技有限公司 | Positioning method, positioning device, readable storage medium and electronic equipment |
CN110243381B (en) * | 2019-07-11 | 2020-10-30 | 北京理工大学 | Cooperative sensing monitoring method for air-ground robot |
CN110243381A (en) * | 2019-07-11 | 2019-09-17 | 北京理工大学 | Cooperative sensing monitoring method for air-ground robot |
CN110490900B (en) * | 2019-07-12 | 2022-04-19 | 中国科学技术大学 | Binocular vision positioning method and system under dynamic environment |
CN110319772A (en) * | 2019-07-12 | 2019-10-11 | 上海电力大学 | Large-span visual distance measurement method based on unmanned aerial vehicle |
CN110490900A (en) * | 2019-07-12 | 2019-11-22 | 中国科学技术大学 | Binocular visual positioning method and system under dynamic environment |
CN112581514A (en) * | 2019-09-30 | 2021-03-30 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium |
CN111076733A (en) * | 2019-12-10 | 2020-04-28 | 亿嘉和科技股份有限公司 | Robot indoor map building method and system based on vision and laser slam |
CN111307165A (en) * | 2020-03-06 | 2020-06-19 | 新石器慧通(北京)科技有限公司 | Vehicle positioning method and system and unmanned vehicle |
CN113409368A (en) * | 2020-03-16 | 2021-09-17 | 北京京东乾石科技有限公司 | Mapping method and device, computer readable storage medium and electronic equipment |
CN113409368B (en) * | 2020-03-16 | 2023-11-03 | 北京京东乾石科技有限公司 | Mapping method and device, computer readable storage medium and electronic equipment |
CN112639883A (en) * | 2020-03-17 | 2021-04-09 | 华为技术有限公司 | Relative attitude calibration method and related device |
CN112639883B (en) * | 2020-03-17 | 2021-11-19 | 华为技术有限公司 | Relative attitude calibration method and related device |
CN111462179B (en) * | 2020-03-26 | 2023-06-27 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method and device and electronic equipment |
CN111462179A (en) * | 2020-03-26 | 2020-07-28 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method and device and electronic equipment |
CN111652933B (en) * | 2020-05-06 | 2023-08-04 | Oppo广东移动通信有限公司 | Repositioning method and device based on monocular camera, storage medium and electronic equipment |
CN111652933A (en) * | 2020-05-06 | 2020-09-11 | Oppo广东移动通信有限公司 | Monocular camera-based repositioning method and device, storage medium and electronic equipment |
CN111879306A (en) * | 2020-06-17 | 2020-11-03 | 杭州易现先进科技有限公司 | Visual inertial positioning method, device, system and computer equipment |
CN111780764B (en) * | 2020-06-30 | 2022-09-02 | 杭州海康机器人技术有限公司 | Visual positioning method and device based on visual map |
CN111780764A (en) * | 2020-06-30 | 2020-10-16 | 杭州海康机器人技术有限公司 | Visual positioning method and device based on visual map |
CN112393721B (en) * | 2020-09-30 | 2024-04-09 | 苏州大学应用技术学院 | Camera pose estimation method |
CN112393721A (en) * | 2020-09-30 | 2021-02-23 | 苏州大学应用技术学院 | Camera pose estimation method |
CN112489176B (en) * | 2020-11-26 | 2021-09-21 | 香港理工大学深圳研究院 | Tightly-coupled graph building method fusing ESKF, g2o and point cloud matching |
CN112489176A (en) * | 2020-11-26 | 2021-03-12 | 香港理工大学深圳研究院 | Tightly-coupled graph building method fusing ESKF, g2o and point cloud matching |
CN112936269A (en) * | 2021-02-04 | 2021-06-11 | 珠海市一微半导体有限公司 | Robot control method based on intelligent terminal |
CN112907742A (en) * | 2021-02-18 | 2021-06-04 | 湖南国科微电子股份有限公司 | Visual synchronous positioning and mapping method, device, equipment and medium |
CN113091738A (en) * | 2021-04-09 | 2021-07-09 | 安徽工程大学 | Mobile robot map construction method based on visual inertial navigation fusion and related equipment |
CN113298692A (en) * | 2021-05-21 | 2021-08-24 | 北京索为云网科技有限公司 | Terminal pose tracking method, AR rendering method, terminal pose tracking device and storage medium |
CN113298692B (en) * | 2021-05-21 | 2024-04-16 | 北京索为云网科技有限公司 | Augmented reality method for realizing real-time equipment pose calculation based on mobile terminal browser |
CN113447014A (en) * | 2021-08-30 | 2021-09-28 | 深圳市大道智创科技有限公司 | Indoor mobile robot, mapping method, positioning method, and mapping positioning device |
CN114494825A (en) * | 2021-12-31 | 2022-05-13 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device |
CN114494825B (en) * | 2021-12-31 | 2024-04-19 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device |
CN114663822A (en) * | 2022-05-18 | 2022-06-24 | 广州市影擎电子科技有限公司 | Servo motion trajectory generation method and device |
CN115115707B (en) * | 2022-06-30 | 2023-10-10 | 小米汽车科技有限公司 | Vehicle falling water detection method, vehicle, computer readable storage medium and chip |
CN115115707A (en) * | 2022-06-30 | 2022-09-27 | 小米汽车科技有限公司 | Vehicle falling water detection method, vehicle, computer readable storage medium and chip |
CN116645400B (en) * | 2023-07-21 | 2023-12-08 | 江西红声技术有限公司 | Vision and inertia mixed pose tracking method, system, helmet and storage medium |
CN116645400A (en) * | 2023-07-21 | 2023-08-25 | 江西红声技术有限公司 | Vision and inertia mixed pose tracking method, system, helmet and storage medium |
CN117392518A (en) * | 2023-12-13 | 2024-01-12 | 南京耀宇视芯科技有限公司 | Low-power-consumption visual positioning and mapping chip and method thereof |
CN117392518B (en) * | 2023-12-13 | 2024-04-09 | 南京耀宇视芯科技有限公司 | Low-power-consumption visual positioning and mapping chip and method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105953796A (en) | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone | |
US11519729B2 (en) | Vision-aided inertial navigation | |
CN105931275A (en) | Monocular and IMU fused stable motion tracking method and device based on mobile terminal | |
CN109991636A (en) | Map constructing method and system based on GPS, IMU and binocular vision | |
CN109885080B (en) | Autonomous control system and autonomous control method | |
CN110095116A (en) | Localization method based on LIFT combining visual positioning and inertial navigation | |
Helmick et al. | Path following using visual odometry for a mars rover in high-slip environments | |
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection | |
CN103033189B (en) | Inertia/vision integrated navigation method for deep-space detection patrolling device | |
CN109540126A (en) | Inertial-visual combined navigation method based on optical flow | |
CN109141433A (en) | Robot indoor positioning system and positioning method | |
CN106017463A (en) | Aircraft positioning method based on positioning and sensing device | |
CN108051002A (en) | Transport vehicle space-location method and system based on inertia measurement auxiliary vision | |
CN114001733B (en) | Map-based consistent efficient visual inertial positioning algorithm | |
CN106574836A (en) | A method for localizing a robot in a localization plane | |
Huai et al. | Observability analysis and keyframe-based filtering for visual inertial odometry with full self-calibration | |
Williams et al. | Feature and pose constrained visual aided inertial navigation for computationally constrained aerial vehicles | |
Sjanic et al. | EM-SLAM with inertial/visual applications | |
CN106352897A (en) | Silicon MEMS (micro-electromechanical system) gyroscope error estimating and correcting method based on monocular visual sensor | |
CN103017773A (en) | Surrounding road navigation method based on celestial body surface feature and natural satellite road sign | |
Kehoe et al. | Partial aircraft state estimation from optical flow using non-model-based optimization | |
CN114993338B (en) | High-efficiency visual inertial mileage calculation method based on multi-section independent map sequence | |
Ready et al. | Inertially aided visual odometry for miniature air vehicles in gps-denied environments | |
Jamal et al. | Terrain mapping and pose estimation for polar shadowed regions of the moon | |
Tian et al. | A Design of Odometer-Aided Visual Inertial Integrated Navigation Algorithm Based on Multiple View Geometry Constraints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20160921 |