CN108665540A - Robot localization based on binocular vision feature and IMU information and map structuring system - Google Patents
- Publication number
- CN108665540A (application CN201810218153.1A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
A robot localization and mapping system based on binocular vision features and IMU information. The binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction submodule, a binocular feature matching submodule and an IMU information acquisition submodule. The improved IMU initialization and motion module comprises an IMU angular-rate bias estimation submodule, a gravitational acceleration estimation submodule, an IMU acceleration bias estimation submodule and an IMU pre-integration submodule. The visual SLAM initialization and tracking module comprises an inter-frame motion tracking submodule and a keyframe generation submodule. The local mapping module comprises a new-keyframe insertion submodule, a local BA optimization submodule and a redundant-keyframe culling submodule. The loop-closure detection and optimization module comprises a loop-detection submodule and a global optimization submodule. The present invention provides a robot localization and mapping system based on binocular vision features and IMU information that is robust, accurate and adaptable.
Description
Technical field
The invention belongs to the field of robotics, and in particular to a robot localization and mapping system.
Background art
Simultaneous localization and mapping (SLAM) is an important problem in the field of robot navigation. The SLAM problem can be described as follows: a robot builds a global map of the environment it explores, and at the same time uses this map to infer its own position. The robot moves freely through the environment with on-board sensors, localizes itself from the collected information, and builds the map on the basis of this localization, thereby achieving localization and mapping simultaneously. Two main factors affect the solution of the SLAM problem: the characteristics of the sensor data and the correlation of the observation data. If the robustness and accuracy of data association and the utilization of the sensor data can be improved, the precision of robot localization and mapping will improve accordingly.
At present, the most common sensors in mobile devices are cameras and IMUs. How to design algorithms that use these common sensors to achieve high-precision simultaneous localization and mapping is a current research hotspot.
The novelty of the invention is that IMU information is fused with binocular vision information, compensating for the problems present in binocular visual SLAM and improving the robustness of the SLAM algorithm. On this basis, a map with true metric scale is constructed.
Simultaneous localization and mapping (SLAM) is a classical problem in robotics. It can be described as follows: a robot starts from an unknown position in an unknown environment, estimates its own pose during motion and builds a map of the environment, thereby achieving autonomous localization and mapping. Two factors affect a SLAM system: the correlation of the observation data and the environmental noise. The correctness of the environment observations depends on the data association, which in turn affects the construction of the environment map.
Invention content
In order to overcome the poor robustness, low accuracy and poor adaptability of existing robot localization and mapping systems, the present invention provides a robust, accurate and adaptable robot localization and mapping system based on binocular vision features and IMU information.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A robot localization and mapping system based on binocular vision features and IMU information, comprising: a binocular information acquisition, feature extraction and matching module; an improved IMU initialization and motion module; a visual SLAM initialization and tracking module; a local mapping module; and a loop-closure detection and optimization module. The binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction submodule, a binocular feature matching submodule and an IMU information acquisition submodule. The improved IMU initialization and motion module comprises an IMU angular-rate bias estimation submodule, a gravitational acceleration estimation submodule, an IMU acceleration bias estimation submodule and an IMU pre-integration submodule. The visual SLAM initialization and tracking module comprises an inter-frame motion tracking submodule and a keyframe generation submodule. The local mapping module comprises a new-keyframe insertion submodule, a local BA optimization submodule and a redundant-keyframe culling submodule. The loop-closure detection and optimization module comprises a loop-detection submodule and a global optimization submodule.
Further, in the binocular information acquisition, feature extraction and matching module, the binocular ORB feature extraction submodule receives the pictures captured by the binocular camera while the robot moves, extracts ORB features from the left and right images using the ORB feature extraction algorithm, and stores the extraction results. The binocular feature matching submodule matches the feature points extracted from the left and right pictures to obtain the depth of the feature points. The IMU information acquisition submodule collects the IMU data from inside the robot.
Further, in the improved IMU initialization and motion module, the IMU angular-rate bias estimation submodule uses the visual SLAM results and an optimization method to estimate the IMU angular-rate bias. The gravitational acceleration estimation submodule uses the result of the angular-rate bias estimation submodule and the relationships among N consecutive keyframes to form a linear system whose solution gives the gravitational acceleration. The acceleration bias estimation removes the influence of gravity from the IMU accelerations, combines the optimized keyframe states pairwise again, and solves the equations to obtain a re-estimate of gravity together with the IMU acceleration bias. The IMU acceleration bias estimation submodule uses the results of the gravity and acceleration bias estimation, excludes the influence of gravity on the IMU accelerations, and solves the equations over the N consecutive keyframes again to obtain the re-estimated gravity and the IMU acceleration bias. The IMU pre-integration submodule combines all IMU data between two consecutive image frames into a single observation through the IMU pre-integration model, yielding the motion model of the robot and the robot's current pose estimate.
Further, in the visual SLAM initialization and tracking module, the inter-frame motion tracking submodule computes the ORB feature matches between two consecutive images, computes the essential matrix, and then refines the estimate with bundle adjustment. After a successful return, the system starts tracking in a purely visual manner; the keyframe generation submodule monitors the intermediate tracking results, adds the current frame to the keyframe set when it satisfies the keyframe conditions, and uses the current keyframe as the tracking reference frame.
In the local mapping module, the new-keyframe insertion submodule inserts the keyframes generated by the previous submodule into the keyframe queue. The local BA optimization submodule performs bundle adjustment on the keyframes and the map point cloud of the current local view after a keyframe is inserted. The redundant-keyframe culling submodule deletes superfluous keyframes in the window according to a screening strategy, keeping the map size bounded.
In the loop-closure detection and optimization module, the loop-detection submodule queries the current frame against the keyframe set in the database to detect loops. After a successful return, the poses of all keyframes in the loop are estimated, and the global optimization submodule performs global graph optimization over all keyframes in the loop; the error terms include the visual projection factor and the IMU factor.
The technical concept of the present invention is as follows. The robot SLAM system proposed here is a solution that fuses binocular vision and inertial navigation information. The invention first uses purely visual pose information to estimate the IMU bias model and the direction of gravitational acceleration. For the visual pose, an efficient feature extraction algorithm extracts rich descriptors from the images; the extracted feature points are then matched to obtain their depth. A camera motion model is built from a pre-integration-based IMU kinematic model, and the camera position is predicted in real time with this motion model. On the basis of this prediction, the pose between two consecutive image frames is estimated and, using visual geometry, the current robot position is estimated more accurately. A keyframe selection strategy realizes a sparse reconstruction of the local map points. On top of the IMU-fused visual information matching, a graph-based back-end optimization algorithm uses the spatial map-point projection factor and the IMU factor as factors in a factor graph to estimate the map positions accurately and in real time. The invention realizes a simultaneous localization and mapping algorithm for robots based on binocular vision and inertial navigation information, and can accurately estimate the robot motion and the surrounding environment.
The beneficial effects of the present invention are mainly: the fusion of visual information and IMU information solves problems in visual SLAM well and improves the robustness of the SLAM system; the accuracy is high and the adaptability is strong.
Description of the drawings
Fig. 1 is a schematic diagram of the system structure of the present invention.
Fig. 2 is a schematic flow diagram of the system of the present invention.
Specific implementation mode
The invention is further described below with reference to the accompanying drawings.
Referring to Figures 1 and 2, a robot localization and mapping system based on binocular vision features and IMU information comprises: a binocular information acquisition, feature extraction and matching module (1); an improved IMU initialization and motion module (2); a visual SLAM initialization and tracking module (3); a local mapping module (4); and a loop-closure detection and optimization module (5). The binocular information acquisition, feature extraction and matching module (1) comprises: a binocular ORB feature extraction submodule (1.1), a binocular feature matching submodule (1.2) and an IMU information acquisition submodule (1.3). The improved IMU initialization and motion module (2) comprises: an IMU angular-rate bias estimation submodule (2.1), a gravitational acceleration estimation submodule (2.2), an IMU acceleration bias estimation submodule (2.3) and an IMU pre-integration submodule (2.4). The visual SLAM initialization and tracking module (3) comprises: an inter-frame motion tracking submodule (3.1) and a keyframe generation submodule (3.2). The local mapping module (4) comprises: a new-keyframe insertion submodule (4.1), a local BA optimization submodule (4.2) and a redundant-keyframe culling submodule (4.3). The loop-closure detection and optimization module (5) comprises: a loop-detection submodule (5.1) and a global optimization submodule (5.2).
In the binocular information acquisition, feature extraction and matching module (1), the binocular ORB feature extraction submodule (1.1) receives the pictures captured by the binocular camera while the robot moves, extracts ORB features from the left and right images using the ORB feature extraction algorithm, and stores the extraction results. The binocular feature matching submodule (1.2) matches the feature points extracted from the left and right pictures to obtain their depth. The IMU information acquisition submodule (1.3) collects the IMU data from inside the robot.
In the improved IMU initialization and motion module (2), the IMU angular-rate bias estimation submodule (2.1) uses the visual SLAM results and an optimization method to estimate the IMU angular-rate bias. The gravitational acceleration estimation submodule (2.2) uses the result of submodule (2.1) to build a linear system from the relationships among N consecutive keyframes, whose solution gives the gravitational acceleration; the acceleration bias estimation removes the influence of gravity from the IMU accelerations, combines the optimized keyframe states pairwise again, and solves the equations to obtain a re-estimate of gravity together with the IMU acceleration bias. The IMU acceleration bias estimation submodule (2.3) uses the result of submodule (2.2), excludes the influence of gravity on the IMU accelerations, and solves the equations over the N consecutive keyframes again to obtain the re-estimated gravity and the IMU acceleration bias. The IMU pre-integration submodule (2.4) combines all IMU data between two consecutive image frames into a single observation through the IMU pre-integration model, yielding the motion model of the robot and the robot's current pose estimate.
In the visual SLAM initialization and tracking module (3), the inter-frame motion tracking submodule (3.1) computes the ORB feature matches between two consecutive images, computes the essential matrix, and then refines the estimate with bundle adjustment. After submodule (3.1) returns successfully, the system starts tracking in a purely visual manner; the keyframe generation submodule (3.2) monitors the intermediate tracking results, adds the current frame to the keyframe set when it satisfies the keyframe conditions, and uses the current keyframe as the tracking reference frame.
In the local mapping module (4), the new-keyframe insertion submodule (4.1) inserts the keyframes generated by the previous submodule into the keyframe queue. The local BA optimization submodule (4.2) performs bundle adjustment on the keyframes and the map point cloud of the current local view after a keyframe is inserted. The redundant-keyframe culling submodule (4.3) deletes superfluous keyframes in the window according to a screening strategy, keeping the map size bounded.
In the loop-closure detection and optimization module (5), the loop-detection submodule (5.1) queries the current frame against the keyframe set in the database to detect loops. After a successful return, the poses of all keyframes in the loop are estimated, and the global optimization submodule (5.2) performs global graph optimization over all keyframes in the loop; the error terms include the visual projection factor and the IMU factor.
The specific operation of each module is described in detail below.
In the binocular information acquisition, feature extraction and matching module (1): the binocular ORB feature extraction submodule (1.1) extracts ORB features from each pair of left and right images collected by the robot. ORB features are chosen as the image features because they are fairly invariant to viewpoint and illumination changes and, as a binary visual feature, are fast to extract and fast to match, which makes them well suited to SLAM systems with real-time requirements. After the ORB features are extracted, the results are stored.
The binocular feature matching submodule (1.2) only needs to search for matches along the same image row, since the left and right images are rectified so that epipolar lines are horizontal. After feature points are extracted from a collected image pair, they are matched using their descriptors, and the absolute scale of each feature point is computed by stereo triangulation:

z = f · B / d

In the formula above, f is the focal length of the camera, B is the baseline, i.e. the distance between the two camera optical centers, d is the disparity, i.e. the difference in position of the same feature point in the left and right pictures, and z is the depth of the feature point.
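The triangulation above can be sketched in a few lines. The focal length, baseline and pixel coordinates below are illustrative values, not parameters from the patent:

```python
def stereo_depth(f: float, baseline: float, x_left: float, x_right: float) -> float:
    """Depth from rectified stereo: z = f * B / d, with disparity d = x_left - x_right."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return f * baseline / d

# Example with made-up calibration: f = 700 px, B = 0.12 m, disparity = 14 px.
z = stereo_depth(700.0, 0.12, 412.0, 398.0)
print(z)  # 700 * 0.12 / 14 = 6.0 m
```

Note that depth resolution degrades as disparity shrinks, which is why distant points are less reliable in stereo SLAM.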
The IMU information acquisition submodule (1.3) collects the IMU data of the robot in real time and passes it to the IMU pre-integration submodule (2.4) for processing. This is because the rate of the IMU data collected by the IMU information acquisition submodule is far higher than the image acquisition rate, and pre-integrating the IMU data reduces the computational load on the factor-graph-based loop-closure detection and optimization module (5).
In the improved IMU initialization and motion module (2), the IMU angular-rate bias estimation submodule (2.1) uses the binocular visual SLAM results, adds the IMU angular-rate bias as a variable to the robot state vector, and estimates the bias. The optimization is:

b_g* = argmin_{b_g} Σ_{i=1}^{N−1} ‖ Log( (ΔR_{i,i+1} · Exp(J^g_ΔR · b_g))ᵀ · (R_WB^i)ᵀ · R_WB^{i+1} ) ‖²

where N is the number of current keyframes; R_WB^i is obtained by multiplying the camera-center (C) rotation computed by the purely visual SLAM algorithm with the calibration matrix R_CB between the camera and the IMU center (denoted B); ΔR_{i,i+1} is the rotation between keyframes i and i+1 obtained from the IMU pre-integration submodule (2.4); and J^g_ΔR is the first-order approximation of ΔR_{i,i+1} with respect to the angular-rate bias error, also obtained from submodule (2.4). Solving this optimization yields the IMU angular-rate bias b_g.
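The bias fit above can be illustrated with a deliberately simplified one-axis version: the full system linearizes rotation residuals on SO(3), but for a single axis and small angles the same least-squares problem has a closed form. All names and data below are illustrative:

```python
def estimate_gyro_bias_1d(theta_vis, theta_imu, dts):
    """One-axis toy version of the gyro-bias initialization: find the constant
    bias b minimizing sum_i (theta_imu[i] - b*dt[i] - theta_vis[i])**2.
    Closed-form least squares: b = sum(dt*(theta_imu - theta_vis)) / sum(dt**2)."""
    num = sum(dt * (ti - tv) for tv, ti, dt in zip(theta_vis, theta_imu, dts))
    den = sum(dt * dt for dt in dts)
    return num / den

# Synthetic data: true bias 0.02 rad/s, keyframes 0.5 s apart.
dts = [0.5, 0.5, 0.5]
theta_vis = [0.10, 0.12, 0.08]                                 # "visual" rotations (exact)
theta_imu = [t + 0.02 * dt for t, dt in zip(theta_vis, dts)]   # gyro also integrates the bias
b = estimate_gyro_bias_1d(theta_vis, theta_imu, dts)
print(round(b, 6))  # recovers 0.02
```

The real submodule solves the same kind of problem jointly over all three gyro axes using the pre-integration Jacobian J^g_ΔR.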
The gravitational acceleration estimation submodule (2.2) uses the result of the IMU angular-rate bias estimation submodule (2.1). For each pair of adjacent keyframes, the relationship between them and the pre-integrated measurement Δv yield an equation that is linear in the unknowns. In these equations, R_WC is the rotation of the camera center C relative to the world frame W, R_WB is the rotation of the IMU center B relative to the world frame, p_CB is the position of the IMU center B in camera coordinates, Δt is the time difference between frames, Δv is the inter-frame velocity change, and s is the absolute scale of the feature points. Solving the stacked equations gives the gravitational acceleration g_W.
The IMU acceleration bias estimation submodule (2.3) uses the result of the gravitational acceleration estimation submodule (2.2) and the linear relationship among every three consecutive keyframes. Denoting three consecutive keyframes i, i+1, i+2 by 1, 2, 3, each term of the equation is computed as described below. Here [·](:,1:2) takes the first two columns of a matrix, R_WC is the rotation of the camera center C relative to the world frame W, R_WB is the rotation of the IMU center B relative to the world frame, R_WI is the rotation of the current frame relative to the world frame, p_B is the position of the IMU center B in world coordinates, I is the identity matrix, Δt is the time difference between frames, Δv is the inter-frame velocity change, δθ_xy is the gravity-direction correction to be estimated, and b_a is the acceleration bias. Stacking the equations of all triples of consecutive keyframes gives one linear system, denoted:

A_{3(N−2)×5} · X_{5×1} = B_{3(N−2)×1}

Performing singular value decomposition on this linear system yields the gravity-direction correction δθ_xy and the IMU acceleration bias b_a.
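The solve step of the stacked system can be sketched as below. The matrix A here is synthetic, purely to demonstrate the SVD-based least-squares solution; the real A and B are assembled from the pre-integrated Δv/Δp terms described above:

```python
import numpy as np

# The stacked initialization system has 5 unknowns:
# x = [dtheta_x, dtheta_y, ba_x, ba_y, ba_z]
# (gravity-direction correction + accelerometer bias).
rng = np.random.default_rng(0)
N = 12                                   # keyframes -> 3*(N-2) stacked rows
A = rng.standard_normal((3 * (N - 2), 5))
x_true = np.array([0.01, -0.02, 0.05, -0.03, 0.08])
B = A @ x_true                           # noiseless right-hand side

# Least-squares solution; np.linalg.lstsq uses an SVD internally.
x_hat, *_ = np.linalg.lstsq(A, B, rcond=None)
print(np.allclose(x_hat, x_true))  # True: exact recovery on noiseless data
```

With real, noisy measurements the residual of this solve is a useful check on whether the initialization has converged.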
The IMU pre-integration submodule combines all IMU data between two consecutive image frames into a single observation through the pre-integration model of the IMU (labelled B), yielding the motion model and the current position estimate of the robot. Denoting the acceleration and angular rate output by the IMU as a_B and ω_B, the equation of motion of the robot between two IMU outputs i and i+1 follows by integrating these measurements over the sampling interval. Denoting the pre-integration results between two keyframes k and k+1 as ΔR, Δv, Δp, the motion model between the two keyframes is:

R_WB^{k+1} = R_WB^k · ΔR
v_B^{k+1} = v_B^k + g_W · Δt + R_WB^k · Δv
p_B^{k+1} = p_B^k + v_B^k · Δt + ½ g_W Δt² + R_WB^k · Δp

In these formulas R_WB is the rotation of the IMU center B relative to the world frame, v_B is the velocity of the IMU, p_B is the position of the IMU center B in world coordinates, and b_g, b_a are the angular-rate and acceleration biases. The Jacobians J^g and J^a are the first-order approximations of the pre-integrated observations with respect to the bias errors.
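A minimal numerical sketch of this pre-integration and propagation, assuming simple Euler integration and constant biases (the function names are illustrative, not from the patent):

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation matrix for a rotation vector w."""
    th = np.linalg.norm(w)
    if th < 1e-10:
        return np.eye(3)
    k = w / th
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def preintegrate(omegas, accs, dt, bg, ba):
    """Fold all IMU samples between two frames into one observation (dR, dv, dp)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(omegas, accs):
        a_corr = a - ba
        dp = dp + dv * dt + 0.5 * (dR @ a_corr) * dt**2
        dv = dv + (dR @ a_corr) * dt
        dR = dR @ so3_exp((w - bg) * dt)
    return dR, dv, dp

def propagate(R, v, p, dR, dv, dp, dT, g=np.array([0.0, 0.0, -9.81])):
    """Motion model between keyframes: apply one pre-integrated observation."""
    return R @ dR, v + g * dT + R @ dv, p + v * dT + 0.5 * g * dT**2 + R @ dp

# Stationary check: accelerometer measures specific force (+g), gyro reads zero,
# so after propagation the velocity and position stay (numerically) at zero.
n, dt = 100, 0.005
dR, dv, dp = preintegrate([np.zeros(3)] * n,
                          [np.array([0.0, 0.0, 9.81])] * n, dt,
                          np.zeros(3), np.zeros(3))
R2, v2, p2 = propagate(np.eye(3), np.zeros(3), np.zeros(3), dR, dv, dp, n * dt)
print(np.allclose(v2, 0.0), np.allclose(p2, 0.0, atol=1e-6))  # True True
```

Production systems use midpoint or RK4 integration and also propagate the covariance and bias Jacobians alongside (ΔR, Δv, Δp); this sketch keeps only the mean.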
In the visual SLAM initialization and tracking module (3), the inter-frame motion tracking submodule (3.1) extracts features with the ORB feature extraction submodule (1.1), matches the features of two consecutive initialization images Fc and Fr, and solves the relative pose between them with the PnP algorithm. Thereafter, SLAM tracking continuously estimates the robot position. Bundle adjustment is used to refine the estimate because it corrects the pose matrix so that it better agrees with the ORB feature matches.
The keyframe generation submodule (3.2) decides whether the current frame is selected as a keyframe according to the following rules:
1) at least 20 frames have passed since the last relocalization;
2) the local mapping thread is idle, or at least 20 frames have passed since the last keyframe was inserted;
3) the current frame contains at least 50 feature points.
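The three rules above amount to a simple predicate; the counters passed in here are hypothetical tracking-state values:

```python
def is_keyframe(frames_since_reloc: int,
                frames_since_last_kf: int,
                local_mapping_idle: bool,
                num_features: int) -> bool:
    """Keyframe selection rules from the text:
    1) >= 20 frames since the last relocalization;
    2) local mapping idle, or >= 20 frames since the last keyframe;
    3) the current frame tracks at least 50 feature points."""
    return (frames_since_reloc >= 20
            and (local_mapping_idle or frames_since_last_kf >= 20)
            and num_features >= 50)

print(is_keyframe(25, 21, False, 80))  # True: all three rules satisfied
print(is_keyframe(25, 5, False, 80))   # False: mapping busy and last keyframe too recent
```

Rule 2 lets the system insert keyframes aggressively whenever the mapping thread has spare capacity, which helps tracking survive fast motion.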
In the local mapping module (4), the new-keyframe insertion submodule (4.1) inserts the keyframe generated by submodule (3.2) into the queue and updates the covisibility information: the related views, the edges generated with keyframes that share map points, and the links between the new node and the other keyframes. Then, unqualified cloud points in the local view are rejected and new cloud points are generated.
The local BA optimization submodule (4.2) performs local bundle adjustment over the current keyframe, all keyframes connected to it in the covisibility set, and all map points observed by these keyframes. All other keyframes that observe these cloud points, but are not connected to the current keyframe, contribute to the cost but are not optimized. The projection error is:

E_proj(i) = ρ( (x − π(X_C))ᵀ Σ_I (x − π(X_C)) )

where π(X_C) is the projection function and x is the projected coordinate of the spatial point on the current frame. The conversion from the world-coordinate value X_W to the camera-coordinate value X_C is:

X_C = R_CB · R_BW · (X_W − p_WB) + p_CB

In this formula R_CB is the rotation of the IMU center B relative to the current camera center C, R_BW is the rotation of the world frame W relative to the IMU center B, p_CB is the translation of the IMU center B relative to the current camera center C, and p_WB is the translation of the IMU center B relative to the world frame W.
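The projection error can be sketched as below. The intrinsics, extrinsics and the Huber threshold are made-up values, only to exercise the formula; the frame conventions follow the text:

```python
import numpy as np

def huber(r2: float, delta: float = 5.991) -> float:
    """Huber robust loss rho applied to a squared error r2 (threshold illustrative)."""
    return r2 if r2 <= delta**2 else 2 * delta * np.sqrt(r2) - delta**2

def world_to_camera(Xw, R_BW, p_WB, R_CB, p_CB):
    """X_C = R_CB * R_BW * (X_W - p_WB) + p_CB, as in the text."""
    return R_CB @ (R_BW @ (Xw - p_WB)) + p_CB

def project(Xc, fx, fy, cx, cy):
    """Pinhole projection pi(X_C)."""
    return np.array([fx * Xc[0] / Xc[2] + cx, fy * Xc[1] / Xc[2] + cy])

def eproj(x_obs, Xw, R_BW, p_WB, R_CB, p_CB, K, info):
    """E_proj = rho((x - pi(X_C))^T * Sigma * (x - pi(X_C)))."""
    r = x_obs - project(world_to_camera(Xw, R_BW, p_WB, R_CB, p_CB), *K)
    return huber(float(r @ info @ r))

# Identity extrinsics, point 5 m ahead, observation 1 px off: squared error 1.0.
e = eproj(np.array([321.0, 240.0]), np.array([0.0, 0.0, 5.0]),
          np.eye(3), np.zeros(3), np.eye(3), np.zeros(3),
          (700.0, 700.0, 320.0, 240.0), np.eye(2))
print(e)  # 1.0 (below the Huber threshold, so the loss is the raw squared error)
```

The robust kernel is what keeps a few bad feature matches from dragging the whole bundle adjustment off target.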
The IMU factor can be expressed as the sum of the error terms e_R, e_p, e_v, e_b, which denote respectively the rotation, translation, velocity and IMU-bias error terms. In these formulas, J^g_ΔR is the first-order approximation of the inter-frame rotation ΔR with respect to the angular-rate bias error, and J^g_Δv and J^a_Δv are the first-order approximations of the inter-frame velocity change Δv with respect to the angular-rate and acceleration bias errors. R_BW is the rotation of the world frame W relative to the IMU center B, and R_WB is the rotation of the IMU frame B to the world frame W. b_g and b_a are the angular-rate and acceleration biases. All error terms are treated as Gaussian; Σ_I is the information matrix of the pre-integration, Σ_R is the information matrix of the bias random walk, and ρ denotes the Huber robust loss function.
The redundant-keyframe culling submodule (4.3): because the IMU is reliable only over short time spans, if two consecutive keyframes in the local BA window differ by less than 0.5 seconds, the tracking thread is allowed to discard the extra keyframe. However, to be able to perform global optimization after loop closure, or to refine the map at any time, no two consecutive keyframes are allowed to differ by more than 3 seconds. If full BA with IMU constraints were disabled, only the time offset between keyframes in the local window would need to be limited.
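The 0.5 s / 3 s rule can be sketched as a toy scheduling function over keyframe timestamps; the real submodule works on keyframe objects in the local BA window rather than bare timestamps:

```python
def cull_redundant(timestamps, min_gap=0.5, max_gap=3.0):
    """Keep the first keyframe; drop any keyframe closer than min_gap seconds to
    the last kept one, unless dropping it would leave a gap larger than max_gap
    to the following keyframe (which would break the IMU constraint chain)."""
    kept = [timestamps[0]]
    for i, ts in enumerate(timestamps[1:], start=1):
        nxt = timestamps[i + 1] if i + 1 < len(timestamps) else None
        if ts - kept[-1] < min_gap and nxt is not None and nxt - kept[-1] <= max_gap:
            continue  # redundant: close in time, and the next frame still covers the span
        kept.append(ts)
    return kept

print(cull_redundant([0.0, 0.3, 0.6, 0.7, 1.2]))  # [0.0, 0.6, 1.2]
```

Bounding the inter-keyframe gap keeps the pre-integrated IMU terms short enough to stay accurate, while the lower bound keeps the map from growing without limit.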
In the loop-closure detection and optimization module (5), the loop-detection submodule (5.1) computes the bag-of-words vector of the current keyframe, computes its similarity to the adjacent images related by covisibility, and retains the minimum score. Then the image recognition database is searched, the keyframes whose score is below this minimum are discarded, and the current frame is queried against the keyframe set in the database.
The global optimization submodule (5.2) estimates all keyframes in the loop, considering only the visual projection factor at this stage; the projection-factor formula is as given in the local BA optimization submodule (4.2).
The following describes in detail the implementation of this embodiment on a robot equipped with a binocular camera and an IMU.
First, in the binocular and IMU information acquisition and depth information module (1), feature extraction and matching are performed on the binocular visual information and the IMU data is acquired; these two parts pass the acquired sensor information to the subsequent modules. Because the robot moves through space, both parts continuously acquire and process the newly arriving sensor data. Images are acquired with the binocular camera and passed to the feature extraction algorithm; the positions and descriptors of the extracted image features are saved, and the feature points extracted from the left and right pictures are matched to compute and save their depth. At the same time, the IMU signals are acquired and passed to the IMU pre-integration algorithm for processing.
After the binocular visual SLAM has run for a period of time, the improved IMU initialization and motion module (2) starts running. First, the IMU angular velocity bias is estimated, yielding the IMU angular velocity bias estimate. Next, the gravitational acceleration pre-estimation performs a preliminary estimate of gravity; using the acceleration bias estimate and the scale, the gravitational acceleration is obtained, which in turn allows a more accurate estimate of the IMU acceleration bias. The IMU acceleration bias estimate and the gravity re-estimate are then incorporated into the IMU velocity bias estimation, while gravity is estimated once more. Using the IMU data obtained by the binocular and IMU information acquisition and depth information acquisition module (1), a motion model based on IMU pre-integration is established.
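The pre-integration of IMU samples between two image frames into a single observation (delta rotation, velocity and position) can be sketched as follows. This is a minimal version of the standard pre-integration model, assuming the gyroscope bias `bg` and accelerometer bias `ba` have already been estimated as described above; the sample data is synthetic.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + skew(w)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def preintegrate(gyro, accel, dt, bg, ba):
    """Fold all IMU samples between two frames into one observation
    (dR, dv, dp), with bias-corrected measurements."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_c = a - ba
        dp += dv * dt + 0.5 * dR @ a_c * dt ** 2
        dv += dR @ a_c * dt
        dR = dR @ exp_so3((w - bg) * dt)
    return dR, dv, dp
```

The resulting (dR, dv, dp) triple is exactly the "single observation" between two image frames that the motion model consumes.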
Using the extracted feature point information, the visual SLAM algorithm initialization and tracking module (3) initializes the visual SLAM part. The tracking module, the keyframe generation module and the map points are initialized in turn. After initialization, the system saves the map points and the initial location information obtained. Once the visual initialization is done, the system executes the tracking algorithm part to estimate the inter-frame pose, and a keyframe selection strategy is applied to generate keyframes. Using the IMU kinematic model and the binocular visual pose information, the velocity of each keyframe is estimated, which completes the initialization of the algorithm fusing IMU information. This part makes the binocular visual SLAM result more robust and reduces tracking failures.
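The keyframe selection conditions are not spelled out above; purely as an assumed illustration, a typical rule inserts a keyframe when tracking quality drops or too many frames have passed since the last keyframe. Both thresholds below are hypothetical.

```python
def is_keyframe(n_tracked, frames_since_kf, min_tracked=50, max_gap=20):
    """Hypothetical selection rule: insert a keyframe when few map
    features are still tracked, or when the gap since the last
    keyframe grows too large."""
    return n_tracked < min_tracked or frames_since_kf >= max_gap
```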
The tracking algorithm solves the data association problem, after which the local map is built. Based on the keyframe generated by the previous module, local BA optimization is performed within the local view over the map point cloud seen by the current keyframe and its related views; the optimization jointly optimizes the IMU and visual poses. After optimization, keyframes at fixed intervals in the queue are deleted, to avoid growing the size of the map within a region.
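A much-simplified sketch of the local BA step, assuming SciPy is available: only a single keyframe pose is refined here against fixed map points by minimizing reprojection error, whereas the module above additionally optimizes the map points and the IMU terms jointly. All intrinsics and data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(rvec)
    if th < 1e-12:
        return np.eye(3)
    k = rvec / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def project(pose, pts3d, fx, fy, cx, cy):
    """Project world points through pose (rvec | t) and pinhole intrinsics."""
    R, t = rodrigues(pose[:3]), pose[3:]
    pc = pts3d @ R.T + t
    return np.column_stack([fx * pc[:, 0] / pc[:, 2] + cx,
                            fy * pc[:, 1] / pc[:, 2] + cy])

def residuals(pose, pts3d, obs, fx, fy, cx, cy):
    """Stacked reprojection errors for one keyframe."""
    return (project(pose, pts3d, fx, fy, cx, cy) - obs).ravel()

# Synthetic local map points and a ground-truth keyframe pose.
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (30, 3))
true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.05, 0.2])
fx = fy = 500.0
cx = cy = 320.0
obs = project(true_pose, pts3d, fx, fy, cx, cy)

# Refine the keyframe pose by minimizing reprojection error.
sol = least_squares(residuals, np.zeros(6), args=(pts3d, obs, fx, fy, cx, cy))
```

In the full system the same least-squares machinery would carry extra residual blocks for the IMU pre-integration factors and free map-point coordinates.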
The loop-closure detection and optimization module (5), based on a graph optimization algorithm, performs global optimization on the constructed map, obtaining more accurate estimates of the robot position and the map points. This module runs in two parts. First, the loop-closure detection submodule (5.1) computes the bag-of-words vector of the current keyframe, computes its similarity with the adjacent images sharing its view, and retains the lowest score. Then the image recognition database is searched: keyframes whose score is below this minimum are discarded, and the current frame is queried against the keyframe set in the database; when a loop closure is detected, the poses of all keyframes in the loop are estimated. The global pose optimization submodule (5.2) runs each time a keyframe is inserted: it further estimates the positions of the keyframes and map points, and further optimizes the poses of all current keyframes. This step uses the loop-closure detection result and can thereby eliminate the influence of accumulated error.
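The loop-closure scoring step above can be sketched as follows: bag-of-words vectors are compared, the lowest similarity among the covisible neighbours sets the acceptance threshold, and database keyframes scoring at least that threshold become loop candidates. The L1-based similarity and the flat dictionary database are illustrative simplifications, not the claimed implementation.

```python
import numpy as np

def bow_similarity(v1, v2):
    """L1-based similarity between two bag-of-words vectors, in [0, 1]."""
    v1 = v1 / np.abs(v1).sum()
    v2 = v2 / np.abs(v2).sum()
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

def loop_candidates(current, neighbours, database):
    """Keep database keyframes scoring no worse than the lowest score
    among the current keyframe's covisible neighbours."""
    min_score = min(bow_similarity(current, n) for n in neighbours)
    return [key for key, vec in database.items()
            if bow_similarity(current, vec) >= min_score]
```

Candidates returned here would still be verified geometrically before the loop poses are estimated and the global graph optimization is run.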
Claims (6)
1. A robot localization and map construction system based on binocular visual features and IMU information, characterized in that the system comprises: a binocular information acquisition, feature extraction and matching module; an improved IMU initialization and motion module; a visual SLAM algorithm initialization and tracking module; a local map building module; and a loop-closure detection and optimization module; wherein the binocular information acquisition, feature extraction and matching module comprises a binocular ORB feature extraction submodule, a binocular feature matching submodule and an IMU information acquisition submodule; the improved IMU initialization and motion module comprises an IMU angular velocity bias estimation submodule, a gravitational acceleration estimation submodule, an IMU acceleration bias estimation submodule and an IMU pre-integration submodule; the visual SLAM algorithm initialization and tracking module comprises an inter-frame motion tracking submodule and a keyframe generation submodule; the local map building module comprises a new keyframe insertion submodule, a local BA optimization submodule and a redundant keyframe culling submodule; and the loop-closure detection and optimization module comprises a loop-closure detection submodule and a global optimization submodule.
2. The robot localization and map construction system based on binocular visual features and IMU information of claim 1, characterized in that, in the binocular information acquisition, feature extraction and matching module, the binocular ORB feature extraction submodule receives the pictures shot by the binocular camera during the robot's movement, extracts the ORB features in the left and right images using the ORB feature extraction algorithm, and stores the extraction results; the binocular feature matching submodule matches the feature points extracted from the left and right pictures to obtain the depth information of the feature points; and the IMU information acquisition submodule collects the robot's internal IMU information.
3. The robot localization and map construction system based on binocular visual features and IMU information of claim 1 or 2, characterized in that, in the improved IMU initialization and motion module, the IMU angular velocity bias estimation submodule uses the visual SLAM result and an optimization method to estimate the IMU angular velocity bias; the gravitational acceleration and IMU acceleration bias estimation submodule uses the result of the IMU angular velocity bias estimation submodule and the relationships among N consecutive keyframes to obtain linear equations, the solution of which yields the gravitational acceleration; for the acceleration bias estimation, the influence of the gravitational acceleration on the IMU acceleration is excluded, the optimized keyframes of the current frame state are again combined pairwise, and the equations are solved to obtain a re-estimate of the gravitational acceleration together with the IMU acceleration bias; the IMU acceleration bias estimation submodule uses the results of the gravitational acceleration and IMU acceleration bias estimation submodules to exclude the influence of the gravitational acceleration on the IMU acceleration, and again solves the equations over N consecutive keyframes to obtain a re-estimate of the gravitational acceleration together with the IMU acceleration bias; and the IMU pre-integration submodule combines all the IMU data between two consecutive image frames into a single observation through the IMU pre-integration model, obtaining the robot's motion model and the current pose estimate of the robot.
4. The robot localization and map construction system based on binocular visual features and IMU information of claim 1 or 2, characterized in that, in the visual SLAM algorithm initialization and tracking module, the inter-frame motion tracking submodule computes the ORB feature matches between two consecutive images and computes the essential matrix, and bundle adjustment is then used to further refine the estimate; after a successful return, the system starts tracking in a purely visual manner, and the keyframe generation submodule monitors the intermediate tracking results: when the current frame satisfies the keyframe conditions, it is added to the keyframe set and taken as the reference frame for tracking.
5. The robot localization and map construction system based on binocular visual features and IMU information of claim 1 or 2, characterized in that, in the local map building module, the new keyframe insertion submodule inserts the keyframe generated by the previous submodule into the keyframe queue; the local BA optimization submodule, after the keyframe has been inserted, performs bundle adjustment on the keyframes and map point cloud of the current local view; and the redundant keyframe culling submodule, according to the inserted keyframes, uses a screening strategy to delete superfluous keyframes within the window, bounding the size of the map.
6. The robot localization and map construction system based on binocular visual features and IMU information of claim 1 or 2, characterized in that, in the loop-closure detection and optimization module, the loop-closure detection submodule queries the keyframe set in the database with the current frame to detect loop closures; after a successful return, the poses of all keyframes in the loop are estimated, and the global optimization submodule performs global graph optimization over all keyframes in the loop, the error factors comprising the visual projection factor and the IMU factors.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810218153.1A CN108665540A (en) | 2018-03-16 | 2018-03-16 | Robot localization based on binocular vision feature and IMU information and map structuring system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108665540A true CN108665540A (en) | 2018-10-16 |
Family
ID=63785213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810218153.1A Pending CN108665540A (en) | 2018-03-16 | 2018-03-16 | Robot localization based on binocular vision feature and IMU information and map structuring system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108665540A (en) |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109461208A (en) * | 2018-11-15 | 2019-03-12 | 网易(杭州)网络有限公司 | Three-dimensional map processing method, device, medium and calculating equipment |
CN109579840A (en) * | 2018-10-25 | 2019-04-05 | 中国科学院上海微系统与信息技术研究所 | A kind of close coupling binocular vision inertia SLAM method of dotted line Fusion Features |
CN109658507A (en) * | 2018-11-27 | 2019-04-19 | 联想(北京)有限公司 | Information processing method and device, electronic equipment |
CN109712170A (en) * | 2018-12-27 | 2019-05-03 | 广东省智能制造研究所 | Environmental objects method for tracing, device, computer equipment and storage medium |
CN109767470A (en) * | 2019-01-07 | 2019-05-17 | 浙江商汤科技开发有限公司 | A kind of tracking system initial method and terminal device |
CN109781092A (en) * | 2019-01-19 | 2019-05-21 | 北京化工大学 | Localization for Mobile Robot and drawing method is built in a kind of danger chemical accident |
CN109871803A (en) * | 2019-02-18 | 2019-06-11 | 清华大学 | Robot winding detection method and device |
CN109993113A (en) * | 2019-03-29 | 2019-07-09 | 东北大学 | A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information |
CN110084853A (en) * | 2019-04-22 | 2019-08-02 | 北京易达图灵科技有限公司 | A kind of vision positioning method and system |
CN110111389A (en) * | 2019-05-14 | 2019-08-09 | 南京信息工程大学 | A kind of mobile augmented reality Tracing Registration method and system based on SLAM |
CN110118554A (en) * | 2019-05-16 | 2019-08-13 | 深圳前海达闼云端智能科技有限公司 | SLAM method, apparatus, storage medium and device based on visual inertia |
CN110125928A (en) * | 2019-03-27 | 2019-08-16 | 浙江工业大学 | A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames |
CN110458863A (en) * | 2019-06-25 | 2019-11-15 | 广东工业大学 | A kind of dynamic SLAM system merged based on RGBD with encoder |
CN110471422A (en) * | 2019-08-29 | 2019-11-19 | 南京理工大学 | The detection of obstacles and automatic obstacle avoiding method of intelligent wheel chair |
CN110487274A (en) * | 2019-07-30 | 2019-11-22 | 中国科学院空间应用工程与技术中心 | SLAM method, system, navigation vehicle and storage medium for weak texture scene |
CN110514199A (en) * | 2019-08-28 | 2019-11-29 | 爱笔(北京)智能科技有限公司 | A kind of winding detection method and device of SLAM system |
CN110648370A (en) * | 2019-09-29 | 2020-01-03 | 百度在线网络技术(北京)有限公司 | Calibration data screening method and device and electronic equipment |
CN110986968A (en) * | 2019-10-12 | 2020-04-10 | 清华大学 | Method and device for real-time global optimization and error loop judgment in three-dimensional reconstruction |
CN111089579A (en) * | 2018-10-22 | 2020-05-01 | 北京地平线机器人技术研发有限公司 | Heterogeneous binocular SLAM method and device and electronic equipment |
CN111145251A (en) * | 2018-11-02 | 2020-05-12 | 深圳市优必选科技有限公司 | Robot, synchronous positioning and mapping method thereof and computer storage device |
CN111220155A (en) * | 2020-03-04 | 2020-06-02 | 广东博智林机器人有限公司 | Method, device and processor for estimating pose based on binocular vision inertial odometer |
WO2020108285A1 (en) * | 2018-11-30 | 2020-06-04 | 华为技术有限公司 | Map building method, apparatus and system, and storage medium |
CN111258313A (en) * | 2020-01-20 | 2020-06-09 | 深圳市普渡科技有限公司 | Multi-sensor fusion SLAM system and robot |
CN111307146A (en) * | 2020-03-02 | 2020-06-19 | 北京航空航天大学青岛研究院 | Virtual reality wears display device positioning system based on binocular camera and IMU |
CN111311684A (en) * | 2020-04-01 | 2020-06-19 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
CN111323009A (en) * | 2020-03-09 | 2020-06-23 | 西南交通大学 | Magnetic suspension train positioning method and system |
CN111561923A (en) * | 2020-05-19 | 2020-08-21 | 北京数字绿土科技有限公司 | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion |
CN111737278A (en) * | 2020-08-05 | 2020-10-02 | 鹏城实验室 | Method, system, equipment and storage medium for simultaneous positioning and mapping |
CN111784835A (en) * | 2020-06-28 | 2020-10-16 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
CN111899276A (en) * | 2020-07-07 | 2020-11-06 | 武汉大学 | SLAM method and system based on binocular event camera |
CN112082545A (en) * | 2020-07-29 | 2020-12-15 | 武汉威图传视科技有限公司 | Map generation method, device and system based on IMU and laser radar |
CN112197770A (en) * | 2020-12-02 | 2021-01-08 | 北京欣奕华数字科技有限公司 | Robot positioning method and positioning device thereof |
CN112233177A (en) * | 2020-10-10 | 2021-01-15 | 中国安全生产科学研究院 | Unmanned aerial vehicle pose estimation method and system |
WO2021035703A1 (en) * | 2019-08-30 | 2021-03-04 | 深圳市大疆创新科技有限公司 | Tracking method and movable platform |
CN112509006A (en) * | 2020-12-11 | 2021-03-16 | 北京华捷艾米科技有限公司 | Sub-map recovery fusion method and device |
CN112581514A (en) * | 2019-09-30 | 2021-03-30 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium |
CN112684430A (en) * | 2020-12-23 | 2021-04-20 | 哈尔滨工业大学(威海) | Indoor old person walking health detection method and system, storage medium and terminal |
CN112747749A (en) * | 2020-12-23 | 2021-05-04 | 浙江同筑科技有限公司 | Positioning navigation system based on binocular vision and laser fusion |
CN112767546A (en) * | 2021-01-22 | 2021-05-07 | 湖南大学 | Binocular image-based visual map generation method for mobile robot |
CN113076988A (en) * | 2021-03-25 | 2021-07-06 | 重庆邮电大学 | Mobile robot vision SLAM key frame self-adaptive screening method based on neural network |
CN113516714A (en) * | 2021-07-15 | 2021-10-19 | 北京理工大学 | Visual SLAM method based on IMU pre-integration information acceleration feature matching |
CN113674340A (en) * | 2021-07-05 | 2021-11-19 | 北京物资学院 | Binocular vision navigation method and device based on landmark points |
CN113720323A (en) * | 2021-07-30 | 2021-11-30 | 安徽大学 | Monocular vision through-guidance SLAM method and device based on dotted line feature fusion |
CN113781582A (en) * | 2021-09-18 | 2021-12-10 | 四川大学 | Synchronous positioning and map creating method based on laser radar and inertial navigation combined calibration |
CN114485620A (en) * | 2022-01-29 | 2022-05-13 | 中国科学院国家空间科学中心 | Orbital dynamics fused asteroid detector autonomous visual positioning system and method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
CN107193279A (en) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map structuring system based on monocular vision and IMU information |
2018
- 2018-03-16 CN CN201810218153.1A patent/CN108665540A/en active Pending
Non-Patent Citations (3)
Title |
---|
RAÚL MUR-ARTAL et al.: "Visual-Inertial Monocular SLAM with Map Reuse", IEEE Robotics and Automation Letters * |
XUE-BO ZHANG et al.: "An Extended Kalman Filter-Based Robot Pose Estimation Approach with Vision and Odometry", Wearable Sensors and Robots * |
XIONG Minjun et al.: "UAV pose estimation based on fusion of monocular vision and inertial navigation", Journal of Computer Applications * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108665540A (en) | Robot localization based on binocular vision feature and IMU information and map structuring system | |
CN107193279A (en) | Robot localization and map structuring system based on monocular vision and IMU information | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
Sarlin et al. | Back to the feature: Learning robust camera localization from pixels to pose | |
Zhou et al. | To learn or not to learn: Visual localization from essential matrices | |
Gálvez-López et al. | Real-time monocular object slam | |
CN112634451B (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
CN110345944A (en) | Merge the robot localization method of visual signature and IMU information | |
Saeedi et al. | Vision-based 3-D trajectory tracking for unknown environments | |
CN110125928A (en) | A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames | |
CN108051002A (en) | Transport vehicle space-location method and system based on inertia measurement auxiliary vision | |
CN108682027A (en) | VSLAM realization method and systems based on point, line Fusion Features | |
Huang | Review on LiDAR-based SLAM techniques | |
CN110717927A (en) | Indoor robot motion estimation method based on deep learning and visual inertial fusion | |
Kragic et al. | Robust visual servoing | |
CA2945860C (en) | A method for localizing a robot in a localization plane | |
CN103177269A (en) | Equipment and method used for estimating object posture | |
Pieropan et al. | Robust 3D tracking of unknown objects | |
CN114088081A (en) | Map construction method for accurate positioning based on multi-segment joint optimization | |
Panahandeh et al. | Vision-aided inertial navigation using planar terrain features | |
US10977810B2 (en) | Camera motion estimation | |
Kim et al. | Vision-based navigation with efficient scene recognition | |
Peng et al. | View-invariant full-body gesture recognition via multilinear analysis of voxel data | |
Chen et al. | End-to-end multi-view structure-from-motion with hypercorrelation volume | |
CN111724438B (en) | Data processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||