CN107193279A - Robot localization and map structuring system based on monocular vision and IMU information - Google Patents
- Publication number
- CN107193279A (application CN201710320077.0A)
- Authority
- CN
- China
- Prior art keywords
- imu
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Secondary Cells (AREA)
Abstract
The invention discloses a robot localization and map construction system based on monocular vision and IMU information. The system first estimates the IMU bias model, the absolute scale, and the direction of gravitational acceleration using pure visual navigation information. During visual navigation, an efficient ORB algorithm extracts abundant ORB features from each image frame. A camera motion model is built from an IMU kinematic model based on pre-integration, giving a real-time preliminary estimate of the camera position. Starting from this preliminary estimate, the ORB features between two image frames are matched more accurately, and multi-view geometry is then used to reconstruct the map points in three dimensions. On the basis of visual matching fused with IMU information, a back-end optimization algorithm based on a factor graph estimates the map and the robot pose accurately and in real time. The invention can accurately estimate robot motion and the state of the surrounding environment.
Description
Technical field
The invention belongs to the field of robotics, and in particular relates to a robot localization and map construction system based on visual features and IMU information.
Background technology
Simultaneous localization and mapping (SLAM) is an important problem in the robotics field. The SLAM problem can be described as follows: a robot equipped with sensors moves freely in space and uses the collected sensor information to localize itself, while incrementally building a map on the basis of that self-localization, thereby localizing and mapping at the same time. Two key factors affecting the solution of the SLAM problem are the characteristics of the sensor data and the data association between observations; if the sensor data can be used efficiently and the accuracy and robustness of data association improved, both the localization and the map construction of the robot benefit.
The most common sensors on current mobile devices are monocular cameras and IMU sensors, and how to design algorithms that achieve high-precision simultaneous localization and mapping with these common sensors is a hot topic of current research.
The invention innovatively fuses IMU information with monocular visual information, compensating for the problems inherent in monocular visual SLAM and improving the robustness of the SLAM algorithm. On this basis, a map with true scale information is constructed.
SLAM is also a classical problem in robotics and can be described as follows: a robot starts from an unknown position in an unknown environment, localizes itself during motion from its position estimates and the map, and at the same time builds an incremental map on the basis of that self-localization, realizing the autonomous localization and navigation of the robot. An important factor affecting SLAM is environmental noise and the correlation of observed data; if higher-quality data association can be obtained, it determines the correctness of the observations of the surrounding environment and in turn affects the construction of the whole environment map.
The content of the invention
The object of the invention is to provide a robot localization and map construction system with good robustness, high accuracy, and strong adaptability.
The robot localization and map construction system proposed by the invention is a solution to simultaneous localization and mapping (SLAM) based on monocular visual information and IMU information. The invention first estimates the IMU bias model, absolute scale, and direction of gravitational acceleration using pure visual navigation information. During visual navigation, an efficient ORB feature extraction algorithm extracts abundant ORB descriptors from each image frame. A camera motion model is built from the IMU kinematic model based on pre-integration, and this motion model gives a real-time preliminary estimate of the camera position. Starting from the preliminary estimate, the ORB features between two image frames are matched more accurately, multi-view geometry is then used to estimate the current robot position more precisely, and the map points are reconstructed in three dimensions. On the basis of visual matching fused with IMU information, a back-end optimization algorithm based on a factor graph, with the map-point projection factor and the IMU factor as the factors in the graph, estimates the map accurately and in real time. The invention realizes a robot simultaneous localization and mapping algorithm based on monocular vision and IMU information, and can accurately estimate the robot motion and the state of the surrounding environment.
The robot localization and map construction system provided by the invention, whose structure is shown in Figure 1, comprises: an information collection and extraction module (1), a vision SLAM initialization module (2), a SLAM algorithm initialization module fusing IMU information (3), a tracking module based on a motion model (4), and a back-end optimization module based on a factor graph (5). The information collection and extraction module (1) comprises: an ORB feature extraction submodule (1.1) and an IMU information collection submodule (1.2). The vision SLAM initialization module (2) comprises: a preliminary tracking initialization submodule (2.1), a keyframe generation submodule (2.2), and a map initialization submodule (2.3). The SLAM algorithm initialization module fusing IMU information (3) comprises: an IMU angular-rate bias estimation submodule (3.1), a map scale and gravitational acceleration pre-estimation submodule (3.2), an IMU acceleration bias estimation submodule (3.3), and a keyframe velocity estimation submodule (3.4). The tracking module based on a motion model (4) comprises: an IMU pre-integration submodule (4.1), a motion-model-based matching submodule (4.2), a global feature matching submodule (4.3), a local map matching submodule (4.4), and a relocalization submodule (4.5). The back-end optimization module based on a factor graph (5) comprises: a current-frame state optimization submodule (5.1), a map point and keyframe state optimization submodule (5.2), and a loop-closure detection and global keyframe pose optimization submodule (5.3).
In the information collection and extraction module (1), the ORB feature extraction submodule (1.1) receives the pictures shot by the monocular camera as the robot moves, and extracts the ORB features in each image using the ORB feature extraction algorithm; the IMU information collection submodule (1.2) collects the robot's internal IMU information;
In the vision SLAM initialization module (2), the preliminary tracking initialization submodule (2.1) computes the ORB feature matches between the first two initial images, computes the essential matrix with the eight-point algorithm, and further refines the estimate with bundle adjustment. After submodule (2.1) returns successfully, the system begins tracking in a purely visual manner; the keyframe generation submodule (2.2) monitors intermediate tracking results, adds the current frame to the keyframe set when it satisfies the keyframe conditions, and uses that keyframe as the tracking reference frame. After submodule (2.2) produces a keyframe, the map initialization submodule (2.3) reconstructs the depth of the map points repeatedly observed across keyframes by triangulation, using multi-view geometry to reconstruct their three-dimensional coordinates;
In the SLAM algorithm initialization module fusing IMU information (3), the IMU angular-rate bias estimation submodule (3.1) uses the vision SLAM results and an optimization method to estimate the IMU angular-rate bias. The map scale and gravitational acceleration pre-estimation submodule (3.2) uses the result of submodule (3.1), combines N consecutive keyframes pairwise, derives a linear system from the relations between them, and solves it for the map scale and gravitational acceleration. The IMU acceleration bias estimation submodule (3.3) uses the result of submodule (3.2), removes the influence of gravitational acceleration on the IMU accelerations, combines the N consecutive keyframes pairwise again, and solves a linear system to re-estimate the map scale and gravitational acceleration while also obtaining the IMU acceleration bias. The keyframe velocity estimation submodule (3.4) uses the previously computed results, adds the angular-rate bias, acceleration bias, and velocity to the keyframe state, and directly optimizes over the N consecutive image frames to estimate all N frame velocities;
In the tracking module based on a motion model (4), the IMU pre-integration submodule (4.1) combines all IMU data between two consecutive image frames into a single observation through the IMU pre-integration model, yielding the robot motion model and the current robot pose estimate. The motion-model-based matching submodule (4.2) matches against the keyframe of the previous image within a restricted search range, according to the pose estimate of the motion model. The global feature matching submodule (4.3), in case submodule (4.2) fails, performs whole-image feature matching against the previous frame. The local map matching submodule (4.4), once submodule (4.3) has succeeded in recognition, projects the local map into the current frame according to the motion model and matches it. The relocalization submodule (4.5), in case submodule (4.4) fails, globally matches the current frame against the keyframe set in the database using a bag-of-words model;
In the back-end optimization module based on a factor graph (5), the current-frame state optimization submodule (5.1) takes the visual projection factor and the IMU factor, with the state of the current frame as the optimization variable; the current local map and the previous keyframe participate in the computation without being updated. The map point and keyframe state optimization submodule (5.2) estimates the local map points and the keyframe states in the local window after each new keyframe is added; the error factors include the visual projection factor and the IMU factor. The loop-closure detection and global keyframe pose optimization submodule (5.3) queries the keyframe set in the database with the current frame; when a loop closure is detected, the poses of all keyframes on the loop are estimated, at this point considering only the visual projection factor.
The implementation of each module is described in detail below.
In the information collection and extraction module (1):
The ORB feature extraction submodule (1.1) extracts ORB features from each image the robot collects. ORB features are chosen as the image features because they have good invariance to viewpoint and illumination; moreover, as a binary visual feature, ORB offers fast extraction and fast matching, which suits a real-time SLAM system well. After the ORB features are extracted, the extraction results are stored.
The IMU information collection submodule (1.2) collects the IMU data inside the robot in real time and delivers it to the IMU pre-integration submodule (4.1) for processing. Because the IMU data rate collected by the IMU information collection submodule is far higher than the image acquisition rate, IMU pre-integration reduces the computational load of the back-end optimization module based on a factor graph (5).
In the vision SLAM initialization module (2):
The preliminary tracking initialization submodule (2.1): for two consecutive initialization images F_c, F_r, after features are extracted with the ORB feature extraction submodule (1.1), the features on F_c and F_r are matched, yielding the matched coordinates x_c, x_r. Solving the equation x_c^T F_cr x_r = 0 gives the essential matrix F_cr, and a singular value decomposition of F_cr gives the transformation matrix of F_r relative to F_c. Thereafter the SLAM system can track and continuously estimate the robot position. The estimate is further refined with bundle adjustment because bundle adjustment can further correct the essential matrix, making it agree better with the ORB feature matching results.
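The eight-point solve for F_cr above is a homogeneous least-squares problem: each match contributes one row of a design matrix, and the right singular vector with the smallest singular value, reshaped 3x3, is the matrix. A minimal numpy sketch under the assumption of noiseless, normalized coordinates (synthetic data; real pipelines add Hartley normalization and RANSAC):

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the 3x3 matrix F satisfying x2_i^T F x1_i = 0 from >= 8 matches.

    x1, x2: (N, 2) arrays of corresponding image coordinates.
    """
    h1 = np.hstack([x1, np.ones((len(x1), 1))])
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    # One row per correspondence: Kronecker product of the homogeneous points,
    # matching the row-major flattening of F.
    A = np.stack([np.kron(b, a) for a, b in zip(h1, h2)])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint of an essential/fundamental matrix.
    u, s, v = np.linalg.svd(F)
    s[2] = 0.0
    return u @ np.diag(s) @ v
```

The recovered matrix is defined only up to scale; decomposing it (as the patent describes) yields the relative rotation and translation direction.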
The keyframe generation submodule (2.2) decides, according to rules, whether to select the current frame as a keyframe. The rules are as follows:
1) at least 20 frames have passed since the last relocalization;
2) the local mapping module is idle, or at least 20 frames have passed since the last keyframe insertion;
3) the current frame contains at least 50 feature points.
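The three rules above can be sketched as a single predicate; one plausible reading (an assumption, since the patent does not state how the rules combine) is that all three must hold:

```python
def is_keyframe(frames_since_reloc: int,
                frames_since_last_kf: int,
                n_features: int,
                local_mapping_idle: bool) -> bool:
    """Keyframe test following the three rules, assumed conjunctive.

    Thresholds (20 frames, 50 features) are as stated in the text.
    """
    return (frames_since_reloc >= 20
            and (local_mapping_idle or frames_since_last_kf >= 20)
            and n_features >= 50)
```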
The map initialization submodule (2.3) extracts depth information for the ORB features matched between two consecutive images by triangulation, and builds a linear system from multiple feature matches:

p1 × (M1 P) = 0,  p2 × (M2 P) = 0

where p1, p2 are the matched feature points in the two images, M1, M2 are the projection matrices (extrinsics) of the two images, and P is the map point position to be estimated. Applying a singular value decomposition to this linear system, the singular vector with the smallest singular value is the solution for the three-dimensional coordinates P.
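The SVD solve just described is the standard linear (DLT) triangulation; a minimal numpy sketch, assuming normalized pixel coordinates and 3x4 projection matrices:

```python
import numpy as np

def triangulate(p1, p2, M1, M2):
    """Linear (DLT) triangulation of one map point.

    p1, p2: matched image coordinates (u, v) in the two frames.
    M1, M2: 3x4 projection matrices of the two frames.
    Returns the 3D point P; the solution is the singular vector of the
    stacked system with the smallest singular value, as in the text.
    """
    A = np.stack([
        p1[0] * M1[2] - M1[0],
        p1[1] * M1[2] - M1[1],
        p2[0] * M2[2] - M2[0],
        p2[1] * M2[2] - M2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    P = vt[-1]
    return P[:3] / P[3]   # de-homogenize
```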
In the SLAM algorithm initialization module fusing IMU information (3):
The IMU angular-rate bias estimation submodule (3.1) uses the monocular vision SLAM results, adds the IMU angular-rate bias variable to the robot state vector, and estimates the angular-rate bias. The optimization is:

b_g* = argmin_{b_g} Σ_{i=1}^{N-1} || Log( (ΔR_{i,i+1} Exp(J_ΔR^g b_g))^T (R_WB^i)^T R_WB^{i+1} ) ||²

where N is the current number of keyframes; R_WB^i is obtained by multiplying the camera-center orientation R_WC^i computed by the pure vision SLAM algorithm with the calibration matrix R_CB between the camera and the IMU center (denoted B); ΔR_{i,i+1} is the rotation between keyframes i and i+1 obtained from the IMU pre-integration submodule (4.1); and J_ΔR^g is the first-order approximation of ΔR_{i,i+1} with respect to the angular-rate bias change, also obtained from submodule (4.1). Solving this optimization yields the IMU angular-rate bias b_g.
The map scale and gravitational acceleration pre-estimation submodule (3.2) uses the result of the IMU angular-rate bias estimation submodule (3.1). For the pairwise relations among every three consecutive keyframes, the pairwise pre-integration measurements Δv and Δp yield one linear equation per triple, of the form:

[λ(i) β(i)] · [s; g_W] = γ(i)

where, denoting the three consecutive keyframes i, i+1, i+2 by 1, 2, 3, the blocks λ(i), β(i), γ(i) are built from the camera rotations and positions, the camera-IMU calibration, the inter-frame time differences, and the pre-integrated quantities. In this formula, R_WC denotes the rotation of the camera center C relative to the world frame W, R_WB the rotation of the IMU center B relative to the world frame, p_B the position of the IMU center B in the world frame, I the identity matrix, Δt the time difference between frames, and Δv the camera velocity change between frames; s and g_W are the scale factor and gravitational acceleration to be estimated. All the three-consecutive-keyframe equations are stacked into one linear system, denoted:

A_{3(N-2)×4} x_{4×1} = B_{3(N-2)×1}

A singular value decomposition of this system yields the estimates s* and g_W* of the scale and the gravitational acceleration.
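The stacked system above has one 3-row block per keyframe triple and the unknown 4-vector x = [s, g_W]. A sketch of the solve step only (the block contents depend on the pre-integration terms and are not reproduced here; the synthetic A and B below are built from a known x purely to exercise the solver):

```python
import numpy as np

def solve_scale_gravity(A, B):
    """Least-squares solve of the stacked initialization system.

    A: (3*(N-2), 4) stacked blocks, B: (3*(N-2),) stacked right-hand sides.
    Returns the scale factor s* and the gravity vector g_W* (3,).
    """
    x, *_ = np.linalg.lstsq(A, B, rcond=None)
    return x[0], x[1:4]
```

In the noiseless case the SVD-based least-squares solution recovers the scale and gravity exactly; with real data it gives the minimum-residual estimate the patent refers to.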
The IMU acceleration bias estimation submodule (3.3) uses the result of the map scale and gravitational acceleration pre-estimation submodule (3.2) and exploits the linear relation among every three consecutive keyframes:

[λ(i) φ(i) ζ(i)] · [s; δθ_xy; b_a] = ψ(i)

where λ(i) stays the same as in the previous step, and φ(i), ζ(i), ψ(i) are built analogously; the notation [·]_(:,1:2) denotes taking the first two columns of a matrix. R_WC denotes the rotation of the camera center C relative to the world frame W, R_WB the rotation of the IMU center B relative to the world frame, R_WI the rotation of the current frame relative to the world frame, p_B the position of the IMU center B in the world frame, I the identity matrix, Δt the time difference between frames, and Δv the camera velocity change between frames; s and δθ_xy are the scale factor and the gravitational acceleration direction correction to be estimated, and b_a is the acceleration bias. All the three-consecutive-keyframe equations are stacked into one linear system, denoted:

A_{3(N-2)×6} x_{6×1} = B_{3(N-2)×1}

A singular value decomposition of this linear system yields the scale factor s*, the gravitational acceleration direction δθ_xy*, and the IMU acceleration bias b_a*.
The keyframe velocity estimation submodule (3.4) uses the previously computed results, adds the angular-rate bias, acceleration bias, and velocity to the keyframe state, and directly optimizes over the N consecutive image frames, estimating all N frame velocities.
In the tracking module based on a motion model (4):
The IMU pre-integration submodule (4.1) combines all IMU data between two consecutive image frames into a single observation through the IMU pre-integration model, obtaining the robot motion model and the current robot pose estimate. Denote the IMU frame by B, and the acceleration and angular rate output by the IMU by a_B and ω_B. Then between two IMU outputs i and i+1 the robot motion equations are:

R_WB^{i+1} = R_WB^i Exp(ω_B^i Δt)
v_B^{i+1} = v_B^i + g_W Δt + R_WB^i a_B^i Δt
p_B^{i+1} = p_B^i + v_B^i Δt + ½ g_W Δt² + ½ R_WB^i a_B^i Δt²

Denote the IMU pre-integration results directly between two keyframes k and k+1 by ΔR, Δv, Δp. The motion model between the two keyframes is then:

R_WB^{k+1} = R_WB^k ΔR Exp(J_ΔR^g b_g)
v_B^{k+1} = v_B^k + g_W Δt + R_WB^k (Δv + J_Δv^g b_g + J_Δv^a b_a)
p_B^{k+1} = p_B^k + v_B^k Δt + ½ g_W Δt² + R_WB^k (Δp + J_Δp^g b_g + J_Δp^a b_a)

In the formulas above, R_WB denotes the rotation of the IMU center B relative to the world frame, v_B the IMU velocity, p_B the position of the IMU center B in the world frame, and b_g, b_a the angular-rate and acceleration biases. The Jacobian matrices J^g and J^a are the first-order approximations of the IMU pre-integration observations with respect to bias changes.
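The rotational part of the pre-integration above compounds the per-sample increments Exp(ω Δt), where Exp is the SO(3) exponential map (Rodrigues' formula). A minimal sketch assuming bias-free gyro samples (bias correction via J_ΔR^g is omitted):

```python
import numpy as np

def so3_exp(phi):
    """SO(3) exponential map (Rodrigues' formula) for a rotation vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate_rotation(omegas, dt):
    """Delta R between two frames, compounded from gyro samples (list of (3,))."""
    dR = np.eye(3)
    for w in omegas:
        dR = dR @ so3_exp(np.asarray(w) * dt)
    return dR
```

Compressing the high-rate gyro stream into one ΔR per frame pair is exactly what lets the back end treat the whole inter-frame IMU segment as a single observation.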
The motion-model-based matching submodule (4.2) matches against the keyframe of the previous image within a restricted search range, according to the pose estimate R_WB and p_B^W of the motion model.
The global feature matching submodule (4.3), in case submodule (4.2) fails, performs whole-image feature matching against the previous frame.
The local map matching submodule (4.4), once submodule (4.3) has succeeded in recognition, projects the local map into the current frame according to the motion model and matches it. For a 3D point X_C = [X, Y, Z]^T in space under the camera frame, with the corresponding frame denoted C, the projection of X_C into C is:

π(X_C) = [ f_u X/Z + c_u , f_v Y/Z + c_v ]^T

The projected point is [u, v]^T, where [f_u, f_v] are the camera focal lengths and [c_u, c_v] are the principal point coordinates.
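The pinhole projection π above is a one-liner; a minimal sketch (the intrinsic values in the test are illustrative):

```python
def project(Xc, fu, fv, cu, cv):
    """Pinhole projection pi(X_C) of a camera-frame 3D point, as in the text.

    fu, fv: focal lengths; cu, cv: principal point coordinates.
    """
    X, Y, Z = Xc
    return (fu * X / Z + cu, fv * Y / Z + cv)
```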
The relocalization submodule (4.5), in case submodule (4.4) fails, globally matches the current frame against the keyframe set in the database using a bag-of-words model.
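Relocalization compares a bag-of-words vector of the current frame against each database keyframe. A minimal cosine-similarity ranking over hypothetical word histograms (the vocabulary, histograms, and scoring of a production binary-BoW system with tf-idf weighting are more involved; this only sketches the retrieval step):

```python
import math

def cosine(u, v):
    """Cosine similarity between two word histograms."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def best_keyframe(query, database):
    """Return (index, score) of the database histogram most similar to query."""
    scores = [cosine(query, kf) for kf in database]
    i = max(range(len(scores)), key=scores.__getitem__)
    return i, scores[i]
```

The top-ranked keyframe is then verified geometrically before the pose is accepted.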
In the back-end optimization module based on a factor graph (5):
The current-frame state optimization submodule (5.1) considers the visual projection factor and the IMU factor, with the state of the current frame as the optimization variable; the current local map and the previous keyframe participate in the computation without being updated. The projection error is:

E_proj(i) = ρ((x − π(X_C))^T Σ_x (x − π(X_C)))

where π(X_C) is the projection formula and x is the measured projection coordinate of the space point on the current frame. The transformation from world-frame coordinates X_W to camera-frame coordinates X_C is:

X_C = R_CB R_BW (X_W − p_B^W) + p_B^C

In this formula, R_CB denotes the rotation of the IMU center B relative to the current camera center frame C, and R_BW the rotation from the world frame to the IMU center B; p_B^C denotes the translation of the IMU center B relative to the current camera frame C, and p_B^W the translation of the IMU center B relative to the current world frame W.
The IMU factor can be expressed as:

E_IMU(i, i+1) = ρ([e_R^T e_v^T e_p^T] Σ_I [e_R^T e_v^T e_p^T]^T) + ρ(e_b^T Σ_R e_b)

where the individual error terms are:

e_R = Log((ΔR Exp(J_ΔR^g b_g^j))^T R_BW^i R_WB^{i+1})
e_v = R_BW^i (v_B^{i+1} − v_B^i − g_W Δt) − (Δv + J_Δv^g b_g^j + J_Δv^a b_a^j)
e_p = R_BW^i (p_B^{i+1} − p_B^i − v_B^i Δt − ½ g_W Δt²) − (Δp + J_Δp^g b_g^j + J_Δp^a b_a^j)
e_b = b^j − b^i

In these formulas, J_ΔR^g denotes the first-order approximation of the inter-frame rotation ΔR with respect to the angular-rate bias, and J_Δv and J_Δp the first-order approximations of the inter-frame velocity change Δv and position change Δp with respect to the biases. R_BW denotes the rotation from the world frame W to the IMU frame B, and R_WB the rotation from the IMU frame B to the world frame W. b_g and b_a denote the angular-rate and acceleration biases; e_R, e_p, e_v, e_b are the error terms arising from rotation, translation, velocity, and IMU bias, respectively. We treat all of them as Gaussian; Σ_I is the information matrix of the pre-integration, Σ_R is the information matrix of the bias random walk, and ρ is the Huber robust loss function.
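The robust kernel ρ above down-weights outlier matches: quadratic near zero, linear beyond a threshold. A minimal sketch of the Huber loss applied to a squared residual (the threshold delta is a tuning parameter, not specified in the patent):

```python
import math

def huber(r2: float, delta: float = 1.0) -> float:
    """Huber robust loss applied to a squared residual r2.

    Quadratic for |r| <= delta, linear beyond, so single bad
    correspondences cannot dominate the factor-graph objective.
    """
    r = math.sqrt(r2)
    if r <= delta:
        return r2
    return 2.0 * delta * r - delta * delta
```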
The map point and keyframe state optimization submodule (5.2) estimates the local map points and the keyframe states in the local window after each new keyframe is added; the error factors include the visual projection factor and the IMU factor, with the formulas as shown in the current-frame state optimization submodule (5.1).
The loop-closure detection and global keyframe pose optimization submodule (5.3) queries the keyframe set in the database with the current frame; when a loop closure is detected, the poses of all keyframes on the loop are estimated, at this point considering only the visual projection factor, with the projection factor formula as shown in the current-frame state optimization submodule (5.1).
Because the invention adopts a method that fuses visual information with IMU information, it can well solve the problems in visual SLAM and improve the robustness of the SLAM system.
Brief description of the drawings
Fig. 1 is a schematic diagram of the system structure of the invention.
Fig. 2 is a schematic flow diagram of the system of the invention.
Fig. 3 shows the results of running the system of the invention, where (a) is the scale factor over time, (b) the acceleration bias over time, and (c) the angular-rate bias error over time.
Fig. 4 shows the results of the back-end optimization module optimizing the factor graph in the invention, where (a) shows that an accurate motion trajectory estimate is obtained in the case of simple motion, and (b) that an accurate motion trajectory is estimated in the case of simple compound motion.
Embodiment
A preferred embodiment of the invention is given below with reference to Fig. 1 and Fig. 2 and described in detail, so as to better understand the invention; it is not intended to limit the scope of the invention. The structure of the core algorithm and its peripheral components is shown in Fig. 1 and Fig. 2.
Taking a drone equipped with a monocular camera and an IMU as an example, the execution of the algorithm on this device is described in detail.
First, in the information collection and extraction module (1), features are extracted from the visual information and the IMU information is collected. These two kinds of information constitute the input of the algorithm; as the robot moves through space, both parts continuously acquire and extract the newly arriving data. Images are collected with the front-facing monocular camera and passed to the feature extraction algorithm, and the positions and descriptors of the extracted image features are stored. At the same time, the signals of the built-in IMU sensor are collected and sent to the IMU pre-integration algorithm for processing.
Using the extracted feature point information, the vision SLAM initialization module (2) initializes the visual SLAM part, initializing in turn the tracking module, the keyframe generation module, and the map point initialization module. After initialization, the system stores the initialized map points and the initial position information. The prerequisite for this part to run normally is that the information collection and extraction module (1) provides correct feature extraction; the initialization result of this part is the basis for the subsequent modules. After visual initialization is complete, the system executes the visual SLAM algorithm part, which comprises the tracking module based on a motion model (4) and the back-end optimization module based on a factor graph (5).
After the visual SLAM has run for some time, the SLAM algorithm initialization module fusing IMU information (3) is started. First, the IMU angular-rate bias estimation (3.1) estimates the IMU angular-rate bias. Next, the map scale and gravitational acceleration pre-estimation (3.2) makes a preliminary estimate of the map scale and gravitational acceleration; once the gravitational acceleration is available, the IMU acceleration bias can be estimated more accurately. The IMU acceleration bias estimation with scale and gravity re-estimation (3.3) estimates the IMU acceleration bias while re-estimating the scale and gravitational acceleration. Finally, the keyframe velocity estimation (3.4) estimates the velocity of each keyframe, completing the initialization of the SLAM algorithm fusing IMU information. The result of this part provides the absolute scale information of the map and the IMU bias information. For example, in an indoor navigation task, the scale of the map obtained by monocular vision SLAM is undetermined, and this module can estimate the absolute scale of the map. The results of running the system are shown in Fig. 3: the scale factor stabilizes at one value after a certain time, meaning the absolute scale estimate has converged to a stable value. The acceleration and angular-rate bias errors in the x, y, and z directions also remain stable after a certain time, showing that the algorithm's estimation of the IMU biases has converged. The subsequent algorithms use these results.
To make fuller use of the IMU information, we designed the tracking module based on a motion model (4), whose main content is shown in Fig. 1. First, the IMU information collected by the information collection and extraction module (1) is used to build the pre-integration-based IMU motion model, which is used for the motion-model-based matching (4.2). When there are enough matched points, the RANSAC module is started to estimate the current frame position; this step accelerates feature matching. When there are not enough feature points, indicating a deviation between the motion model and the real motion, global feature matching (4.3) is started, and after global matching succeeds the current frame position is re-estimated. If it fails, the local map matching module (4.4) is started, projecting the local map into the current frame for matching. If that fails, the relocalization module (4.5) is started. This part executes each time the information collection and extraction module (1) finishes processing; it makes the pure visual SLAM results more robust and reduces tracking failures.
After the tracking algorithm solves the data association problem, the back-end factor graph has been built. The back-end optimization module based on a factor graph (5) then optimizes the factor graph to obtain more accurate estimates of the robot position and the map points. This module runs as three separate parts. First, current frame optimization (5.1) starts as each image frame arrives, in order to further refine the current position on the basis of tracking. Map point and keyframe optimization (5.2) runs when each keyframe is inserted, in order to further estimate the positions of keyframes and map points. Loop-closure detection and global keyframe pose optimization (5.3) implements a loop-closure detection algorithm and, when a loop closure is found, further optimizes the poses of all current keyframes; this step uses the result of loop-closure detection and can eliminate the influence of accumulated error.
The results of running the algorithm are shown in Fig. 4, where the solid line is the real trajectory, the dashed line is the algorithm's estimate, and the dotted line is the deviation between the estimate and the real trajectory. It can be seen that accurate estimated trajectories are obtained under both the simple (a) and complex (b) motion cases.
Claims (6)
1. A robot localization and map structuring system based on monocular vision and IMU information, characterized by comprising: an information acquisition and extraction module (1), a vision SLAM initialization module (2), a SLAM algorithm initialization module (3) fusing IMU information, a tracking module (4) based on a motion model, and a back-end optimization module (5) based on a factor graph; the information acquisition and extraction module (1) comprises: an ORB feature extraction submodule (1.1) and an IMU information acquisition submodule (1.2); the vision SLAM initialization module (2) comprises: a preliminary tracking initialization submodule (2.1), a key frame generation submodule (2.2), and a map initialization submodule (2.3); the SLAM algorithm initialization module (3) fusing IMU information comprises: an IMU angular rate bias estimation submodule (3.1), a map scale and gravitational acceleration pre-estimation submodule (3.2), an IMU acceleration bias estimation submodule (3.3), and a key frame velocity estimation submodule (3.4); the tracking module (4) based on a motion model comprises: an IMU pre-integration submodule (4.1), a motion-model-based matching submodule (4.2), a global feature matching submodule (4.3), a local map matching submodule (4.4), and a relocalization submodule (4.5); the back-end optimization module (5) based on a factor graph comprises: a current frame state optimization submodule (5.1), a map point and key frame state optimization submodule (5.2), and a loop closure detection and global key frame pose optimization submodule (5.3);
in the information acquisition and extraction module (1), the ORB feature extraction submodule (1.1) receives the pictures shot by the monocular camera while the robot moves and extracts the ORB features in each image using the ORB feature extraction algorithm; the IMU information acquisition submodule (1.2) collects the robot's internal IMU readings;
in the vision SLAM initialization module (2), the preliminary tracking initialization submodule (2.1) computes ORB feature matches between two consecutive initial images, computes the fundamental matrix using the eight-point method, and then uses bundle adjustment to further refine the estimate; after submodule (2.1) returns successfully, the system starts tracking in a purely visual manner, and the key frame generation submodule (2.2) monitors the intermediate tracking results: when the current frame meets the key frame conditions it is added to the key frame set, and the current key frame serves as the tracking reference frame; after submodule (2.2) generates a key frame, the map initialization submodule (2.3) reconstructs the depth information of the map points recurring across key frames by triangulation, using multi-view geometry to reconstruct their three-dimensional coordinates;
in the SLAM algorithm initialization module (3) fusing IMU information, the IMU angular rate bias estimation submodule (3.1) uses the vision SLAM results to estimate the IMU angular rate bias by an optimization method; the map scale and gravitational acceleration pre-estimation submodule (3.2) uses the result of submodule (3.1), combines N consecutive key frames pairwise, derives a linear equation from the relations between them, and solves it to obtain the map scale and gravitational acceleration; the IMU acceleration bias estimation submodule (3.3) uses the result of submodule (3.2), removes the influence of gravitational acceleration on the IMU acceleration, again combines the N consecutive key frames pairwise, and solves a linear equation to obtain a re-estimate of the map scale and gravitational acceleration together with the IMU acceleration bias; the key frame velocity estimation submodule (3.4) uses the previously computed results, adds the angular rate bias, acceleration bias, and velocity to the key frame state, and directly optimizes over the N consecutive image frames to estimate all N frame velocities;
in the tracking module (4) based on a motion model, the IMU pre-integration submodule (4.1) aggregates all IMU data between two consecutive image frames into a single observation through the IMU pre-integration model, obtaining the robot's motion model and the robot's current pose estimate; the motion-model-based matching submodule (4.2) matches against the key frame of the previous image frame within a certain search range according to the pose estimated by the motion model; the global feature matching submodule (4.3), in case submodule (4.2) fails, performs whole-image feature matching against the previous frame; the local map matching submodule (4.4), in case submodule (4.3) fails, projects the local map onto the current frame according to the motion model and matches there; the relocalization submodule (4.5), in case submodule (4.4) fails, globally matches the current frame against the key frame set in the database using a bag-of-words model;
in the back-end optimization module (5) based on a factor graph, the current frame state optimization submodule (5.1) optimizes the state of the current frame as the optimization variable according to the visual projection factor and the IMU factor, while the current local map and the previous key frame participate in the computation without being updated; the map point and key frame state optimization submodule (5.2), after a new key frame is added, estimates the local map points and the key frame states within the local window, with error factors including the visual projection factor and the IMU factor; the loop closure detection and global key frame pose optimization submodule (5.3) queries the key frame set in the database with the current frame, and when a loop closure is detected, estimates the poses of all key frames on the loop, at this point considering only the visual projection factor.
2. The robot localization and map structuring system based on monocular vision and IMU information according to claim 1, characterized in that, in the information acquisition and extraction module (1):
the ORB feature extraction submodule (1.1) performs ORB feature extraction on each image the robot collects and, after the ORB features are extracted, stores the extraction results;
the IMU information acquisition submodule (1.2) acquires the robot's internal IMU data in real time and delivers it to the IMU pre-integration submodule (4.1) for processing.
3. The robot localization and map structuring system based on monocular vision and IMU information according to claim 2, characterized in that, in the vision SLAM initialization module (2):
the preliminary tracking initialization submodule (2.1) extracts features from two consecutive initialization images F_c, F_r using the ORB feature extraction submodule (1.1), performs feature matching on the features of F_c, F_r to obtain matching results x_c, x_r, solves the equation x_c F_cr x_r = 0 to obtain the fundamental matrix F_cr, and performs singular value decomposition on F_cr to obtain the transformation matrix of F_r relative to F_c; thereafter the SLAM system continuously estimates the robot location by tracking; the reason for further refining the estimate with bundle adjustment is that bundle adjustment can further correct the fundamental matrix, making it better fit the ORB feature matching results;
the key frame generation submodule (2.2) decides whether to select the current frame as a key frame according to the following rules:
1) at least 20 frames have passed since the last relocalization;
2) the local mapping module is idle, or at least 20 frames have passed since the last key frame insertion;
3) the current frame contains at least 50 feature points;
the map initialization submodule (2.3) extracts the depth information of the ORB features matched between two consecutive frames by triangulation and, using multiple feature matching results, builds the linear equation:
$$\begin{bmatrix} [p_{1}]_{\times} M_{1} \\ [p_{2}]_{\times} M_{2} \end{bmatrix} P = 0$$
where p_1, p_2 are the matched feature points in the two frames, [·]_× denotes the skew-symmetric matrix of a point, M_1, M_2 are the extrinsic matrices of the two images, and P is the map point position to be estimated; singular value decomposition is applied to this linear equation, and the singular vector with the smallest singular value is the solution for the three-dimensional coordinates P.
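A minimal NumPy sketch of this triangulation step, assuming p_1, p_2 are homogeneous pixel coordinates and M_1, M_2 are 3×4 projection matrices; the cross-product rows are stacked and the right singular vector with the smallest singular value is taken as P:

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix [p]x so that skew(p) @ q == np.cross(p, q)."""
    x, y, z = p
    return np.array([[0, -z, y],
                     [z, 0, -x],
                     [-y, x, 0]])

def triangulate(p1, p2, M1, M2):
    """Solve [[p1]x M1; [p2]x M2] P = 0 by SVD; return P in 3D."""
    A = np.vstack([skew(p1) @ M1, skew(p2) @ M2])
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1]            # right singular vector of the smallest singular value
    return P[:3] / P[3]   # de-homogenize

# Toy check: a known 3D point seen by two cameras (made-up extrinsics).
X = np.array([0.5, -0.2, 4.0, 1.0])                        # homogeneous world point
M1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
M2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted camera
p1, p2 = M1 @ X, M2 @ X                                    # homogeneous projections
print(triangulate(p1, p2, M1, M2))  # -> approximately [0.5, -0.2, 4.0]
```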
4. The robot localization and map structuring system based on monocular vision and IMU information according to claim 3, characterized in that, in the SLAM algorithm initialization module (3) fusing IMU information:
the IMU angular rate bias estimation submodule (3.1) uses the monocular vision SLAM results, adds the IMU angular rate bias variable to the robot state vector, and estimates the IMU angular rate bias; the optimization formula is:
$$\arg\min_{b_g} \sum_{i=1}^{N-1} \left\| \log\!\left( \left( \Delta R_{i,i+1}\,\mathrm{Exp}\!\left( J_{\Delta R}^{g}\, b_{g} \right) \right)^{T} R_{BW}^{i+1}\, R_{WB}^{i} \right) \right\|^{2}$$
where N is the current number of key frames; R_WB^i is obtained by multiplying the camera center C rotation computed by the pure vision SLAM algorithm with the calibration matrix R_CB between the camera and the IMU center (denoted B); ΔR_{i,i+1} is the rotation matrix between key frames i and i+1 obtained from the IMU pre-integration submodule (4.1); J^g_{ΔR} is the first-order approximation, obtained from the IMU pre-integration submodule (4.1), of the change of ΔR_{i,i+1} with respect to the angular rate bias error; solving this optimization yields the IMU angular rate bias b_g;
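Under the small-angle identity log((ΔR_{i,i+1} Exp(J^g b_g))^T R_rel) = J^g b_g when ΔR_{i,i+1} = R_rel Exp(−J^g b_g), the minimization reduces to a linear least-squares problem in b_g. A sketch with NumPy and SciPy on synthetic data; the Jacobians and relative rotations below are invented for illustration, not taken from the patent:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
b_true = np.array([0.01, -0.02, 0.015])   # synthetic gyro bias

rows, rhs = [], []
for _ in range(10):
    J = rng.normal(size=(3, 3)) * 0.5     # made-up pre-integration Jacobian J_dR^g
    R_rel = Rotation.from_rotvec(rng.normal(size=3) * 0.05)  # vision relative rotation
    # Pre-integrated rotation consistent with the bias: dR = R_rel * Exp(-J b_true)
    dR = R_rel * Rotation.from_rotvec(-J @ b_true)
    # Residual: log(dR^T R_rel) = J @ b_g, a linear constraint on b_g
    rows.append(J)
    rhs.append((dR.inv() * R_rel).as_rotvec())

A = np.vstack(rows)
y = np.concatenate(rhs)
b_est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(b_est)   # recovers b_true
```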
the map scale and gravitational acceleration pre-estimation submodule (3.2) uses the result of the IMU angular rate bias estimation submodule (3.1) and, for the pairwise relations among three connected key frames, uses the pairwise pre-integration measurements Δv to obtain the linear equation:
$$\begin{bmatrix} \lambda(i) & \beta(i) \end{bmatrix} \begin{bmatrix} s \\ g_{W} \end{bmatrix} = \gamma(i)$$
Denoting three consecutive key frames i, i+1, i+2 by 1, 2, 3, the terms in the above formula are:
$$\lambda(i) = \left( {}^{W}p_{C}^{2} - {}^{W}p_{C}^{1} \right) \Delta t_{23} - \left( {}^{W}p_{C}^{3} - {}^{W}p_{C}^{2} \right) \Delta t_{12}$$
$$\beta(i) = \frac{1}{2} I_{3\times 3} \left( \Delta t_{12}^{2}\,\Delta t_{23} + \Delta t_{23}^{2}\,\Delta t_{12} \right)$$
$$\begin{aligned} \gamma(i) ={} & \left( R_{WC}^{2} - R_{WC}^{1} \right) {}^{C}p_{B}\,\Delta t_{23} - \left( R_{WC}^{3} - R_{WC}^{2} \right) {}^{C}p_{B}\,\Delta t_{12} + R_{WB}^{2}\,\Delta p_{23}\,\Delta t_{12} \\ & + R_{WB}^{1}\,\Delta v_{12}\,\Delta t_{12}\,\Delta t_{23} - R_{WB}^{1}\,\Delta p_{12}\,\Delta t_{23} \end{aligned}$$
In the above formulas, R_WC denotes the rotation matrix of the camera center C relative to the world coordinates W, R_WB the rotation matrix of the IMU center B relative to the world coordinates, and ^W p_B the position of the IMU center B in world coordinates; I is the identity matrix, Δt is the time difference between frames, Δv denotes the inter-frame camera velocity, and s and g_W are the scale factor and gravitational acceleration to be estimated; stacking the equations of all triples of consecutive key frames into one linear system gives:

$$A_{3(N-2)\times 4}\, x_{4\times 1} = B_{3(N-2)\times 1}$$

Singular value decomposition of this system yields the estimates s* and ĝ_W of the scale and the gravitational acceleration.
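A sketch of the stacked solve with NumPy, using synthetic λ, β, γ blocks; all values below are made up for illustration, whereas in the system they come from the key-frame poses and pre-integration terms:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                                 # number of key frames
x_true = np.array([2.5, 0.0, 0.0, -9.81])             # scale s and gravity g_W

blocks, rhs = [], []
for _ in range(N - 2):                                # one 3x4 block per key-frame triple
    lam = rng.normal(size=(3, 1))                     # stands in for lambda(i)
    beta = rng.normal() * np.eye(3)                   # stands in for beta(i)
    A_i = np.hstack([lam, beta])
    blocks.append(A_i)
    rhs.append(A_i @ x_true)                          # gamma(i) consistent with x_true

A = np.vstack(blocks)                                 # shape (3(N-2), 4)
B = np.concatenate(rhs)
x, *_ = np.linalg.lstsq(A, B, rcond=None)             # least-squares solve of A x = B
s_est, g_est = x[0], x[1:]
print(s_est, g_est)   # recovers the scale 2.5 and gravity [0, 0, -9.81]
```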
the IMU acceleration bias estimation submodule (3.3) uses the result of the map scale and gravitational acceleration pre-estimation submodule (3.2) and exploits the linear relationship among three consecutive key frames:
$$\begin{bmatrix} \lambda(i) & \phi(i) & \zeta(i) \end{bmatrix} \begin{bmatrix} s \\ \delta\theta_{xy} \\ b_{a} \end{bmatrix} = \psi(i)$$
where λ(i) is unchanged from the previous step, and φ(i), ζ(i), ψ(i) are computed as follows:
$$\phi(i) = \left[ \frac{1}{2} R_{WI} \left( \hat{g}_{I} \right)_{\times} G \left( \Delta t_{12}^{2}\,\Delta t_{23} + \Delta t_{23}^{2}\,\Delta t_{12} \right) \right]_{(:,\,1:2)}$$
$$\zeta(i) = R_{WB}^{2}\, J_{\Delta p_{23}}^{a}\, \Delta t_{12} + R_{WB}^{1}\, J_{\Delta v_{23}}^{a}\, \Delta t_{12}\,\Delta t_{23} - R_{WB}^{1}\, J_{\Delta p_{12}}^{a}\, \Delta t_{23}$$
$$\begin{aligned} \psi(i) ={} & \left( R_{WC}^{2} - R_{WC}^{1} \right) {}^{C}p_{B}\,\Delta t_{23} - \left( R_{WC}^{3} - R_{WC}^{2} \right) {}^{C}p_{B}\,\Delta t_{12} + R_{WB}^{2}\,\Delta p_{23}\,\Delta t_{12} \\ & + R_{WB}^{1}\,\Delta v_{12}\,\Delta t_{12}\,\Delta t_{23} - R_{WB}^{1}\,\Delta p_{12}\,\Delta t_{23} + \frac{1}{2} R_{WI}\, \hat{g}_{I}\, G\, \Delta t_{ij}^{2} \end{aligned}$$
In the above formulas, [·]_{(:,1:2)} denotes taking the first two columns of the matrix; R_WC denotes the rotation matrix of the camera center C relative to the world coordinates W, R_WB the rotation matrix of the IMU center B relative to the world coordinates, R_WI the rotation matrix of the current coordinates relative to the world coordinates, and ^W p_B the position of the IMU center B in world coordinates; I is the identity matrix, Δt is the time difference between frames, Δv denotes the inter-frame camera velocity, s and δθ_xy are the scale factor and the gravitational acceleration deflection to be estimated, and b_a is the acceleration bias; stacking the equations of all triples of consecutive key frames into one linear system gives:

$$A_{3(N-2)\times 6}\, x_{6\times 1} = B_{3(N-2)\times 1}$$

Singular value decomposition of this linear system yields the scale factor s*, the gravitational acceleration direction δθ_xy*, and the IMU acceleration bias b_a*;
the key frame velocity estimation submodule (3.4) uses the previously computed results, adds the angular rate bias, the acceleration bias, and the velocity to the key frame state, and directly optimizes over the N consecutive image frames to estimate all N frame velocities.
5. The robot localization and map structuring system based on monocular vision and IMU information according to claim 4, characterized in that, in the tracking module (4) based on a motion model:
the IMU pre-integration submodule (4.1) aggregates all IMU data between two consecutive image frames into a single observation through the IMU pre-integration model, obtaining the robot's motion model and the robot's current pose estimate; denoting the IMU by B and the acceleration and angular rate output by the IMU as a_B and ω_B, the robot motion equations between two IMU outputs i and i+1 are:
$$R_{WB}^{i+1} = R_{WB}^{i}\, \mathrm{Exp}\!\left( \left( \omega_{B}^{i} - b_{g}^{i} \right) \Delta t \right)$$
$${}^{W}v_{B}^{i+1} = {}^{W}v_{B}^{i} + g_{W}\,\Delta t + R_{WB}^{i} \left( a_{B}^{i} - b_{a}^{i} \right) \Delta t$$
$${}^{W}p_{B}^{i+1} = {}^{W}p_{B}^{i} + {}^{W}v_{B}^{i}\,\Delta t + \frac{1}{2} g_{W}\,\Delta t^{2} + \frac{1}{2} R_{WB}^{i} \left( a_{B}^{i} - b_{a}^{i} \right) \Delta t^{2}$$
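These discrete kinematic updates can be sketched directly in NumPy; the rotation exponential Exp(·) is implemented via Rodrigues' formula, and the toy check below assumes zero gravity, zero bias, and no rotation purely for illustration:

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues' formula: rotation matrix Exp(phi) for a rotation vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def imu_step(R, v, p, omega, a, b_g, b_a, g, dt):
    """One step of the motion equations between IMU samples i and i+1."""
    acc = R @ (a - b_a)                              # bias-corrected world-frame acceleration
    p_new = p + v * dt + 0.5 * g * dt**2 + 0.5 * acc * dt**2
    v_new = v + g * dt + acc * dt
    R_new = R @ exp_so3((omega - b_g) * dt)
    return R_new, v_new, p_new

# Toy check: constant 1 m/s^2 along x, no rotation, no gravity, no bias.
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
zero = np.zeros(3)
for _ in range(100):                                  # integrate 1 s at dt = 0.01
    R, v, p = imu_step(R, v, p, zero, np.array([1.0, 0, 0]), zero, zero, zero, 0.01)
print(v, p)   # v -> [1, 0, 0]; p -> [0.5, 0, 0] for this discrete scheme
```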
Denoting the direct IMU pre-integration results between two key frames k and k+1 by ΔR, Δv, Δp, the motion model between the two key frames is:
$$R_{WB}^{k+1} = R_{WB}^{k}\, \Delta R_{k,k+1}\, \mathrm{Exp}\!\left( J_{\Delta R}^{g}\, b_{g}^{k} \right)$$
$${}^{W}v_{B}^{k+1} = {}^{W}v_{B}^{k} + g_{W}\,\Delta t_{k,k+1} + R_{WB}^{k} \left( \Delta v_{k,k+1} + J_{\Delta v}^{g}\, b_{g}^{k} + J_{\Delta v}^{a}\, b_{a}^{k} \right)$$
$${}^{W}p_{B}^{k+1} = {}^{W}p_{B}^{k} + {}^{W}v_{B}^{k}\,\Delta t_{k,k+1} + \frac{1}{2} g_{W}\,\Delta t_{k,k+1}^{2} + R_{WB}^{k} \left( \Delta p_{k,k+1} + J_{\Delta p}^{g}\, b_{g}^{k} + J_{\Delta p}^{a}\, b_{a}^{k} \right)$$
In the above formulas, R_WB denotes the rotation matrix of the IMU center B relative to the world coordinates, v_B the IMU velocity, ^W p_B the position of the IMU center B in world coordinates, and b_g, b_a the angular rate and acceleration biases respectively; the Jacobian matrices J^g_{ΔR}, J^g_{Δv}, J^a_{Δv}, J^g_{Δp}, J^a_{Δp} represent the first-order approximations of the change of the IMU pre-integration observations with respect to the bias errors;
the motion-model-based matching submodule (4.2) estimates R_WB and ^W p_B according to the pose of the motion model, and matches against the key frame of the previous image frame within a certain search range;
the global feature matching submodule (4.3), in case submodule (4.2) fails, performs whole-image feature matching against the previous frame;
the local map matching submodule (4.4), in case submodule (4.3) fails, projects the local map onto the current frame according to the motion model and matches there; for a 3D point X_c = [X_c, Y_c, Z_c] in space under the camera coordinates, with the corresponding frame denoted C, the formula projecting X_c into C is:
$$\pi(X_c) = \begin{bmatrix} f_{u} \dfrac{X_{c}}{Z_{c}} + c_{u} \\[4pt] f_{v} \dfrac{Y_{c}}{Z_{c}} + c_{v} \end{bmatrix}, \qquad X_{c} = \left[ X_{c}, Y_{c}, Z_{c} \right]$$
where the projected point is the pixel [u, v], [f_u, f_v] is the camera focal length, and [c_u, c_v] are the coordinates of the optical center;
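A minimal NumPy sketch of this pinhole projection; the intrinsic values below are arbitrary examples, not values from the patent:

```python
import numpy as np

def project(X_c, fu, fv, cu, cv):
    """Pinhole projection pi(X_c): 3D camera-frame point to pixel coordinates."""
    X, Y, Z = X_c
    return np.array([fu * X / Z + cu, fv * Y / Z + cv])

# Example with made-up intrinsics: f = 500 px, optical center (320, 240).
print(project(np.array([1.0, 2.0, 5.0]), 500, 500, 320, 240))  # -> [420. 440.]
```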
the relocalization submodule (4.5), in case submodule (4.4) fails, globally matches the current frame against the key frame set in the database using a bag-of-words model.
6. The robot localization and map structuring system based on monocular vision and IMU information according to claim 5, characterized in that, in the back-end optimization module (5) based on a factor graph:
the current frame state optimization submodule (5.1) takes the state of the current frame as the optimization variable according to the visual projection factor and the IMU factor, while the current local map and the previous key frame participate in the computation without being updated; the projection error is:

$$E_{proj}(i) = \rho\!\left( \left( x - \pi(X_c) \right)^{T} \Sigma_{x} \left( x - \pi(X_c) \right) \right)$$

where π(X_c) is the projection formula and x denotes the projection coordinates of the spatial point on the current frame; the conversion from the world coordinate value X_W to the camera coordinate value X_c is:
$$X_{c} = R_{CB}\, R_{BW}^{i} \left( X_{W} - {}^{W}p_{B}^{i} \right) + {}^{C}p_{B}$$
In the above formula, R_CB denotes the rotation matrix of the IMU center B relative to the current camera center C coordinates, and R_BW denotes the rotation matrix of the world coordinates relative to the IMU center B; ^C p_B denotes the translation vector of the IMU center B relative to the current camera center C coordinates, and ^W p_B denotes the translation vector of the IMU center B relative to the world coordinates W;
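A small NumPy sketch of this world-to-camera conversion; the calibration and pose values below are invented placeholders, not values from the patent:

```python
import numpy as np

def world_to_camera(X_w, R_cb, R_bw, p_wb, p_cb):
    """X_c = R_CB R_BW (X_W - p_WB) + p_CB."""
    return R_cb @ (R_bw @ (X_w - p_wb)) + p_cb

# Identity extrinsics and a body offset of 1 m along z (made-up values).
R_cb = np.eye(3)                      # IMU-to-camera mounting rotation
R_bw = np.eye(3)                      # world-to-IMU rotation
p_wb = np.array([0.0, 0.0, 1.0])      # IMU position in world coordinates
p_cb = np.zeros(3)                    # IMU-to-camera translation
X_w = np.array([0.5, -0.2, 5.0])
print(world_to_camera(X_w, R_cb, R_bw, p_wb, p_cb))  # -> [0.5, -0.2, 4.0]
```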
the IMU factor is expressed as:
$$E_{IMU}(i,j) = \rho\!\left( \left[ e_{R}^{T}, e_{v}^{T}, e_{p}^{T} \right] \Sigma_{I} \left[ e_{R}^{T}, e_{v}^{T}, e_{p}^{T} \right]^{T} \right) + \rho\!\left( e_{b}^{T}\, \Sigma_{R}\, e_{b} \right)$$
where the error terms are respectively expressed as:
$$e_{R} = \log\!\left( \left( \Delta R_{i,j}\, \mathrm{Exp}\!\left( J_{\Delta R}^{g}\, b_{g}^{i} \right) \right)^{T} R_{BW}^{i}\, R_{WB}^{j} \right)$$
$$e_{v} = R_{BW}^{i} \left( {}^{W}v_{B}^{j} - {}^{W}v_{B}^{i} - g_{W}\,\Delta t_{ij} \right) - \left( \Delta v_{ij} + J_{\Delta v}^{g}\, b_{g}^{j} + J_{\Delta v}^{a}\, b_{a}^{j} \right)$$
$e_b = b^{j} - b^{i}$
In the above formulas, $J_{\Delta R}^{g}$ denotes the first-order approximation of the inter-frame rotation change $\Delta R$ with respect to the gyroscope bias, while $J_{\Delta v}^{g}$ and $J_{\Delta v}^{a}$ denote the first-order approximations of the inter-frame velocity change $\Delta v$ with respect to the gyroscope and accelerometer biases, respectively. $R_{BW}$ is the rotation matrix from the world frame W to the IMU frame B, and $R_{WB}$ is the rotation matrix from the IMU frame B to the world frame W. $b_g$ and $b_a$ are the biases of the angular velocity (gyroscope) and of the acceleration (accelerometer) measurements; $e_R$, $e_p$, $e_v$ and $e_b$ are the error terms contributed by rotation, translation, velocity and IMU bias, respectively, all of which are modeled as Gaussian. $\Sigma_I$ is the information matrix of the pre-integration, $\Sigma_R$ is the information matrix of the bias random walk, and $\rho(\cdot)$ denotes the Huber robust loss function;
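As an illustration only, the residuals above can be evaluated numerically as in the sketch below. This is a minimal sketch and not the patent's implementation: all function names are hypothetical, the position residual $e_p$ is filled in following the standard IMU pre-integration formulation, and SciPy's `Rotation` is assumed for the SO(3) exponential and logarithm maps.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def so3_exp(phi):
    """SO(3) exponential: rotation vector in R^3 -> rotation matrix."""
    return Rotation.from_rotvec(phi).as_matrix()


def so3_log(R):
    """SO(3) logarithm: rotation matrix -> rotation vector in R^3."""
    return Rotation.from_matrix(R).as_rotvec()


def huber(s, delta=1.0):
    """Huber robust loss rho(s) applied to a squared error s >= 0."""
    r = np.sqrt(s)
    return s if r <= delta else 2.0 * delta * r - delta**2


def imu_residuals(Rwb_i, Rwb_j, v_i, v_j, p_i, p_j,
                  dR, dv, dp, dt, g_w,
                  J_dR_bg, J_dv_bg, J_dv_ba, J_dp_bg, J_dp_ba,
                  b_g, b_a):
    """Rotation/velocity/position residuals of the IMU pre-integration
    factor between keyframes i and j, with first-order bias correction
    (J_* are the bias Jacobians of the pre-integrated quantities)."""
    Rbw_i = Rwb_i.T                       # R_BW^i = (R_WB^i)^T
    e_R = so3_log((dR @ so3_exp(J_dR_bg @ b_g)).T @ Rbw_i @ Rwb_j)
    e_v = Rbw_i @ (v_j - v_i - g_w * dt) \
        - (dv + J_dv_bg @ b_g + J_dv_ba @ b_a)
    e_p = Rbw_i @ (p_j - p_i - v_i * dt - 0.5 * g_w * dt**2) \
        - (dp + J_dp_bg @ b_g + J_dp_ba @ b_a)
    return e_R, e_v, e_p
```

With noise-free pre-integrated measurements and zero biases, all three residuals vanish, which is a quick sanity check of the signs and frame conventions.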
After a new keyframe is added, the map point and keyframe state optimization submodule (5.2) estimates the local map points and the states of the keyframes within the local window; the error factors comprise the visual projection factor and the IMU factor, with the same formulas as in the current-frame state optimization submodule (5.1).
The loop closure detection and global keyframe pose optimization submodule (5.3) queries the current frame against the set of keyframes in the database; when a loop closure is detected, the poses of all keyframes are re-estimated, at this stage considering only the visual projection factor, whose formula is as given in the current-frame state optimization submodule (5.1).
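The database query step can be sketched as a similarity search over per-keyframe appearance descriptors (e.g., bag-of-words histograms). The sketch below is purely illustrative: the function name, score threshold, and recent-frame exclusion window are assumptions, not details taken from the patent.

```python
import numpy as np


def query_loop_candidates(bow_db, bow_query, min_score=0.3, exclude_last=20):
    """Return indices of database keyframes similar enough to the query
    frame to be loop-closure candidates, best score first.

    bow_db    : (N, K) array of L2-normalised bag-of-words histograms.
    bow_query : (K,) L2-normalised histogram of the current frame.
    The most recent `exclude_last` keyframes are skipped so a "loop" is
    not trivially the frame's own neighbourhood."""
    n = len(bow_db)
    scores = bow_db @ bow_query            # cosine similarity (rows normalised)
    candidates = [i for i in range(max(0, n - exclude_last))
                  if scores[i] >= min_score]
    # Rank best-scoring candidates first: geometric loop verification is
    # expensive, so it should be attempted on the strongest matches first.
    return sorted(candidates, key=lambda i: -scores[i])
```

Each returned candidate would then be verified geometrically before the global keyframe pose optimization is triggered.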
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710320077.0A CN107193279A (en) | 2017-05-09 | 2017-05-09 | Robot localization and map structuring system based on monocular vision and IMU information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107193279A true CN107193279A (en) | 2017-09-22 |
Family
ID=59872898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710320077.0A Pending CN107193279A (en) | 2017-05-09 | 2017-05-09 | Robot localization and map structuring system based on monocular vision and IMU information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107193279A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
Non-Patent Citations (2)
Title |
---|
RAÚL MUR-ARTAL et al.: "ORB-SLAM: A Versatile and Accurate Monocular SLAM System", IEEE Transactions on Robotics * |
RAÚL MUR-ARTAL et al.: "Visual-Inertial Monocular SLAM With Map Reuse", IEEE Robotics and Automation Letters * |
Cited By (149)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107677279A (en) * | 2017-09-26 | 2018-02-09 | 上海思岚科技有限公司 | It is a kind of to position the method and system for building figure |
CN107677279B (en) * | 2017-09-26 | 2020-04-24 | 上海思岚科技有限公司 | Method and system for positioning and establishing image |
CN107498559A (en) * | 2017-09-26 | 2017-12-22 | 珠海市微半导体有限公司 | The detection method and chip that the robot of view-based access control model turns to |
US10989540B2 (en) | 2017-09-29 | 2021-04-27 | Goertek Inc. | Binocular vision localization method, device and system |
CN107747941A (en) * | 2017-09-29 | 2018-03-02 | 歌尔股份有限公司 | A kind of binocular visual positioning method, apparatus and system |
WO2019062291A1 (en) * | 2017-09-29 | 2019-04-04 | 歌尔股份有限公司 | Binocular vision positioning method, device, and system |
CN107941217B (en) * | 2017-09-30 | 2020-05-22 | 杭州迦智科技有限公司 | Robot positioning method, electronic equipment, storage medium and device |
CN107941217A (en) * | 2017-09-30 | 2018-04-20 | 杭州迦智科技有限公司 | A kind of robot localization method, electronic equipment, storage medium, device |
CN107741234A (en) * | 2017-10-11 | 2018-02-27 | 深圳勇艺达机器人有限公司 | The offline map structuring and localization method of a kind of view-based access control model |
CN107869989A (en) * | 2017-11-06 | 2018-04-03 | 东北大学 | A kind of localization method and system of the fusion of view-based access control model inertial navigation information |
CN107869989B (en) * | 2017-11-06 | 2020-02-07 | 东北大学 | Positioning method and system based on visual inertial navigation information fusion |
WO2019090833A1 (en) * | 2017-11-10 | 2019-05-16 | 珊口(上海)智能科技有限公司 | Positioning system and method, and robot using same |
US10436590B2 (en) | 2017-11-10 | 2019-10-08 | Ankobot (Shanghai) Smart Technologies Co., Ltd. | Localization system and method, and robot using the same |
CN107909614B (en) * | 2017-11-13 | 2021-02-26 | 中国矿业大学 | Positioning method of inspection robot in GPS failure environment |
CN107909614A (en) * | 2017-11-13 | 2018-04-13 | 中国矿业大学 | Crusing robot localization method under a kind of GPS failures environment |
CN107784671A (en) * | 2017-12-01 | 2018-03-09 | 驭势科技(北京)有限公司 | A kind of method and system positioned immediately for vision with building figure |
CN108007456A (en) * | 2017-12-06 | 2018-05-08 | 深圳市致趣科技有限公司 | A kind of indoor navigation method, apparatus and system |
CN108090954A (en) * | 2017-12-15 | 2018-05-29 | 南方医科大学南方医院 | Abdominal cavity environmental map based on characteristics of image rebuilds the method with laparoscope positioning |
CN108253962A (en) * | 2017-12-18 | 2018-07-06 | 中北智杰科技(北京)有限公司 | New energy pilotless automobile localization method under a kind of low light environment |
CN107860390A (en) * | 2017-12-21 | 2018-03-30 | 河海大学常州校区 | The nonholonomic mobile robot of view-based access control model ROS systems remotely pinpoints auto-navigation method |
CN108534757A (en) * | 2017-12-25 | 2018-09-14 | 达闼科技(北京)有限公司 | A kind of vision map scale detection method and device based on high in the clouds |
CN108534757B (en) * | 2017-12-25 | 2021-01-15 | 达闼科技(北京)有限公司 | Cloud-based visual map scale detection method and device |
CN108364014A (en) * | 2018-01-08 | 2018-08-03 | 东南大学 | A kind of multi-sources Information Fusion Method based on factor graph |
CN108267121A (en) * | 2018-01-24 | 2018-07-10 | 锥能机器人(上海)有限公司 | The vision navigation method and system of more equipment under a kind of variable scene |
CN110119189B (en) * | 2018-02-05 | 2022-06-03 | 浙江商汤科技开发有限公司 | Initialization method, AR control method, device and system of SLAM system |
CN110119189A (en) * | 2018-02-05 | 2019-08-13 | 浙江商汤科技开发有限公司 | The initialization of SLAM system, AR control method, device and system |
CN108364344A (en) * | 2018-02-08 | 2018-08-03 | 重庆邮电大学 | A kind of monocular real-time three-dimensional method for reconstructing based on loopback test |
CN110163909A (en) * | 2018-02-12 | 2019-08-23 | 北京三星通信技术研究有限公司 | For obtaining the method, apparatus and storage medium of equipment pose |
CN108364319B (en) * | 2018-02-12 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Dimension determination method and device, storage medium and equipment |
CN108364319A (en) * | 2018-02-12 | 2018-08-03 | 腾讯科技(深圳)有限公司 | Scale determines method, apparatus, storage medium and equipment |
WO2019157925A1 (en) * | 2018-02-13 | 2019-08-22 | 视辰信息科技(上海)有限公司 | Visual-inertial odometry implementation method and system |
CN108489482A (en) * | 2018-02-13 | 2018-09-04 | 视辰信息科技(上海)有限公司 | The realization method and system of vision inertia odometer |
CN110462683B (en) * | 2018-03-06 | 2022-04-12 | 斯坦德机器人(深圳)有限公司 | Method, terminal and computer readable storage medium for tightly coupling visual SLAM |
CN110462683A (en) * | 2018-03-06 | 2019-11-15 | 斯坦德机器人(深圳)有限公司 | Method, terminal and the computer readable storage medium of close coupling vision SLAM |
CN108665540A (en) * | 2018-03-16 | 2018-10-16 | 浙江工业大学 | Robot localization based on binocular vision feature and IMU information and map structuring system |
CN108537844B (en) * | 2018-03-16 | 2021-11-26 | 上海交通大学 | Visual SLAM loop detection method fusing geometric information |
CN108537844A (en) * | 2018-03-16 | 2018-09-14 | 上海交通大学 | A kind of vision SLAM winding detection methods of fusion geological information |
CN108629793A (en) * | 2018-03-22 | 2018-10-09 | 中国科学院自动化研究所 | The vision inertia odometry and equipment demarcated using line duration |
CN108731700B (en) * | 2018-03-22 | 2020-07-31 | 东南大学 | Weighted Euler pre-integration method in visual inertial odometer |
CN108731700A (en) * | 2018-03-22 | 2018-11-02 | 东南大学 | A kind of weighting Euler's pre-integration method in vision inertia odometer |
CN108981693A (en) * | 2018-03-22 | 2018-12-11 | 东南大学 | VIO fast joint initial method based on monocular camera |
CN108827287B (en) * | 2018-04-10 | 2021-12-21 | 南京航空航天大学 | Robust visual SLAM system in complex environment |
CN108827287A (en) * | 2018-04-10 | 2018-11-16 | 南京航空航天大学 | A kind of robust vision SLAM system under complex environment |
US11205282B2 (en) | 2018-04-27 | 2021-12-21 | Tencent Technology (Shenzhen) Company Limited | Relocalization method and apparatus in camera pose tracking process and storage medium |
CN108615247B (en) * | 2018-04-27 | 2021-09-14 | 深圳市腾讯计算机系统有限公司 | Method, device and equipment for relocating camera attitude tracking process and storage medium |
CN108615247A (en) * | 2018-04-27 | 2018-10-02 | 深圳市腾讯计算机系统有限公司 | Method for relocating, device, equipment and the storage medium of camera posture tracing process |
WO2019205842A1 (en) * | 2018-04-27 | 2019-10-31 | 腾讯科技(深圳)有限公司 | Method and device for repositioning in camera orientation tracking process, and storage medium |
US11189037B2 (en) | 2018-04-27 | 2021-11-30 | Tencent Technology (Shenzhen) Company Limited | Repositioning method and apparatus in camera pose tracking process, device, and storage medium |
CN108597036A (en) * | 2018-05-03 | 2018-09-28 | 三星电子(中国)研发中心 | Reality environment danger sense method and device |
CN108717710B (en) * | 2018-05-18 | 2022-04-22 | 京东方科技集团股份有限公司 | Positioning method, device and system in indoor environment |
CN108717710A (en) * | 2018-05-18 | 2018-10-30 | 京东方科技集团股份有限公司 | Localization method, apparatus and system under indoor environment |
US11295472B2 (en) | 2018-05-18 | 2022-04-05 | Boe Technology Group Co., Ltd. | Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database |
CN108717712B (en) * | 2018-05-29 | 2021-09-03 | 东北大学 | Visual inertial navigation SLAM method based on ground plane hypothesis |
CN108717712A (en) * | 2018-05-29 | 2018-10-30 | 东北大学 | A kind of vision inertial navigation SLAM methods assumed based on ground level |
CN108776976A (en) * | 2018-06-07 | 2018-11-09 | 驭势科技(北京)有限公司 | A kind of while positioning and the method, system and storage medium for building figure |
CN108776976B (en) * | 2018-06-07 | 2020-11-20 | 驭势科技(北京)有限公司 | Method, system and storage medium for simultaneously positioning and establishing image |
JP2019212196A (en) * | 2018-06-08 | 2019-12-12 | 株式会社ミツバ | Controller and control method |
CN109029433A (en) * | 2018-06-28 | 2018-12-18 | 东南大学 | Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing |
CN109029448A (en) * | 2018-06-28 | 2018-12-18 | 东南大学 | The IMU of monocular vision inertial positioning assists trace model |
CN110660098B (en) * | 2018-06-28 | 2022-08-12 | 北京京东叁佰陆拾度电子商务有限公司 | Positioning method and device based on monocular vision |
CN110660098A (en) * | 2018-06-28 | 2020-01-07 | 北京京东尚科信息技术有限公司 | Positioning method and device based on monocular vision |
CN109008909B (en) * | 2018-07-13 | 2024-01-26 | 宜宾学院 | Low-power-consumption capsule endoscope image acquisition and three-dimensional reconstruction system |
CN109008909A (en) * | 2018-07-13 | 2018-12-18 | 宜宾学院 | A kind of low-power consumption capsule endoscope Image Acquisition and three-dimensional reconstruction system |
CN109102525B (en) * | 2018-07-19 | 2021-06-18 | 浙江工业大学 | Mobile robot following control method based on self-adaptive posture estimation |
CN109102525A (en) * | 2018-07-19 | 2018-12-28 | 浙江工业大学 | A kind of mobile robot follow-up control method based on the estimation of adaptive pose |
CN109073390A (en) * | 2018-07-23 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | A kind of localization method and device, electronic equipment and readable storage medium storing program for executing |
CN109074407A (en) * | 2018-07-23 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | Multi-source data mapping method, related device and computer-readable storage medium |
CN109166149A (en) * | 2018-08-13 | 2019-01-08 | 武汉大学 | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU |
CN110855601A (en) * | 2018-08-21 | 2020-02-28 | 华为技术有限公司 | AR/VR scene map acquisition method |
CN110855601B (en) * | 2018-08-21 | 2021-11-19 | 华为技术有限公司 | AR/VR scene map acquisition method |
CN110147705B (en) * | 2018-08-28 | 2021-05-04 | 北京初速度科技有限公司 | Vehicle positioning method based on visual perception and electronic equipment |
CN110147705A (en) * | 2018-08-28 | 2019-08-20 | 北京初速度科技有限公司 | A kind of vehicle positioning method and electronic equipment of view-based access control model perception |
CN108846867A (en) * | 2018-08-29 | 2018-11-20 | 安徽云能天智能科技有限责任公司 | A kind of SLAM system based on more mesh panorama inertial navigations |
CN109307508B (en) * | 2018-08-29 | 2022-04-08 | 中国科学院合肥物质科学研究院 | Panoramic inertial navigation SLAM method based on multiple key frames |
CN109307508A (en) * | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | A kind of panorama inertial navigation SLAM method based on more key frames |
CN109493415A (en) * | 2018-09-20 | 2019-03-19 | 北京大学 | A kind of the global motion initial method and system of aerial images three-dimensional reconstruction |
CN109506642A (en) * | 2018-10-09 | 2019-03-22 | 浙江大学 | A kind of robot polyphaser vision inertia real-time location method and device |
CN109341706A (en) * | 2018-10-17 | 2019-02-15 | 张亮 | A kind of production method of the multiple features fusion map towards pilotless automobile |
CN109341706B (en) * | 2018-10-17 | 2020-07-03 | 张亮 | Method for manufacturing multi-feature fusion map for unmanned vehicle |
CN109520497B (en) * | 2018-10-19 | 2022-09-30 | 天津大学 | Unmanned aerial vehicle autonomous positioning method based on vision and imu |
CN109520497A (en) * | 2018-10-19 | 2019-03-26 | 天津大学 | The unmanned plane autonomic positioning method of view-based access control model and imu |
CN109506624B (en) * | 2018-10-31 | 2021-11-02 | 台州职业技术学院 | Distributed visual positioning system and method based on mobile robot |
CN109506624A (en) * | 2018-10-31 | 2019-03-22 | 台州职业技术学院 | A kind of distributed vision positioning system and method based on mobile robot |
CN110136199A (en) * | 2018-11-13 | 2019-08-16 | 北京初速度科技有限公司 | A kind of vehicle location based on camera, the method and apparatus for building figure |
CN110136199B (en) * | 2018-11-13 | 2022-09-13 | 北京魔门塔科技有限公司 | Camera-based vehicle positioning and mapping method and device |
CN109461208A (en) * | 2018-11-15 | 2019-03-12 | 网易(杭州)网络有限公司 | Three-dimensional map processing method, device, medium and calculating equipment |
CN109739254A (en) * | 2018-11-20 | 2019-05-10 | 国网浙江省电力有限公司信息通信分公司 | Using the unmanned plane and its localization method of visual pattern positioning in a kind of electric inspection process |
CN109739254B (en) * | 2018-11-20 | 2021-11-09 | 国网浙江省电力有限公司信息通信分公司 | Unmanned aerial vehicle adopting visual image positioning in power inspection and positioning method thereof |
CN109671105A (en) * | 2018-12-19 | 2019-04-23 | 青岛小鸟看看科技有限公司 | Tracking recovery method and device for a visual navigation system |
CN109752003A (en) * | 2018-12-26 | 2019-05-14 | 浙江大学 | Robot visual-inertial point-line feature localization method and device |
CN109798889A (en) * | 2018-12-29 | 2019-05-24 | 航天信息股份有限公司 | Optimization method, device, storage medium, and electronic device based on monocular VINS system |
CN109767470A (en) * | 2019-01-07 | 2019-05-17 | 浙江商汤科技开发有限公司 | Tracking system initialization method and terminal equipment |
CN109767470B (en) * | 2019-01-07 | 2021-03-02 | 浙江商汤科技开发有限公司 | Tracking system initialization method and terminal equipment |
CN109764869A (en) * | 2019-01-16 | 2019-05-17 | 中国矿业大学 | Localization and three-dimensional map construction method for an autonomous inspection robot fusing binocular camera and inertial navigation |
CN109798891A (en) * | 2019-01-25 | 2019-05-24 | 上海交通大学 | Inertial Measurement Unit calibration system based on high-precision motion capture system |
WO2020155615A1 (en) * | 2019-01-28 | 2020-08-06 | 速感科技(北京)有限公司 | Vslam method, controller, and mobile device |
CN109859266A (en) * | 2019-01-28 | 2019-06-07 | 西安理工大学 | Pre-transformation-based visual simultaneous localization and mapping method under large viewing-angle change |
CN109859266B (en) * | 2019-01-28 | 2022-11-25 | 西安理工大学 | Pre-transformation-based visual simultaneous localization and mapping method under large viewing-angle change |
WO2020155616A1 (en) * | 2019-01-29 | 2020-08-06 | 浙江省北大信息技术高等研究院 | Digital retina-based photographing device positioning method |
CN109816696A (en) * | 2019-02-01 | 2019-05-28 | 西安全志科技有限公司 | Robot localization and mapping method, computer device, and computer-readable storage medium |
CN110058587A (en) * | 2019-03-18 | 2019-07-26 | 西安科技大学 | SLAM-based inspection unmanned vehicle and autonomous inspection method for coal mine fully mechanized mining faces |
CN111752265A (en) * | 2019-03-26 | 2020-10-09 | 通用汽车环球科技运作有限责任公司 | Hyper-correlation in a context memory |
CN111752265B (en) * | 2019-03-26 | 2024-04-05 | 通用汽车环球科技运作有限责任公司 | Super-association in context memory |
CN110084272A (en) * | 2019-03-26 | 2019-08-02 | 哈尔滨工业大学(深圳) | Cluster map creation method and relocalization method based on cluster map and position description matching |
CN110187306B (en) * | 2019-04-16 | 2021-01-08 | 浙江大学 | TDOA-PDR-MAP fusion positioning method applied to complex indoor space |
CN110119144A (en) * | 2019-04-19 | 2019-08-13 | 苏州大学 | Multi-robot SLAM algorithm based on sub-map feature matching |
CN110119144B (en) * | 2019-04-19 | 2022-04-22 | 苏州大学 | Multi-robot SLAM algorithm based on sub-map feature matching |
CN110084832A (en) * | 2019-04-25 | 2019-08-02 | 亮风台(上海)信息科技有限公司 | Method, device, system, equipment and storage medium for correcting camera pose |
CN110084832B (en) * | 2019-04-25 | 2021-03-23 | 亮风台(上海)信息科技有限公司 | Method, device, system, equipment and storage medium for correcting camera pose |
CN110321902A (en) * | 2019-05-09 | 2019-10-11 | 哈尔滨工业大学 | Indoor automatic visual fingerprint acquisition method based on SOCP |
CN110321902B (en) * | 2019-05-09 | 2021-07-13 | 哈尔滨工业大学 | Indoor automatic visual fingerprint acquisition method based on SOCP |
CN110132306A (en) * | 2019-05-20 | 2019-08-16 | 广州小鹏汽车科技有限公司 | Method and system for correcting vehicle positioning error |
CN110132306B (en) * | 2019-05-20 | 2021-02-19 | 广州小鹏汽车科技有限公司 | Method and system for correcting vehicle positioning error |
CN110345944A (en) * | 2019-05-27 | 2019-10-18 | 浙江工业大学 | Robot localization method fusing visual features and IMU information |
CN110726406A (en) * | 2019-06-03 | 2020-01-24 | 北京建筑大学 | Improved nonlinear optimization monocular inertial navigation SLAM method |
CN110349250B (en) * | 2019-06-28 | 2020-12-22 | 浙江大学 | RGBD camera-based three-dimensional reconstruction method for indoor dynamic scene |
CN110349250A (en) * | 2019-06-28 | 2019-10-18 | 浙江大学 | RGBD camera-based three-dimensional reconstruction method for indoor dynamic scenes |
CN110490900A (en) * | 2019-07-12 | 2019-11-22 | 中国科学技术大学 | Binocular visual positioning method and system under dynamic environment |
CN110490900B (en) * | 2019-07-12 | 2022-04-19 | 中国科学技术大学 | Binocular vision positioning method and system under dynamic environment |
CN110319772A (en) * | 2019-07-12 | 2019-10-11 | 上海电力大学 | Long-span visual distance measurement method based on unmanned aerial vehicle |
CN110363821A (en) * | 2019-07-12 | 2019-10-22 | 顺丰科技有限公司 | Method, device, camera, and storage medium for acquiring the installation deviation angle of a monocular camera |
CN110473258A (en) * | 2019-07-24 | 2019-11-19 | 西北工业大学 | Monocular SLAM system initialization algorithm based on point-line unified framework |
CN110473258B (en) * | 2019-07-24 | 2022-05-13 | 西北工业大学 | Monocular SLAM system initialization algorithm based on point-line unified framework |
CN110617814A (en) * | 2019-09-26 | 2019-12-27 | 中国科学院电子学研究所 | Monocular vision and inertial sensor integrated remote distance measuring system and method |
CN110617813A (en) * | 2019-09-26 | 2019-12-27 | 中国科学院电子学研究所 | Monocular visual information and IMU (inertial measurement Unit) information fused scale estimation system and method |
CN110726413A (en) * | 2019-10-25 | 2020-01-24 | 中国人民解放军国防科技大学 | Multi-sensor fusion and data management mechanism for large-scale SLAM |
CN110726413B (en) * | 2019-10-25 | 2021-03-23 | 中国人民解放军国防科技大学 | Multi-sensor fusion and data management method for large-scale SLAM |
CN110827415A (en) * | 2019-11-11 | 2020-02-21 | 吉林大学 | All-weather unknown environment unmanned autonomous working platform |
CN110827415B (en) * | 2019-11-11 | 2022-08-23 | 吉林大学 | All-weather unknown environment unmanned autonomous working platform |
CN110928311A (en) * | 2019-12-16 | 2020-03-27 | 哈尔滨工业大学 | Indoor mobile robot navigation method based on linear features under panoramic camera |
CN111141295A (en) * | 2019-12-20 | 2020-05-12 | 南京航空航天大学 | Automatic map recovery method based on monocular ORB-SLAM |
CN111144406A (en) * | 2019-12-22 | 2020-05-12 | 复旦大学 | Self-adaptive target ROI (region of interest) positioning method of solar panel cleaning robot |
CN111044039A (en) * | 2019-12-25 | 2020-04-21 | 中航华东光电有限公司 | Monocular target area self-adaptive high-precision distance measuring device and method based on IMU |
CN111044039B (en) * | 2019-12-25 | 2024-03-19 | 中航华东光电有限公司 | Monocular target area self-adaptive high-precision distance measurement device and method based on IMU |
CN113124854A (en) * | 2019-12-31 | 2021-07-16 | 杭州海康机器人技术有限公司 | Visual positioning method, map construction method and map construction device |
CN112016612A (en) * | 2020-08-26 | 2020-12-01 | 四川阿泰因机器人智能装备有限公司 | Monocular depth estimation-based multi-sensor fusion SLAM method |
CN112083726A (en) * | 2020-09-04 | 2020-12-15 | 湖南大学 | Dual-filter fusion positioning system for automatic driving in park areas |
CN112129287A (en) * | 2020-09-24 | 2020-12-25 | 北京华捷艾米科技有限公司 | Processing method and related device based on visual-inertial odometry |
CN112288812A (en) * | 2020-10-30 | 2021-01-29 | 西安工程大学 | Mobile robot real-time positioning method based on visual features |
CN113076988B (en) * | 2021-03-25 | 2022-06-03 | 重庆邮电大学 | Mobile robot vision SLAM key frame self-adaptive screening method based on neural network |
CN113076988A (en) * | 2021-03-25 | 2021-07-06 | 重庆邮电大学 | Mobile robot vision SLAM key frame self-adaptive screening method based on neural network |
CN113077515B (en) * | 2021-06-07 | 2021-09-21 | 之江实验室 | Tight coupling initialization method for underwater vision inertial navigation pressure positioning |
CN113077515A (en) * | 2021-06-07 | 2021-07-06 | 之江实验室 | Tight coupling initialization method for underwater vision inertial navigation pressure positioning |
CN114494825A (en) * | 2021-12-31 | 2022-05-13 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device |
CN114494825B (en) * | 2021-12-31 | 2024-04-19 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device |
CN115265560A (en) * | 2022-07-28 | 2022-11-01 | 重庆长安汽车股份有限公司 | High-precision positioning method, system and storage medium based on visual SLAM |
CN115265523A (en) * | 2022-09-27 | 2022-11-01 | 泉州装备制造研究所 | Robot simultaneous positioning and mapping method, device and readable medium |
CN115265523B (en) * | 2022-09-27 | 2023-01-03 | 泉州装备制造研究所 | Robot simultaneous positioning and mapping method, device and readable medium |
CN117589154A (en) * | 2024-01-19 | 2024-02-23 | 深圳竹芒科技有限公司 | Relocalization method for a self-moving device, self-moving device, and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107193279A (en) | Robot localization and map structuring system based on monocular vision and IMU information | |
CN109029433B (en) | Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform | |
CN109307508B (en) | Panoramic inertial navigation SLAM method based on multiple key frames | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
CN108665540A (en) | Robot localization and mapping system based on binocular visual features and IMU information | |
CN106679648B (en) | Visual-inertial combined SLAM method based on genetic algorithm | |
CN112649016B (en) | Visual-inertial odometry method based on point-line initialization | |
CN108051002A (en) | Transport vehicle spatial localization method and system based on inertial-measurement-aided vision | |
CN112734841B (en) | Positioning method using a wheel odometer, IMU, and monocular camera | |
CN110345944A (en) | Robot localization method fusing visual features and IMU information | |
CN111156984A (en) | Monocular visual-inertial SLAM method for dynamic scenes | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN114001733B (en) | Map-based consistent efficient visual inertial positioning algorithm | |
CN106574836A (en) | A method for localizing a robot in a localization plane | |
CN103994765A (en) | Positioning method of inertial sensor | |
CN109781092A (en) | Mobile robot localization and mapping method in hazardous chemical accidents | |
CN112767546B (en) | Binocular image-based visual map generation method for mobile robot | |
CN116205947A (en) | Binocular-inertial fusion pose estimation method based on camera motion state, electronic equipment and storage medium | |
CN114234967B (en) | Six-foot robot positioning method based on multi-sensor fusion | |
CN112731503B (en) | Pose estimation method and system based on front-end tight coupling | |
Panahandeh et al. | Vision-aided inertial navigation using planar terrain features | |
US10977810B2 (en) | Camera motion estimation | |
So et al. | Visual odometry for a hopping rover on an asteroid surface using multiple monocular cameras | |
CN111145267A (en) | IMU (inertial measurement unit) assistance-based 360-degree panoramic view multi-camera calibration method | |
Ye et al. | Robust and efficient vehicles motion estimation with low-cost multi-camera and odometer-gyroscope |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170922