CN104732518B - A kind of PTAM improved methods based on intelligent robot terrain surface specifications - Google Patents
Legal status: Expired - Fee Related
Abstract
An improved PTAM method based on ground features for an intelligent robot. First, parameter correction is completed, including parameter definition and camera calibration. Then the current environment texture information is acquired with the camera, a four-level Gaussian image pyramid is built, feature information in the current image is extracted with the FAST corner detection algorithm, and data association between corner features is established to obtain a pose estimation model. In the initial mapping stage, two key frames are acquired with the camera mounted on the mobile robot. During initialization, the mobile robot begins to move while the camera captures corner information in the current scene and establishes associations. After the sparse 3D map is initialized, key frames are updated, sub-pixel-accuracy feature correspondences are established by epipolar search and block matching, and accurate camera relocalization is achieved with the pose estimation model. Finally, the matched points are projected into space to complete creation of the current global 3D environment map.
Description
Technical field
The invention belongs to the field of robot vision and relates to an improved PTAM algorithm based on ground features.
Background technology
As the relationship between robots and humans becomes ever closer, intelligent-robot technology has received great attention. Simultaneous Localization and Mapping (SLAM) is one of the most mainstream localization technologies for current intelligent mobile robots. It is essentially an estimation problem: using the data obtained by internal and external sensors, the position of the mobile robot at a given moment is computed while the map model it relies on is built. Vision-based SLAM belongs to the research category of visual measurement, because vision sensors possess unique advantages: small size, light weight, low cost, easy installation, and very rich extracted external information. These advantages have further promoted current research on and application of vision-based SLAM. Application No. 201310678018.2 provides a method using SURF as the feature detection operator for monocular visual SLAM. It innovates and improves on three aspects, namely interest-point detection, SURF descriptor generation, and SURF point matching, and proposes a region-growing SURF feature matching method to accomplish simultaneous robot localization and mapping, so that in SURF descriptor matching each descriptor is compared only with its most probable match, significantly reducing the number of comparisons in monocular visual SLAM and improving matching speed. However, this method still cannot build a metric map or provide three-dimensional feature information, and its robustness and real-time performance need further improvement.
Structure from Motion (SFM) is a technique that simultaneously solves for the camera motion trajectory and the scene structure model. As a representative SFM method, G. Klein et al. first proposed the concept of PTAM (Parallel Tracking and Mapping) in the augmented reality (AR) field in 2007 to solve the real-time environment perception problem. However, the unmodified PTAM algorithm likewise cannot build a metric map and places strict restrictions on camera motion.
Summary of the invention
To address the above problems, the present invention proposes an improved PTAM algorithm based on ground features. With ground features added, PTAM simultaneously estimates the current pose of the monocular camera and the 3D scene points. While feature points are detected and updated with the FAST corner method, optimal local and global bundle adjustment continuously refines the camera pose and the 3D feature points, achieving high-accuracy localization and map creation. The invention effectively solves the problem that a metric map cannot be created and greatly improves real-time performance and 3D feature extraction; it is of great significance for solving related problems in robot navigation, localization, and 3D reconstruction.
The present invention adopts the following technical scheme:
First, parameter correction is completed, including parameter definition and camera calibration. Then features in the key frames are tracked, the pose is initialized, and the plane parameters are estimated on this basis. Next, the current environment texture information is acquired with the camera, a four-level Gaussian image pyramid is built, feature information in the current image is extracted with the FAST corner detection algorithm, and data association between corner features is established to obtain the pose estimation model. In the initial mapping stage, two key frames are acquired with the camera mounted on the mobile robot; initialization begins after the camera and the ground have been calibrated. During initialization, the mobile robot begins to move while the camera captures corner information in the current scene and establishes associations; an initial estimate of the current pose is obtained from the pose estimation model, and an accurate estimate is then obtained with the ground-feature-based pose estimation method. After the sparse 3D map is initialized, key frames are updated, sub-pixel-accuracy feature correspondences are established by epipolar search and block matching, and accurate camera relocalization is achieved with the pose estimation model. Finally, the matched points are projected into space to complete creation of the current global 3D environment map. Meanwhile, global and local bundle adjustment is used to enforce map consistency.
The method specifically includes the following steps:
Step 1, parameter correction
Step 1.1, parameter definition
(1) robot pose
A world coordinate system is established at the mobile robot's starting point. During robot motion, the robot's pose can be represented as:

$$T_{r}=\begin{pmatrix}R_{r}&t_{r}\\0&1\end{pmatrix},\quad R_{r}=\begin{pmatrix}\cos\theta&\sin\theta&0\\-\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix},\quad t_{r}=\begin{pmatrix}x\\y\\0\end{pmatrix}\qquad(1)$$

From the above formula, the mobile robot pose can be described by x = (x, y, θ), where x and y are the translations of the mobile robot along the x- and y-axis directions, and θ is the rotation angle of the mobile robot about the z-axis.
In the world coordinate system, the pose of the camera can be represented as:

$$T_{c}=\begin{pmatrix}R_{c}&t_{c}\\0&1\end{pmatrix}\qquad(2)$$

where T_c ∈ SE(3), SE(3) is the special rigid transformation group, R_c is a 3 × 3 rotation matrix, and t_c is a 3 × 1 translation vector. This pose matrix establishes the correspondence between a point p_w in the world coordinate system and p_c in the camera coordinate system, i.e. p_c = T_c p_w. As shown in Fig. 2, in this work the camera is fixed on the mobile robot, with a fixed rotation-translation relation T_cr between the camera and the robot. During robot motion, if the robot pose increment within a time interval is T_rs, the corresponding camera pose increment is T_cs, then:

T_rs = T_rc T_cs T_cr  (3)

where T_rc = T_cr^{-1}.
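As a numerical check on the hand-eye relation in formula (3), the composition can be sketched as below; the transform values are illustrative, not calibration results:

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Fixed camera-robot transform T_cr (illustrative values).
T_cr = se3(rot_z(0.1), np.array([0.2, 0.0, 0.5]))
T_rc = np.linalg.inv(T_cr)

# A camera pose increment T_cs observed over one interval.
T_cs = se3(rot_z(0.05), np.array([0.0, 0.0, 0.1]))

# Formula (3): the corresponding robot pose increment.
T_rs = T_rc @ T_cs @ T_cr
```

By construction T_cr T_rs = T_cs T_cr, i.e. moving in the robot frame and then mapping to the camera frame agrees with mapping first and then moving in the camera frame.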
(2) Ground plane calibration parameters
The camera is fixed on the mobile robot, and a calibration target is laid flat on the ground. The pose relation between the camera and the robot is established from images of the calibration target acquired by the camera. T_cp ∈ SE(3) is the transformation matrix between the camera and the calibration target; R_cp and t_cp denote the rotation and translation components of T_cp, and its inverse is T_pc = T_cp^{-1}. It is assumed that the robot's coordinate origin coincides with the camera's and that the robot's x-axis and the camera's z-axis point in the same direction. In the target coordinate system, the x-direction vector of the robot can be represented by the first two components of the third row vector of T_cp. The transformation between the robot coordinate system and the target coordinate system can then be expressed as:

$$p_{r}=T_{rp}p_{p},\quad T_{rp}=\begin{pmatrix}\cos\gamma&\sin\gamma&0&x_{r}\\-\sin\gamma&\cos\gamma&0&y_{r}\\0&0&1&0\\0&0&0&1\end{pmatrix}\qquad(4)$$

where γ is the rotation angle between the robot and target coordinate systems, and (x_r, y_r) are the coordinates of the robot origin in the target coordinate system, determined from the translation vector between camera and target. The transformation between robot and camera can therefore be expressed as T_cr = T_cp T_pr, T_rc = T_cr^{-1}. From the pose relation between the camera and the target plane, the ground plane equation in the camera coordinate system is obtained:

(0, 0, 1, 0) T_pc p_c = 0  (5)

where p_c is a 3D point in the camera coordinate system; the ground plane parameters can be written (n^T, d), with n = (0, 0, 1) R_pc and d = t_p(3).
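The plane parameters in formula (5) can be read directly off the inverse calibration transform; a minimal numpy sketch with an assumed calibration pose (the target z = 0 plane is taken to coincide with the ground):

```python
import numpy as np

def ground_plane_from_target(T_cp):
    """Recover ground-plane parameters (n, d) in the camera frame from the
    camera-to-target transform T_cp, per formula (5): the third row of
    T_pc = inv(T_cp) gives the plane normal and offset."""
    T_pc = np.linalg.inv(T_cp)
    n = T_pc[2, :3]   # normal: third row of R_pc
    d = T_pc[2, 3]    # offset: third component of the inverse translation
    return n, d

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Illustrative calibration pose: camera 1.2 m from the target, tilted slightly.
T_cp = np.eye(4)
T_cp[:3, :3] = rot_x(0.2)
T_cp[:3, 3] = np.array([0.0, 0.0, 1.2])

n, d = ground_plane_from_target(T_cp)
# Any point on the target plane (z_p = 0) should satisfy n . p_c + d = 0.
p_p = np.array([0.3, -0.1, 0.0, 1.0])   # point on the target plane
p_c = T_cp @ p_p                        # same point in the camera frame
residual = n @ p_c[:3] + d
```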
Step 1.2, camera calibration
Because lenses inevitably have distortion from manufacture, distortion is mainly divided into radial distortion caused by the lens shape and tangential distortion caused by camera assembly. Here the FOV model is used to correct the monocular camera; the calibration model is shown in Fig. 3. The model is a de-distortion method for wide-angle cameras proposed by Devernay and Faugeras, mathematically described as follows:

$$u_{d}=\begin{bmatrix}u_{0}\\v_{0}\end{bmatrix}+\begin{bmatrix}f_{u}&0\\0&f_{v}\end{bmatrix}\frac{r_{d}}{r_{u}}x_{u},\qquad r_{d}=\frac{1}{\omega}\arctan\!\left(2r_{u}\tan\frac{\omega}{2}\right),\qquad r_{u}=\frac{\tan\left(r_{d}\omega\right)}{2\tan\frac{\omega}{2}}\qquad(6)$$

where x_u is the normalized pixel coordinate, u_d is the distorted pixel coordinate, ω is the distortion coefficient, and r_d, r_u are the distances of the normalized coordinates before and after correction from the coordinate-system center.

Using the above camera model, image pixel coordinates are mapped to the normalized coordinate plane and combined with the camera intrinsic matrix K to realize image distortion correction, i.e.:

u = K x_u  (7)
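The FOV correction above can be sketched per point as follows. The intrinsics and distortion coefficient are assumed illustrative values, and `undistort_point`/`distort_point` are hypothetical helper names, not routines from the patent:

```python
import numpy as np

# Illustrative intrinsics K and FOV distortion coefficient omega (assumed values).
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
omega = 0.9

def undistort_point(u_d, K, omega):
    """Map a distorted pixel u_d to its undistorted pixel via the FOV model:
    r_u = tan(r_d * omega) / (2 * tan(omega / 2)), then u = K x_u (formula (7))."""
    fu, fv, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x_d = np.array([(u_d[0] - u0) / fu, (u_d[1] - v0) / fv])  # normalized, distorted
    r_d = np.linalg.norm(x_d)
    if r_d < 1e-12:
        x_u = x_d
    else:
        r_u = np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))
        x_u = x_d * (r_u / r_d)
    return np.array([fu * x_u[0] + u0, fv * x_u[1] + v0])

def distort_point(u, K, omega):
    """Inverse mapping: r_d = (1/omega) * arctan(2 * r_u * tan(omega / 2))."""
    fu, fv, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x_u = np.array([(u[0] - u0) / fu, (u[1] - v0) / fv])
    r_u = np.linalg.norm(x_u)
    if r_u < 1e-12:
        x_d = x_u
    else:
        r_d = np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega
        x_d = x_u * (r_d / r_u)
    return np.array([fu * x_d[0] + u0, fv * x_d[1] + v0])
```

The two radial mappings are exact inverses of each other, so a round trip returns the original pixel.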
Step 2, initialization based on ground features
Step 2.1, feature tracking
FAST corner detection is a simple and fast corner detection algorithm proposed by Edward Rosten and Tom Drummond. The algorithm decides whether a pixel is a corner from the gray-value differences between the pixel and its neighbors in a neighborhood (usually a circular region), as in the following formula:

$$N=\sum_{s\in c(u_{f})}\left(\left|I(s)-I(u_{f})\right|>\varepsilon_{d}\right)\qquad(8)$$

where u_f is the candidate corner, s is any point in the candidate corner's neighborhood, c(u_f) denotes the neighborhood of u_f, and I(x) is the image gray value at x; a candidate point whose count N exceeds the threshold condition is taken as a corner.

Before the camera tracking thread runs, the system acquires the current environment texture information with the camera and builds a four-level Gaussian image pyramid. Using the FAST corner detection algorithm, data association between corner features is established by block search; the projection relation is used to predict the correspondence between 3D points and feature points in the current frame, and a fixed region near the predicted position is scanned to obtain an accurate matching relationship.
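The corner criterion above can be sketched as a simplified segment test (in the spirit of FAST-9: we merely count differing circle pixels rather than requiring them to be contiguous; the thresholds are illustrative):

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_corner(img, y, x, eps=20, n_min=9):
    """Count circle pixels whose gray difference from the candidate exceeds eps;
    accept the candidate when the count N reaches n_min. (The full detector
    also requires the differing pixels to be contiguous; omitted for brevity.)"""
    center = int(img[y, x])
    N = sum(1 for dy, dx in CIRCLE
            if abs(int(img[y + dy, x + dx]) - center) > eps)
    return N >= n_min

# Toy image: the corner of a bright square responds; its interior does not.
img = np.zeros((20, 20), dtype=np.uint8)
img[8:, 8:] = 255
corner_at_edge = is_corner(img, 8, 8)   # corner of the bright square
flat_region = is_corner(img, 15, 15)    # interior: no response
```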
Step 2.2, pose initialization
During robot motion, ground information is acquired by the camera, and corner features are tracked with the FAST algorithm. From the data association above, the homography relation of planar points can be established, i.e.:

π(H u_r) = u_p  (9)

where π(p) is the projection function, p = (x, y, z), π(p) = (x/z, y/z, 1), and u_r, u_p are corresponding feature points in the reference frame and the current frame respectively. From the camera transformation T_c and the ground plane equation parameters (n^T, d), the homography of planar points can be established:

H = K T_c (I | n_d)^T K^{-1}  (10)

where n_d = n/d and K is the camera intrinsic matrix. Substituting formula (3) into formula (10), the homography relation between the robot pose and ground features can now be described as:

H = K T_cr T_rs T_rc (I | n_d)^T K^{-1}  (11)
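Formula (10) can be checked on synthetic data: for points on the ground plane, the homography built from the pose and plane parameters reproduces the projection in the second view. The setup below uses assumed intrinsics and the plane convention n·p = d, under which (I | n_d)^T expands to R + t n_d^T:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Planar homography between two views of the plane n.p = d (reference
    camera frame), in the form of formula (10): H = K (R + t n_d^T) K^-1
    with n_d = n / d. The sign follows the assumed plane convention."""
    n_d = np.asarray(n, dtype=float) / d
    return K @ (np.asarray(R) + np.outer(t, n_d)) @ np.linalg.inv(K)

def project(p):
    """Projection function pi(p) = (x/z, y/z, 1)."""
    return np.array([p[0] / p[2], p[1] / p[2], 1.0])

# Illustrative setup: ground plane z = 2 in the reference camera frame.
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
n, d = np.array([0.0, 0.0, 1.0]), 2.0
th = 0.05
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.1, 0.0, 0.0])

H = plane_homography(K, R, t, n, d)

# A plane point seen in both views: pi(H u_r) should equal u_p (formula (9)).
p_w = np.array([0.4, -0.3, 2.0])
u_r = K @ project(p_w)             # pixel in the reference frame
u_p = K @ project(R @ p_w + t)     # pixel in the current frame
u_pred = H @ u_r
u_pred = u_pred / u_pred[2]
```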
Using the above homography relation of image feature points, the pose estimation model can be established:

$$\hat{x}=\arg\min_{x}\sum\left\|r(x)\right\|^{2}\qquad(12)$$

where r(x) = π(H(x)u_ref) − u_p is the homography relationship error function, i.e. the projection error function, and H(x) is the homography matrix. Under the small-displacement assumption, the first-order Taylor expansion of the error function r is expressed as:

$$r(x)\approx r(0)+Jx,\qquad J=\left.\frac{\partial r}{\partial x}\right|_{x=0}\qquad(13)$$

Now the extremum condition of the energy function is:

$$\frac{\partial}{\partial x}\sum\left\|r(0)+Jx\right\|^{2}=0\qquad(14)$$

i.e.:

$$\sum J^{T}Jx=-\sum J^{T}r(0)\qquad(15)$$

The pose of the mobile robot can be obtained by iteratively solving the above formula. To further improve localization accuracy, robustness weights are introduced here to build a pose estimation model based on M-estimation; the energy function is now expressed as:

$$\hat{x}=\arg\min_{x}\sum w(x)\left\|r(x)\right\|^{2}\qquad(16)$$

where w(x) is the robust Tukey weighting function, and the extremum condition of the energy function becomes:

$$\sum w\,J^{T}Jx=-\sum w\,J^{T}r(0)\qquad(17)$$
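The robust iteration above is ordinary iteratively-reweighted least squares. The sketch below applies Tukey weights to a toy linear residual rather than the patent's homography error, purely to illustrate the M-estimation step; all data and thresholds are illustrative:

```python
import numpy as np

def tukey_weight(r, c):
    """Tukey biweight: w(r) = (1 - (r/c)^2)^2 for |r| <= c, else 0."""
    w = np.zeros_like(r)
    inside = np.abs(r) <= c
    w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
    return w

def irls(J, y, c=1.0, iters=10):
    """Iteratively reweighted least squares: repeatedly solve the weighted
    normal equations (J^T W J) x = J^T W y with Tukey weights W recomputed
    from the current residuals r(x) = J x - y."""
    x = np.linalg.lstsq(J, y, rcond=None)[0]   # unweighted start
    for _ in range(iters):
        r = J @ x - y
        w = tukey_weight(r, c)
        JW = J * w[:, None]
        x = np.linalg.solve(JW.T @ J, JW.T @ y)
    return x

# Toy data: a line with gross outliers that plain least squares would chase.
rng = np.random.default_rng(0)
a_true, b_true = 2.0, -1.0
xs = np.linspace(0, 1, 50)
ys = a_true * xs + b_true + 0.01 * rng.standard_normal(50)
ys[::10] += 5.0                               # inject outliers
J = np.column_stack([xs, np.ones_like(xs)])
a_hat, b_hat = irls(J, ys, c=1.0)
```

The outliers receive weight 0 after the first reweighting, so the fit converges to the inlier line.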
Step 2.3, plane parameter estimation
The robot localizes itself with the ground-feature homography relation, but the influence of non-ground features makes it difficult to guarantee localization accuracy, so the present invention uses a fast non-ground-feature removal method. A normal estimation model can be built by the cross product of parallel vectors; experiments show, however, that corner detection error prevents the ideal cross-product condition from being met, so that model fits poorly. The projection error r can be used as the criterion to identify non-ground feature information, but as the distance between a non-ground feature and the observation point decreases, the discrimination of that method drops markedly. In contrast, the present invention derives the normal estimation model directly from the homography matrix on the basis of the initial pose estimate, and decides whether a corner is a ground feature from the difference between the estimate and the ground plane equation parameters. From the planar homography relation, the estimation model for the plane parameter n_d can be established:

$$\hat{n}_{d}=\arg\min_{n_{d}}\sum\left\|r(n_{d})\right\|^{2}\qquad(18)$$

where r(n_d) = π(H(n_d)u_ref) − u_p is the homography projection error function. Similar to the pose estimation solution, under the small-displacement assumption the solving model of formula (18) can be set up from the first-order Taylor formula of the error function. The plane parameters are then obtained by iteratively solving (18):

$$\sum J^{T}J\,n_{d}=-\sum J^{T}r(0)\qquad(19)$$

where J = ∂r/∂n_d evaluated at the current estimate.
Step 3, pose estimation
The main function of the pose estimation module is to update the camera pose in real time. Its specific implementation is as follows. After initialization is completed, the pose estimation model is established from the projection error; its mathematical description is:

$$\hat{\mu}=\arg\min_{\mu}\sum_{j}\mathrm{Obj}\!\left(\frac{\left|e_{j}\right|}{\sigma_{T}}\right)\qquad(20)$$

where e_j is the projection error, Obj(·) is the Tukey biweight objective function, σ_T is the unbiased estimate of the point-matching standard deviation, and μ is the six-element description vector of the camera pose.

For failure cases during tracking, the feature observation ratio is proposed to assess the tracking quality of every frame. If the ratio falls below a set threshold, tracking is considered poor; tracking continues, but the system will not add that frame to the map. If the feature observation ratio stays below the threshold over several consecutive frames, tracking is considered "lost", and the tracking process is re-initialized. Through the above steps, accurate pose estimation is realized and tracking proceeds normally.
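The frame-quality bookkeeping described above can be sketched as follows; the ratio threshold and streak length are illustrative, not values from the patent:

```python
def assess_tracking(observed, attempted, ratio_min=0.3, max_bad_frames=5,
                    bad_streak=0):
    """Tracking-quality bookkeeping: the feature observation ratio of a frame
    decides whether the frame may update the map, and a long streak of bad
    frames declares tracking lost (triggering re-initialization)."""
    ratio = observed / attempted if attempted else 0.0
    if ratio >= ratio_min:
        return "good", 0           # frame may be added to the map
    bad_streak += 1
    if bad_streak >= max_bad_frames:
        return "lost", bad_streak  # re-initialize tracking
    return "poor", bad_streak      # keep tracking, do not update the map

status, streak = assess_tracking(observed=42, attempted=60)
```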
Step 4, map creation
Step 4.1, map initialization
The system builds the initial map of the current environment using ground features and the standard stereo camera algorithm model. During map initialization, FAST corner correspondences between images are combined with the RANSAC algorithm and the ground-feature-based initial pose estimation method; the three-dimensional coordinates of the current feature points are computed by triangulation, and the starting point is taken as the origin of the global world coordinate system, completing the initialization of the sparse 3D map.
In the initial mapping stage, the specific process of acquiring the two key frames is as follows. The camera is mounted on the mobile robot, and initialization begins after the camera and the ground have been calibrated. During initialization, the mobile robot begins to move while the camera captures corner information in the current scene and establishes associations, and an initial estimate of the current pose is obtained by solving formula (12). When this initial estimate exceeds a set threshold d_0, the accurate estimate of the current pose is obtained with the ground-feature-based pose estimation method of Step 2.2.
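The triangulation step can be sketched with the standard DLT linear method for two views (a stand-in for the patent's exact routine; the intrinsics and poses are assumed):

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """DLT linear triangulation of one point from two views.
    P1, P2 are 3x4 projection matrices; u1, u2 are pixel coordinates.
    The homogeneous solution is the null vector of the stacked constraints."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative two-view setup: identity reference camera and a translated camera.
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 2.5])
u1 = P1 @ np.append(X_true, 1.0); u1 = u1[:2] / u1[2]
u2 = P2 @ np.append(X_true, 1.0); u2 = u2[:2] / u2[2]
X_hat = triangulate(P1, P2, u1, u2)
```

With noise-free correspondences the linear solution recovers the 3D point exactly.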
Step 4.2, key frame update and epipolar search
During map initialization the map contains only two key frames and can describe only a small nearby volume; when the camera leaves the initial position, new key frames and map features must be added. A key frame is added automatically when the number of frames between the camera and the current key frame exceeds a threshold condition, camera tracking quality is as good as possible, and the distance between the camera and the nearest map key point is small. Because of the real-time constraint, the tracking system may estimate only a subset of the visible features in a frame, so the mapping thread needs to re-plan and measure the remaining map features.
When a key frame is updated, the system first applies the Shi-Tomasi evaluation to all corners in the newly added key frame to obtain the current corners with salient features; salient points near successfully observed feature points are discarded, and the retained salient points become candidate map nodes. Because new map nodes require known depth information, which cannot be obtained from a single frame, the nearest key frame is chosen as the second view, and sub-pixel-accuracy feature correspondences are established by epipolar search and block matching. In the second view, the zero-mean SSD method is used, and accurate camera relocalization is achieved with the pose estimation model. Finally, the matched points are projected into space to generate the current global 3D environment map.
Step 4.3, optimization
Global consistency of the current map is optimized with optimal local and global LM (Levenberg-Marquardt) bundle adjustment to improve robustness. The mathematical description of the bundle adjustment algorithm is:

$$\{\hat{\mu}_{i},\hat{p}_{j}\}=\arg\min\sum_{i}\sum_{j}\mathrm{Obj}\!\left(\frac{\left|e_{ji}\right|}{\sigma_{ji}}\right)\qquad(21)$$

where σ_ji is the unbiased estimate of the FAST feature-point matching standard deviation in the i-th key frame, μ_i denotes the six-element pose description vector of the i-th key frame, and p_j is a point in the global map.

Continuous bundle adjustment drives the cost function in (21) to a stable minimum. Global bundle adjustment adjusts the poses of all key frames and the positions of all map points, and exploits the inherent sparsity of the structure-from-motion problem so that the complexity of the global matrix factorization drops by a cubic order, from O((N+M)^3) to O(N^3); but the outer-product terms must be regenerated for most new estimates, and as the map grows this computation increases dramatically.

To address this, local bundle adjustment is introduced, in which only a subset of the key frame poses is adjusted; the objective function becomes:

$$\{\hat{\mu}_{i}\}_{i\in X}=\arg\min\sum_{i\in X\cup Y}\sum_{j\in Z}\mathrm{Obj}\!\left(\frac{\left|e_{ji}\right|}{\sigma_{ji}}\right)\qquad(22)$$

X is the set of key frames to be adjusted, Y is the fixed-frame set, and Z is the map-point subset. X consists of the newest key frame in the map and the 4 closest key frames; the points visible in any of X's key frames constitute set Z; and Y contains all key frames from which measurement data of points in Z were acquired. Local bundle adjustment thus optimizes the poses of the newest and nearest key frames and updates all the points involved, while complexity is significantly reduced to no more than O(NM).
Compared with the original PTAM algorithm, the present invention adds ground features and makes improvements, realizing the following advantages:
(1) The initialization process is improved, effectively relieving the strict restriction that the PTAM algorithm places on camera motion.
(2) A relatively accurate metric map can be effectively established, providing a reliable guarantee for further environment measurement.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the camera calibration model;
Fig. 3 is the plane parameter calibration schematic diagram.
Embodiment
The present invention is further elaborated below with reference to the accompanying drawings.
The flow chart of the improved PTAM algorithm based on ground features is shown in Fig. 1; the method specifically includes the following steps:
Step 1, parameter correction
Step 1.1, parameter definition
The robot pose representation is built from the relation between the robot coordinate system and the world coordinate system, and the ground plane calibration parameters are determined from the pose relation between the camera and the target plane.
Step 1.2, camera calibration
The FOV model is used to correct the monocular camera; image pixel coordinates are mapped to the normalized coordinate plane and combined with the camera intrinsic matrix K to realize image distortion correction.
Step 2, the initialization based on terrain surface specifications
Step 2.1, signature tracking
Environment texture information is acquired, features are extracted, and corner feature associations are established.
Step 2.2, pose is initialized
The homography relation and the pose initialization estimation model are established to realize pose initialization.
Step 2.3, plane parameter is estimated
Step 3, pose is estimated
On the basis of the completed initialization, the initial pose of the camera is estimated according to the ground-feature-based pose estimation model. Further, from the camera's initialized pose, PTAM uses the epipolar search method to establish sub-pixel-accuracy corner feature matches in the image pyramid, substitutes them back into the pose estimation model, and realizes accurate camera relocalization.
Step 4, map building
Step 4.1, map initialization
The system builds the initial map of the current environment using the standard stereo camera algorithm model.
Step 4.2, key frame updates searches for polar curve
New key frames are added, sub-pixel-accuracy feature correspondences are established by epipolar search and block matching, and accurate camera relocalization is achieved with the pose estimation model. Finally, the matched points are projected into space to complete creation of the current global 3D environment map.
Step 4.3, optimize
Consistency optimization of the map is realized with global and local bundle adjustment.
Claims (1)
1. An improved PTAM method based on intelligent-robot ground features, characterized in that:
first, parameter correction is completed, including parameter definition and camera calibration; then features in the key frames are tracked, the pose is initialized, and the plane parameters are estimated on this basis; next, the current environment texture information is acquired with the camera, a four-level Gaussian image pyramid is built, feature information in the current image is extracted with the FAST corner detection algorithm, and data association between corner features is established to obtain the pose estimation model; in the initial mapping stage, two key frames are acquired with the camera mounted on the mobile robot, and initialization begins after the camera and the ground have been calibrated; during initialization, the mobile robot begins to move while the camera captures corner information in the current scene and establishes associations; an initial estimate of the current pose is obtained from the pose estimation model, and an accurate estimate of the current pose is then obtained with the ground-feature-based pose estimation method; after the sparse 3D map is initialized, key frames are updated, sub-pixel-accuracy feature correspondences are established by epipolar search and block matching, and accurate camera relocalization is achieved with the pose estimation model; finally, the matched points are projected into space to complete creation of the current global 3D environment map; meanwhile, global and local bundle adjustment is used to enforce map consistency;
the method specifically includes the following steps:
Step 1, parameter correction
Step 1.1, parameter definition
(1) robot pose
A world coordinate system is established at the mobile robot's starting point; during robot motion, the robot's pose is expressed as:
$$T_{r}=\begin{pmatrix}R_{r}&t_{r}\\0&1\end{pmatrix},\quad R_{r}=\begin{pmatrix}\cos\theta&\sin\theta&0\\-\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix},\quad t_{r}=\begin{pmatrix}x\\y\\0\end{pmatrix}\qquad(1)$$
From the above formula, the mobile robot pose is described by x = (x, y, θ), where x and y are the translations of the mobile robot along the x- and y-axis directions, and θ is the rotation angle of the mobile robot about the z-axis;
In the world coordinate system, the pose of the camera is expressed as:
$$T_{c}=\begin{pmatrix}R_{c}&t_{c}\\0&1\end{pmatrix}\qquad(2)$$
where T_c ∈ SE(3), SE(3) is the special rigid transformation group, R_c is a 3 × 3 rotation matrix, and t_c is a 3 × 1 translation vector; this pose matrix establishes the correspondence between a point p_w in the world coordinate system and p_c in the camera coordinate system, i.e. p_c = T_c p_w; in this method the camera is fixed on the mobile robot, with a fixed rotation-translation relation T_cr between the camera and the robot; during robot motion, if the robot pose increment within a time interval is T_rs, the corresponding camera pose increment is T_cs, then:
T_rs = T_rc T_cs T_cr  (3)
where T_rc = T_cr^{-1};
(2) Ground plane calibration parameters
The camera is fixed on the mobile robot, and a calibration target is laid flat on the ground; the pose relation between the camera and the robot is established from images of the calibration target acquired by the camera; T_cp ∈ SE(3) is the transformation matrix between the camera and the calibration target, R_cp and t_cp denote the rotation and translation components of T_cp, and its inverse is T_pc = T_cp^{-1}; it is assumed that the robot's coordinate origin coincides with the camera's and that the robot's x-axis and the camera's z-axis point in the same direction; in the target coordinate system, the x-direction vector of the robot is represented by the first two components of the third row vector of T_cp; the transformation between the robot coordinate system and the target coordinate system is then expressed as:
$$p_{r}=T_{rp}p_{p},\quad T_{rp}=\begin{pmatrix}\cos\gamma&\sin\gamma&0&x_{r}\\-\sin\gamma&\cos\gamma&0&y_{r}\\0&0&1&0\\0&0&0&1\end{pmatrix}\qquad(4)$$
where γ is the rotation angle between the robot and target coordinate systems, and (x_r, y_r) are the coordinates of the robot origin in the target coordinate system, determined from the translation vector between camera and target; the transformation between robot and camera is therefore expressed as T_cr = T_cp T_pr, T_rc = T_cr^{-1}; from the pose relation between the camera and the target plane, the ground plane equation in the camera coordinate system is obtained:
(0, 0, 1, 0) T_pc p_c = 0  (5)
where p_c is a 3D point in the camera coordinate system; the ground plane parameters are expressed as (n^T, d), with n = (0, 0, 1) R_pc and d = t_p(3);
Step 1.2, camera calibration
Because lenses inevitably have distortion from manufacture, distortion is mainly divided into radial distortion caused by the lens shape and tangential distortion caused by camera assembly; this method uses the FOV model to correct the monocular camera; the model is a de-distortion method for wide-angle cameras proposed by Devernay and Faugeras, mathematically described as follows:
$$\begin{aligned} u_d &= \begin{bmatrix}u_0\\v_0\end{bmatrix}+\begin{bmatrix}f_u & 0\\ 0 & f_v\end{bmatrix}\frac{r_d}{r_u}\,x_u \\ r_d &= \frac{1}{\omega}\arctan\!\left(2r_u\tan\frac{\omega}{2}\right) \\ r_u &= \frac{\tan(r_d\,\omega)}{2\tan\frac{\omega}{2}} \end{aligned} \qquad(6)$$
In the formula, x_u is the normalized pixel coordinate, u_d is the distorted pixel coordinate, ω is the distortion coefficient, and r_d, r_u are the distances of the normalized coordinate from the center of the coordinate system after and before correction, respectively.
Using this camera model, image pixel coordinates are mapped onto the normalized coordinate plane and, combined with the camera intrinsic matrix K, image distortion is corrected, i.e.:

$$u=Kx_u \qquad(7)$$
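A small sketch of the FOV model of formulas (6) and (7) (NumPy; the values of ω and the intrinsic matrix K are illustrative, not calibrated values from the patent):

```python
import numpy as np

omega = 0.9                      # distortion coefficient (illustrative)

def r_d_from_r_u(r_u):
    """Forward FOV model of formula (6): distorted radius from ideal radius."""
    return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

def r_u_from_r_d(r_d):
    """Inverse FOV model of formula (6): ideal radius from distorted radius."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

# Formula (7): u = K x_u maps a normalized coordinate to pixel coordinates.
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])   # intrinsic matrix (illustrative)
x_u = np.array([0.1, -0.2, 1.0])        # normalized coordinate
u = K @ x_u                             # pixel coordinate

# The two radius maps of formula (6) are exact inverses of each other:
print(r_u_from_r_d(r_d_from_r_u(0.5)))  # ~0.5
```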
Step 2, initialization based on ground features
Step 2.1, feature tracking
FAST corner detection is a simple and fast corner-detection algorithm proposed by Edward Rosten and Tom Drummond. It uses the differences between the gray value of a pixel and those of the neighboring pixels in its neighborhood to judge whether the pixel is a corner, as in the following formula:
$$N=\sum_{s\in c(u_f)}\left|I(s)-I(u_f)\right| \qquad(8)$$
where u_f is a candidate corner, s is any point in the candidate corner's neighborhood, c(u_f) denotes the neighborhood of u_f, and I(s) is the image gray value at s; if N exceeds the threshold condition, the candidate u_f is a corner.
In the camera tracking thread, the system first acquires the texture of the current environment with the camera and builds a four-level Gaussian image pyramid. The FAST corner detection algorithm, combined with block search, establishes the data association between corner features; the projection relation predicts the correspondence between three-dimensional points and feature points in the current frame, and a fixed region near the predicted position is searched to obtain accurate matching relations.
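Formula (8) can be sketched directly (plain Python; the 8-neighborhood and the threshold are illustrative simplifications, since the real FAST detector tests a Bresenham circle of 16 pixels with a contiguity condition):

```python
def corner_score(img, u, v):
    """N of formula (8): sum of |I(s) - I(u_f)| over a neighborhood c(u_f)."""
    center = img[v][u]
    neigh = [(u + du, v + dv) for du in (-1, 0, 1) for dv in (-1, 0, 1)
             if (du, dv) != (0, 0)]
    return sum(abs(img[y][x] - center) for x, y in neigh)

def is_corner(img, u, v, threshold):
    """u_f is accepted as a corner when N exceeds the threshold."""
    return corner_score(img, u, v) > threshold

# A bright dot on a dark background scores high; a flat region scores 0.
img = [[10, 10, 10],
       [10, 90, 10],
       [10, 10, 10]]
print(corner_score(img, 1, 1))   # 8 neighbors * |10 - 90| = 640
```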
Step 2.2, pose initialization
As the robot moves, ground information is acquired by the camera and corner features are tracked with the FAST algorithm. From the data association above, the homography relation between planar points is established, i.e.:

$$\pi(Hu_r)=u_p \qquad(9)$$

where π(p) is the projection function, p = (x, y, z), π(p) = (x/z, y/z, 1), and u_r, u_p are the corresponding feature points in the reference frame and the current frame, respectively. From the camera translation T_c and the ground-plane equation parameters (n^T, d), the homography relation of the planar points is established:

$$H=KT_c(I\,|\,n_d)^TK^{-1} \qquad(10)$$

where n_d = n/d and K is the camera intrinsic matrix. Substituting formula (3) into formula (10), the homography relation between the robot pose and the ground features is now described by:

$$H=KT_{cr}T_{rs}T_{rc}(I\,|\,n_d)^TK^{-1} \qquad(11)$$

Using the homography relation between the image feature points above, the pose estimation model is established:
$$\begin{aligned} x &= \arg\min_x f(x) \\ f(x) &= \sum_p \left\|r(x)\right\|^2 \end{aligned} \qquad(12)$$
where r(x) = π(H(x)u_ref) − u_p is the homography error function, i.e. the projection error function, and H(x) is the homography matrix. Under the small-displacement assumption, the first-order Taylor expansion of the error function r is expressed as:
$$r(\xi)=r(0)+\left.\frac{\partial r}{\partial \xi}\right|_{\xi=0}\xi \qquad(13)$$
The extremum condition of the energy function is then:
$$\frac{\partial f}{\partial \xi}=\sum J^{T}r=0 \qquad(14)$$
I.e.:
$$\begin{aligned} \sum_p J^T\left(r(0)+J\xi\right) &= 0 \\ \xi &= -\left(\sum_p J^T J\right)^{-1}\sum_p J^T r(0) \end{aligned} \qquad(15)$$
where J = ∂r/∂ξ is the Jacobian of the error function. The pose estimate of the mobile robot is obtained by iteratively solving the formula above. To further improve the localization accuracy, robust weights are introduced to build a pose estimation model based on M-estimation; the energy function is now expressed as:
$$f(x)=\sum_p w(r)\left\|r\right\|^2 \qquad(16)$$
where w(r) is the robust Tukey weighting function; the extremum condition of the energy function is now:
$$\xi=-\left(\sum_p J^T w(r) J\right)^{-1}\sum_p J^T w(r)\,r(0) \qquad(17)$$
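The iteratively reweighted least-squares loop of formulas (12)-(17) can be sketched on a toy linear problem (NumPy; the Tukey constant and the synthetic residuals are illustrative, and the patent's residual is the homography projection error with ξ a 6-DoF pose increment rather than a scalar):

```python
import numpy as np

def tukey_w(r, c=4.685):
    """Tukey biweight w(r) of formula (16): smooth down-weighting inside
    the cutoff c, zero weight for gross outliers."""
    w = (1.0 - (r / c) ** 2) ** 2
    return np.where(np.abs(r) < c, w, 0.0)

def irls(J, r0, iters=10):
    """Iterate formula (17): xi = -(sum J^T w J)^(-1) sum J^T w r,
    accumulating the increment and updating the (linear) residual."""
    xi_total = np.zeros(J.shape[1])
    r = r0.copy()
    for _ in range(iters):
        w = tukey_w(r)
        H = J.T @ (w[:, None] * J)    # sum_p J^T w(r) J
        g = J.T @ (w * r)             # sum_p J^T w(r) r
        xi = -np.linalg.solve(H, g)
        xi_total += xi
        r = r + J @ xi                # residual after the increment
    return xi_total

# Toy problem r(xi) = r0 + J xi with one gross outlier in r0:
J = np.array([[1.0], [1.0], [1.0], [1.0]])
r0 = np.array([2.0, 2.1, 1.9, 50.0])   # last entry is an outlier
xi = irls(J, r0)
print(xi)   # close to -2: the outlier is rejected by the zero weight
```

Without the weights, the least-squares solution would be dragged toward the outlier; with Tukey weights the estimate settles on the inlier consensus, which is the robustness the M-estimation step above provides.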
Step 2.3, plane parameter estimation
The robot localizes itself through the homography relation based on ground features, but non-ground features make the localization accuracy hard to guarantee, so a fast non-ground-feature removal technique is adopted. A normal estimation model can be built from the cross product of parallel vectors; however, corner detection errors mean the ideal cross-product condition cannot be met, and that model fits poorly. The projection error r can also serve as the criterion for identifying non-ground-feature information, but its discrimination drops sharply as the distance between a non-ground feature and the observation point decreases. Compared with those methods, here, on the basis of the initial pose estimate, the normal estimation model is derived directly from the homography matrix, and whether a corner is a ground feature is judged from the difference between its estimate and the ground-plane equation parameters. From the planar homography relation, the estimation model for the plane parameter n_d is established:
$$f(n_d)=\sum_p\left\|r(n_d)\right\|^2 \qquad(18)$$
where r(n_d) = π(H(n_d)u_ref) − u_p is the homography projection error function. As in the pose estimation solution, under the small-displacement assumption the solving model of formula (18) is built from the first-order Taylor expansion of the error function; the plane parameter is then obtained by iteratively solving (18):
$$\begin{aligned} n_d &= n_d+\hat{n} \\ \hat{n} &= -\left(\sum_p J_n^T J_n\right)^{-1}\sum_p J_n^T\,r(n_d) \end{aligned} \qquad(19)$$
where J_n = ∂r/∂n_d is the Jacobian of the projection error with respect to the plane parameter.
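The ground / non-ground decision of step 2.3, comparing a per-corner plane-parameter estimate against the ground-plane parameter, can be sketched as follows (NumPy; the parameter values and tolerance are illustrative, not from the patent):

```python
import numpy as np

def is_ground_feature(n_d_est, n_d_plane, tol):
    """Classify a corner as a ground feature when its estimated plane
    parameter n_d differs little from the ground-plane parameter n/d."""
    return bool(np.linalg.norm(n_d_est - n_d_plane) < tol)

n_d_plane = np.array([0.0, 0.0, 1.0]) / 2.0      # ground plane n/d (illustrative)
estimates = [np.array([0.01, -0.02, 0.49]),       # close to the plane -> ground
             np.array([0.30,  0.40, 0.90])]       # far from it -> non-ground
flags = [is_ground_feature(e, n_d_plane, tol=0.1) for e in estimates]
print(flags)   # [True, False]
```

Corners flagged non-ground are excluded from the homography-based localization, which is the accuracy safeguard described above.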
Step 3, pose estimation
The main function of the pose estimation module is the real-time update of the camera pose; its implementation proceeds as follows. After the initialization work is complete, a pose estimation model is built from the projection error, with the following mathematical description:
$$\xi=\arg\min_{\xi}\sum_j \mathrm{Obj}\!\left(\frac{|e_j|}{\sigma_j},\,\sigma_T\right) \qquad(20)$$
where e_j is the projection error, Obj(·, σ_T) is the Tukey biweight objective function, σ_T is the unbiased estimate of the feature-matching standard deviation, and μ is the six-element description vector of the camera pose.
For failure cases during tracking, the feature observation ratio is proposed to assess the tracking quality of each frame. If the ratio falls below a set threshold, the tracking quality is poor: tracking continues, but the system does not add the frame to the map. If the observation ratio stays below the threshold over the tracking of several consecutive frames, tracking is considered lost and the tracking process is re-initialized. Through the steps above, accurate pose estimation is achieved and tracking proceeds normally.
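The frame-quality rule above can be sketched as a small monitor (plain Python; the ratio threshold and the consecutive-frame count are illustrative, since the patent does not fix the numbers):

```python
class TrackingMonitor:
    """Assess per-frame tracking quality from the feature observation
    ratio: a poor frame is still tracked but not added to the map, and
    several poor frames in a row mean tracking is lost."""

    def __init__(self, ratio_threshold=0.3, max_bad_frames=5):
        self.ratio_threshold = ratio_threshold
        self.max_bad_frames = max_bad_frames
        self.bad_in_a_row = 0

    def update(self, observed, predicted):
        ratio = observed / predicted if predicted else 0.0
        if ratio < self.ratio_threshold:
            self.bad_in_a_row += 1
            if self.bad_in_a_row >= self.max_bad_frames:
                return "lost"      # re-initialize the tracking process
            return "poor"          # keep tracking, do not map this frame
        self.bad_in_a_row = 0
        return "good"              # frame may be added to the map

m = TrackingMonitor()
print([m.update(o, 100) for o in (80, 20, 20, 20, 20, 20)])
```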
Step 4, map building
Step 4.1, map initialization
The system builds the initial map of the current environment using ground features and the standard stereo-camera algorithm model. During map initialization, FAST corner matching between images is combined with the RANSAC algorithm and the ground-feature-based initial pose estimation method: the three-dimensional coordinates of the current feature points are computed by triangulation, the starting point is fixed as the origin of the global world coordinate system, and the initialization of the sparse three-dimensional map is completed.
In the initial mapping stage, the two key frames are obtained as follows. The camera is mounted on the mobile robot, and initialization begins once the camera-to-ground calibration is done. During initialization the mobile robot starts to move while the camera captures corner information in the current scene and establishes associations; an initial estimate of the current pose is obtained by solving formula (12). When this initial estimate exceeds a set threshold d_0, an accurate estimate of the current pose is obtained with the ground-feature-based pose estimation method of step 2.2.
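The triangulation used to initialize the sparse map can be sketched with the standard linear (DLT) method, an assumption here since the patent only names triangulation (NumPy; the projection matrices and the test point are illustrative):

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 are 3x4 projection matrices; u1, u2 are pixel observations."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A
    return X[:3] / X[3]              # dehomogenize

# Two views: reference camera and a camera translated along x (baseline 1).
K = np.array([[100.0, 0, 0], [0, 100.0, 0], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
u1 = P1 @ np.append(X_true, 1.0); u1 = u1[:2] / u1[2]
u2 = P2 @ np.append(X_true, 1.0); u2 = u2[:2] / u2[2]
print(triangulate(P1, P2, u1, u2))   # recovers X_true
```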
Step 4.2, key frame update and epipolar search
At map initialization the map contains only two key frames and can describe only a small nearby volume, so new key frames and map features must be added as the camera leaves the initial position. When the number of image frames since the current key frame exceeds a threshold condition, tracking quality is as good as possible, and the distance between the camera and the nearest map key point is small, the add-key-frame procedure is executed automatically. Owing to the real-time constraint, the tracking system may estimate only a subset of the visible features in a frame, so the mapping thread must re-plan and measure the remaining map features.
When a key frame is updated, the system first applies the Shi-Tomasi assessment to all corners in the newly added key frame to obtain the current corners with salient features; salient points near successfully observed feature points are discarded, and the remaining salient points serve as candidate map nodes. Because a new map node needs a known depth, which cannot be obtained from a single image, the nearest key frame is chosen as the second view, and epipolar search with block matching establishes sub-pixel-accurate feature correspondences. In the second view, the zero-mean SSD method, combined with the pose estimation model, achieves accurate camera relocalization; the matched points are finally projected into space to generate the current global three-dimensional map of the environment.
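The zero-mean SSD block match can be sketched as follows (NumPy; the patch size and candidate set are illustrative, and in the method above the candidates lie along the epipolar line):

```python
import numpy as np

def zmssd(a, b):
    """Zero-mean sum of squared differences between equal-size patches:
    subtracting each patch's mean makes the score insensitive to a
    uniform brightness offset between the two views."""
    da = a - a.mean()
    db = b - b.mean()
    return float(((da - db) ** 2).sum())

def best_match(ref_patch, candidates):
    """Index of the candidate patch with the lowest ZMSSD score."""
    scores = [zmssd(ref_patch, c) for c in candidates]
    return int(np.argmin(scores))

ref = np.array([[1.0, 2.0], [3.0, 4.0]])
cands = [np.array([[9.0, 1.0], [1.0, 9.0]]),
         ref + 10.0,                       # same texture, brighter: score 0
         np.array([[0.0, 0.0], [0.0, 5.0]])]
print(best_match(ref, cands))              # 1
```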
Step 4.3, optimization
Local and global LM bundle adjustment algorithms optimize the global consistency of the current map and improve robustness. The mathematical description of the bundle adjustment algorithm is:
$$\left\{\{\mu_2\ldots\mu_N\},\{p'_1\ldots p'_M\}\right\}=\underset{\{\{\mu\},\{p\}\}}{\arg\min}\;\sum_{i\in X\cup Y}\sum_{j\in S_i}\mathrm{Obj}\!\left(\frac{|e_{ji}|}{\sigma_{ji}},\,\sigma_T\right) \qquad(21)$$
where σ_ji is the unbiased estimate of the FAST feature-matching standard deviation in the i-th key frame, μ_i is the six-element description vector of the i-th key frame's pose, and p_i is a point in the global map.
Continuous bundle adjustment drives the robust cost function of formula (21) to a minimum. Global bundle adjustment adjusts the poses of all key frames and the positions of all map points; exploiting the sparseness inherent in structure-from-motion problems drops the complexity of the global matrix decomposition by a cubic order, from O((N+M)^3) to O(N^3). It remains limited by the outer products generated at most measurements, however, so as the map grows this computation increases dramatically.
To address this, local bundle adjustment is introduced, adjusting only a subset of the key frame poses; the objective function becomes:
$$\left\{\{\mu_x\in X\},\{p'_z\in Z\}\right\}=\underset{\{\{\mu\},\{p\}\}}{\arg\min}\;\sum_{i\in X\cup Y}\sum_{j\in Z\cap S_i}\mathrm{Obj}(i,j) \qquad(22)$$
where X is the set of key frames to be adjusted, Y is the set of anchored (fixed) key frames, and Z is the subset of map points. X consists of the newest key frame in the map and the 4 key frames closest to it; the map points visible in all key frames of X constitute the set Z; finally, Y contains all key frames from which measurement data of points in Z were acquired. Local bundle adjustment thus optimizes the poses of the newest and nearest key frames and updates all the involved map points, while the complexity is significantly reduced, to no more than O(NM).
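The selection of the sets X, Z, and Y for formula (22) can be sketched from key-frame visibility lists (plain Python; the data structures are illustrative, and "nearest" is approximated here by insertion order rather than spatial distance):

```python
def local_ba_sets(keyframes, visibility, n_nearest=4):
    """Choose the local-bundle-adjustment sets:
    X: newest key frame plus its n nearest key frames (poses to adjust),
    Z: all map points visible in any key frame of X,
    Y: every other key frame holding a measurement of a point in Z (fixed)."""
    X = set(keyframes[-(n_nearest + 1):])
    Z = set().union(*(visibility[k] for k in X))
    Y = {k for k in keyframes if visibility[k] & Z}
    return X, Z, Y - X

keyframes = ["k1", "k2", "k3", "k4", "k5", "k6", "k7"]
visibility = {"k1": {1, 2}, "k2": {2, 3}, "k3": {3, 4}, "k4": {4, 5},
              "k5": {5, 6}, "k6": {6, 7}, "k7": {7, 8}}
X, Z, Y = local_ba_sets(keyframes, visibility)
print(sorted(X), sorted(Z), sorted(Y))
```

Only the poses in X are free variables of formula (22); Y supplies measurements that constrain Z without being adjusted, which is what keeps the cost bounded as the map grows.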
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510025810.7A CN104732518B (en) | 2015-01-19 | 2015-01-19 | A kind of PTAM improved methods based on intelligent robot terrain surface specifications |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104732518A CN104732518A (en) | 2015-06-24 |
CN104732518B true CN104732518B (en) | 2017-09-01 |
Family
ID=53456386
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109102525A (en) * | 2018-07-19 | 2018-12-28 | 浙江工业大学 | A kind of mobile robot follow-up control method based on the estimation of adaptive pose |
CN109993798A (en) * | 2019-04-09 | 2019-07-09 | 上海肇观电子科技有限公司 | Method, equipment and the storage medium of multi-cam detection motion profile |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102435188A (en) * | 2011-09-15 | 2012-05-02 | 南京航空航天大学 | Monocular vision/inertia autonomous navigation method for indoor environment |
CN102519481A (en) * | 2011-12-29 | 2012-06-27 | 中国科学院自动化研究所 | Implementation method of binocular vision speedometer |
CN104050671A (en) * | 2014-06-25 | 2014-09-17 | 西安石油大学 | Oil well pipe sleeve damage image edge detection method based on grey correlation analysis and Zernike moment |
Non-Patent Citations (5)
Title |
---|
Mobile Robot 3D Map Building Based on Laser Ranging and Stereovision; Jia Songmin et al.; Proceedings of the 2011 IEEE International Conference on Mechatronics and Automation; 20110810; pp. 1774-1779 * |
Mobile Robot Localization and Map Building Based on Laser Ranging and PTAM; Sheng Jinbo et al.; Proceedings of the 2011 IEEE International Conference on Mechatronics and Automation; 20110810; pp. 1015-1020 * |
Map stitching for multi-robot systems based on hierarchical topology under the RTM framework; Jia Songmin et al.; Robot; 20130531; vol. 35, no. 3, pp. 292-298 * |
3D SLAM for mobile robots based on an RGB-D camera; Jia Songmin et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 20140131; vol. 42, no. 1, pp. 103-109 * |
Monocular vision 3D reconstruction method based on a variational model; Jia Songmin et al.; Acta Optica Sinica; 20140430; vol. 34, no. 4, pp. 0415002-1 to 0415002-7 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104732518B (en) | A kind of PTAM improved methods based on intelligent robot terrain surface specifications | |
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
CN106548173B (en) | A kind of improvement no-manned plane three-dimensional information acquisition method based on classification matching strategy | |
CN108534782B (en) | Binocular vision system-based landmark map vehicle instant positioning method | |
CN101908230B (en) | Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method | |
CN109544636A (en) | Fast monocular visual odometry navigation and localization method fusing the feature-point method and the direct method | |
WO2018159168A1 (en) | System and method for virtually-augmented visual simultaneous localization and mapping | |
CN105352509B (en) | Unmanned aerial vehicle moving-target tracking and localization method under geographic-information spatio-temporal constraints | |
CN109509230A (en) | SLAM method for multi-lens combined panoramic cameras | |
CN110807809B (en) | Light-weight monocular vision positioning method based on point-line characteristics and depth filter | |
CN108801274B (en) | Landmark map generation method integrating binocular vision and differential satellite positioning | |
CN105809687A (en) | Monocular vision ranging method based on edge point information in image | |
CN109579825B (en) | Robot positioning system and method based on binocular vision and convolutional neural network | |
CN104374395A (en) | Graph-based visual SLAM (simultaneous localization and mapping) method | |
CN103106688A (en) | Indoor three-dimensional scene rebuilding method based on double-layer rectification method | |
CN103994765B (en) | Positioning method of inertial sensor | |
CN113658337B (en) | Multi-mode odometer method based on rut lines | |
CN112767546B (en) | Binocular image-based visual map generation method for mobile robot | |
Huang et al. | Metric monocular localization using signed distance fields | |
Liu et al. | Dense stereo matching strategy for oblique images that considers the plane directions in urban areas | |
CN116576850B (en) | Pose determining method and device, computer equipment and storage medium | |
CN110163915B (en) | Spatial three-dimensional scanning method and device for multiple RGB-D sensors | |
CN103236053B (en) | MOF method for moving object detection on a mobile platform | |
CN109961476A (en) | Vision-based localization method for underground parking lots | |
JP2018116147A (en) | Map creation device, map creation method and map creation computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170901; Termination date: 20200119 | |