CN104732518A - PTAM improvement method based on ground characteristics of intelligent robot - Google Patents


Publication number: CN104732518A
Authority: CN (China)
Prior art keywords: camera, pose, map, point, robot
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number: CN201510025810.7A
Other languages: Chinese (zh)
Other versions: CN104732518B (en)
Inventor
贾松敏
王可
宣璇
张鹏
董政胤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201510025810.7A priority Critical patent/CN104732518B/en
Publication of CN104732518A publication Critical patent/CN104732518A/en
Application granted granted Critical
Publication of CN104732518B publication Critical patent/CN104732518B/en
Status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a PTAM improvement method based on the ground features of an intelligent robot. First, parameter calibration is completed, comprising parameter definition and camera calibration. Next, the camera acquires texture information of the current environment, a four-level Gaussian image pyramid is constructed, feature information in the current image is extracted with the FAST corner detection algorithm, and data association between corner features is established to obtain a pose estimation model. In the initial mapping stage, the camera, mounted on the mobile robot, acquires two keyframes; as the robot begins to move during initialization, the camera captures corner information in the current scene and establishes associations. After the sparse 3D map is initialized, keyframes are updated, sub-pixel mapping relations between feature points are established by epipolar search and block matching, and accurate relocalization of the camera is achieved with the pose estimation model. Finally, the matched points are projected into space to build a 3D map of the current global environment.

Description

A PTAM improvement method based on the ground features of an intelligent robot
Technical field
The invention belongs to the field of robot vision and relates to an improvement of the PTAM algorithm based on ground features.
Background technology
As robots and humans interact ever more closely, intelligent-robot technology has attracted great attention. Simultaneous Localization and Mapping (SLAM) is currently one of the mainstream localization technologies for intelligent mobile robots. It is essentially an estimation problem: using data from internal and external sensors, the robot's position at a given moment is computed while the map it relies on is built at the same time. Vision-based SLAM belongs to the field of visual measurement, and vision sensors offer unique advantages: small size, light weight, low price, easy installation, and very rich external information. These advantages have further promoted research on and application of vision-based SLAM. Application No. 201310678018.2 adopts SURF as the feature detection operator for monocular-vision SLAM, innovating in three respects: interest-point detection, SURF descriptor generation, and SURF point matching. It proposes a region-growing SURF feature matching method for simultaneous localization and mapping, so that during descriptor matching each descriptor is compared only with its most probable match, significantly reducing the number of comparisons in the monocular-vision SLAM problem and improving matching speed. However, that method still cannot build a metric map or provide three-dimensional feature information, and its robustness and real-time performance need further improvement.
Structure from Motion (SFM) solves for the camera trajectory and the scene structure model simultaneously. As a representative SFM method, G. Klein et al. first proposed PTAM (Parallel Tracking and Mapping) in the augmented reality (AR) field in 2007, to solve real-time environment-awareness problems. The unimproved PTAM algorithm, however, cannot build a metric map and places strict restrictions on camera movement.
Summary of the invention
To address the above problems, the present invention proposes an improved PTAM algorithm based on ground features. With ground features added, PTAM estimates the current pose of the monocular camera and the 3D points simultaneously; feature points detected with the FAST corner method are updated continuously, and optimal local and global bundle adjustment keeps refining the camera pose and the 3D feature points, achieving high-precision localization and map building. The invention effectively solves the problem that a metric map cannot be created, greatly improves real-time performance and 3D feature extraction, and is of great significance for robot navigation, localization, and 3D reconstruction.
The present invention adopts following technical scheme:
First, parameter calibration is completed, comprising parameter definition and camera calibration. The features in the keyframes are then tracked, the pose is initialized, and on this basis the plane parameters are estimated. The camera then acquires texture information of the current environment, a four-level Gaussian image pyramid is constructed, feature information in the current image is extracted with the FAST corner detection algorithm, and data association between corner features is established to obtain a pose estimation model. In the initial mapping stage, two keyframes are acquired with the camera mounted on the mobile robot; after the camera and the ground have been calibrated, initialization begins. During initialization the robot starts moving while the camera captures and associates corner information in the current scene; an initial pose estimate is obtained from the pose estimation model, and the ground-feature-based pose estimation method then refines it into an accurate estimate of the current pose. After the sparse 3D map is initialized, keyframes are updated, sub-pixel mapping relations between feature points are established by epipolar search and block matching, and accurate relocalization of the camera is achieved with the pose estimation model. Finally, the matched points are projected into space to build the 3D map of the current global environment, while global and local bundle adjustment optimizes map consistency.
Specifically comprise the following steps:
Step 1, parameter calibration
Step 1.1, parameter definition
(1) Robot pose
A world coordinate system is established at the mobile robot's starting point. During motion, the robot pose can be expressed as:

$$T_r = \begin{bmatrix} R_r & t_r \\ 0 & 1 \end{bmatrix}, \quad R_r = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad t_r = \begin{bmatrix} x \\ y \\ 0 \end{bmatrix} \tag{1}$$

From the above, the mobile robot pose can be described by $x = (x, y, \theta)$, where $x, y$ are the translations along the x-axis and y-axis respectively, and $\theta$ is the rotation of the robot about the z-axis.
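As an illustrative sketch (not part of the claimed method), the pose parameterization of Eq. (1) can be written with NumPy; the function name `robot_pose` is our own:

```python
import numpy as np

def robot_pose(x, y, theta):
    """Homogeneous pose T_r of the robot in the world frame, Eq. (1).

    The planar pose (x, y, theta) is embedded in SE(3): a rotation about
    the z-axis by theta plus a translation (x, y, 0), using the sign
    convention of Eq. (1).
    """
    c, s = np.cos(theta), np.sin(theta)
    R_r = np.array([[c,   s,   0.0],
                    [-s,  c,   0.0],
                    [0.0, 0.0, 1.0]])
    T = np.eye(4)
    T[:3, :3] = R_r
    T[:3, 3] = [x, y, 0.0]
    return T
```

For a pure translation the rotation block stays the identity; for a pure rotation the matrix stays orthonormal, as required of an SE(3) element.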
In the world coordinate system, the pose of the camera can be expressed as:

$$T_c = \begin{bmatrix} R_c & t_c \\ 0 & 1 \end{bmatrix} \tag{2}$$

where $T_c \in SE(3)$, the special Euclidean (rigid-body) group, $R_c$ is a $3\times3$ rotation matrix, and $t_c$ is a $3\times1$ translation vector. This pose matrix establishes the correspondence between a point $p_c$ in the camera frame and $p_w$ in the world frame, i.e. $p_c = T_c p_w$. As shown in Fig. 2, the camera is fixed on the mobile robot, so a fixed rotation-translation relation $T_{cr}$ exists between camera and robot. If, during motion, the robot pose increment over some interval is $T_{rs}$ and the corresponding camera pose increment is $T_{cs}$, then:

$$T_{rs} = T_{rc} T_{cs} T_{cr} \tag{3}$$

where $T_{cr} = T_{rc}^{-1}$.
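The frame-change rule of Eq. (3) can be sketched as follows, assuming NumPy; the mounting rotation used in the example (camera z-axis aligned with the robot x-axis) is hypothetical:

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def robot_increment(T_rc, T_cs):
    """Eq. (3): conjugate the camera pose increment by the fixed extrinsic T_rc."""
    T_cr = np.linalg.inv(T_rc)
    return T_rc @ T_cs @ T_cr

# Hypothetical mount: camera z (optical axis) points along the robot x-axis.
R_mount = np.array([[0.0, 0.0, 1.0],
                    [-1.0, 0.0, 0.0],
                    [0.0, -1.0, 0.0]])
T_rc = se3(R_mount, np.array([0.1, 0.0, 0.3]))

# Camera moves 0.1 m along its own z-axis ...
T_cs = se3(np.eye(3), np.array([0.0, 0.0, 0.1]))
T_rs = robot_increment(T_rc, T_cs)
# ... which Eq. (3) converts to a 0.1 m advance along the robot x-axis.
```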
(2) Ground plane calibration parameters
The camera is fixed on the mobile robot, and a calibration target is laid flat on the ground. By imaging the target, the camera establishes the pose relation between camera and robot. Let $T_{cp} \in SE(3)$ be the transform between camera and target, with $R_{cp}, t_{cp}$ its rotation and translation components, and $T_{pc} = T_{cp}^{-1}$ its inverse. Assume the origins of the robot and camera coincide, and that the robot's x-axis points the same way as the camera's z-axis. In the target frame, the robot's x direction is given by the first two components of the third column vector of $T_{cp}$. The transform between the robot frame and the target frame can then be expressed as:

$$p_r = T_{rp} p_p, \quad T_{rp} = \begin{bmatrix} \cos\gamma & \sin\gamma & 0 & x_r \\ -\sin\gamma & \cos\gamma & 0 & y_r \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4}$$

where $\gamma$ is the rotation angle between the robot frame and the target frame, and $(x_r, y_r)$ are the coordinates of the robot origin in the target frame, determined from the translation between camera and target. The transform between robot and camera can therefore be expressed as $T_{cr} = T_{cp} T_{pr}$. From the pose relation between the camera and the target plane, the ground plane equation in the camera frame is obtained:

$$(0, 0, 1, 0)\, T_{pc}\, p_c = 0 \tag{5}$$

where $p_c$ is a 3D point in the camera frame. The ground plane parameters can be expressed as $(n^T, d)$, with $n = (0, 0, 1) R_{pc}$ and $d = t_{pc}(3)$.
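A minimal sketch of Eq. (5), assuming NumPy: since the target plane is $z_p = 0$ in the target frame, the plane parameters in the camera frame are simply the third row of $T_{pc}$. The function name is illustrative:

```python
import numpy as np

def ground_plane_from_calibration(T_pc):
    """Eq. (5): plane (n, d) in the camera frame from the camera-to-target
    transform T_pc (p_p = T_pc @ p_c, homogeneous). The target plane is
    z_p = 0, so points on the ground satisfy n . p_c + d = 0 with (n, d)
    read off the third row of T_pc.
    """
    n = T_pc[2, :3]
    d = T_pc[2, 3]
    return n, d

# Illustrative calibration result: camera looking straight down the target
# normal from a height h, so T_pc only shifts the z coordinate.
h = 1.2
T_pc = np.eye(4)
T_pc[2, 3] = -h
n, d = ground_plane_from_calibration(T_pc)
```

Any point at depth $h$ along the optical axis then satisfies the plane equation exactly.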
Step 1.2, camera calibration
Lenses exhibit distortion due to manufacturing: mainly radial distortion caused by the lens shape, and tangential distortion caused by camera assembly. Here the FOV model is used to rectify the monocular camera; the calibration model is shown in Fig. 3. This model, proposed by Devernay and Faugeras as an undistortion method for wide-angle cameras, is described mathematically as:

$$u_d = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix} \frac{r_d}{r_u}\, x_u, \quad r_d = \frac{1}{\omega}\arctan\!\left(2 r_u \tan\frac{\omega}{2}\right), \quad r_u = \frac{\tan(r_d\, \omega)}{2\tan\frac{\omega}{2}} \tag{6}$$

where $x_u$ is the normalized pixel coordinate, $u_d$ is the distorted pixel coordinate, $\omega$ is the distortion coefficient, and $r_d, r_u$ are the distances of the normalized coordinate from the coordinate-system center after and before rectification, respectively.
With this camera model, image pixel coordinates are mapped to the normalized coordinate plane and, combined with the camera intrinsic matrix $K$, image distortion is corrected, that is:

$$u = K x_u \tag{7}$$
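The radial mappings of the FOV model in Eq. (6) can be sketched as follows (NumPy assumed; the distortion coefficient in the usage example is arbitrary). The two formulas are mutual inverses, which the round trip below checks:

```python
import numpy as np

def fov_distort_radius(r_u, omega):
    """Eq. (6): distorted radius r_d from the undistorted radius r_u."""
    return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

def fov_undistort_radius(r_d, omega):
    """Eq. (6): undistorted radius r_u from the distorted radius r_d."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

def undistort_normalized(x_d, omega):
    """Map a distorted normalized coordinate onto the ideal pinhole plane
    by rescaling its radius (the angular direction is unchanged)."""
    x_d = np.asarray(x_d, float)
    r_d = np.hypot(x_d[0], x_d[1])
    if r_d < 1e-12:
        return x_d
    return x_d * fov_undistort_radius(r_d, omega) / r_d

# Arbitrary example values for a round trip.
omega = 0.9
x_u = np.array([0.3, -0.2])
r_u = np.hypot(x_u[0], x_u[1])
x_d = x_u * fov_distort_radius(r_u, omega) / r_u   # apply distortion
x_back = undistort_normalized(x_d, omega)          # and undo it
```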
Step 2, initialization based on ground features
Step 2.1, feature tracking
FAST corner detection is a simple, fast corner detection algorithm proposed by Edward Rosten and Tom Drummond. It uses the gray-value differences between a pixel and the neighboring pixels within a certain neighborhood (usually a circular region) to decide whether the pixel is a corner, as in:

$$N = \sum_{s \in c(u_f)} |I(s) - I(u_f)| \tag{8}$$

where $u_f$ is the candidate corner, $s$ is any point in the candidate corner's neighborhood, $c(u_f)$ denotes that neighborhood, and $I(x)$ is the image intensity at $x$. If $N$ exceeds a threshold, the candidate point is accepted as a corner.
Before the camera tracking thread runs, the system uses the camera to acquire texture information of the current environment and builds a four-level Gaussian image pyramid. FAST corner detection combined with block search establishes data association between corner features; the projection relation predicts the correspondence between 3D points and feature points in the current frame, and a fixed region around the predicted position is searched to obtain accurate matching relations.
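A toy implementation of the corner criterion of Eq. (8), assuming NumPy. Production FAST runs a contiguous-segment test on the radius-3 Bresenham circle; this sketch only evaluates the absolute-difference sum of Eq. (8) on the same circle:

```python
import numpy as np

# The 16 offsets of the radius-3 Bresenham circle used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def corner_score(img, u, v):
    """Eq. (8): sum of absolute intensity differences between the center
    pixel (u, v) and the 16 pixels on the surrounding circle."""
    c = float(img[v, u])
    return sum(abs(float(img[v + dv, u + du]) - c) for du, dv in CIRCLE)

def is_corner(img, u, v, thresh):
    """Accept the candidate when the score exceeds the threshold."""
    return corner_score(img, u, v) > thresh

# Synthetic test image: a bright square whose corner sits at (6, 6).
img = np.zeros((13, 13))
img[:7, :7] = 255.0
```

At the square's corner many circle pixels differ from the center, so the score is large; deep inside the flat bright region it is zero.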
Step 2.2, pose initialization
As the robot moves, ground information is acquired by the camera, and FAST corner features are tracked. From the data association above, the homography relation of planar points can be established, that is:

$$\pi(H u_r) = u_p \tag{9}$$

where $\pi(p)$ is the projection function, $p = (x, y, z)$, $\pi(p) = (x/z, y/z, 1)$, and $u_r, u_p$ are corresponding feature points in the reference frame and the current frame. From the camera motion $T_c$ and the ground plane parameters $(n^T, d)$, the planar homography can be established:

$$H = K\, T_c\, (I \mid n_d)^T K^{-1} \tag{10}$$

where $n_d = n/d$ and $K$ is the camera intrinsic matrix. Substituting Eq. (3) into Eq. (10), the homography relation between the robot pose and the ground features can be described as:

$$H = K\, T_{cr}\, T_{rs}\, T_{rc}\, (I \mid n_d)^T K^{-1} \tag{11}$$
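The planar homography of Eq. (10) can be sketched as follows, assuming NumPy and the plane convention $n \cdot p = d$ in the reference camera frame; the sign of the $t\,n_d^T$ term depends on that convention, and intrinsics and poses here are illustrative. For a point on the plane, the induced homography reproduces the image correspondence of Eq. (9):

```python
import numpy as np

def pi(p):
    """Projection function of Eq. (9): (x/z, y/z, 1)."""
    return p / p[2]

def plane_homography(K, R, t, n, d):
    """Eq. (10) in the expanded form H = K (R + t n_d^T) K^{-1},
    n_d = n / d, for the plane n . p = d in the reference frame."""
    n_d = np.asarray(n, float) / d
    return K @ (R + np.outer(t, n_d)) @ np.linalg.inv(K)

# Illustrative intrinsics and a ground plane z = 2 in the reference frame.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
n, d = np.array([0.0, 0.0, 1.0]), 2.0
R = np.eye(3)                        # small translation, no rotation
t = np.array([0.1, 0.0, -0.05])
H = plane_homography(K, R, t, n, d)

p_w = np.array([0.4, -0.2, 2.0])     # a point on the plane
u_ref = pi(K @ p_w)                  # pixel in the reference view
u_cur = pi(K @ (R @ p_w + t))        # pixel in the current view
```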
Using the homography relation of the image feature points, the pose estimation model can be established:

$$x = \arg\min_x f(x), \quad f(x) = \sum_p \|r(x)\|^2 \tag{12}$$

where $r(x) = \pi(H(x) u_{ref}) - u_p$ is the homography error function, i.e. the projection error function, and $H(x)$ is the homography matrix. Under the small-displacement assumption, the first-order Taylor expansion of the error function $r$ is:

$$r(\xi) \approx r(0) + \left.\frac{\partial r}{\partial \xi}\right|_{\xi=0} \xi \tag{13}$$

The extremum condition of the energy function is then:

$$\frac{\partial f}{\partial \xi} = \sum J^T r = 0 \tag{14}$$

that is:

$$\sum_p J^T (r(0) + J\xi) = 0, \quad \xi = -\left(\sum_p J^T J\right)^{-1} \sum_p J^T r(0) \tag{15}$$

The mobile robot's pose estimate is obtained by iterating the above. To further improve localization accuracy, a pose estimation model with robust weights based on M-estimation is introduced here; the energy function becomes:

$$f(x) = \sum_p w(r) \|r\|^2 \tag{16}$$

where $w(r)$ is the robust Tukey weighting function; the extremum condition of the energy function is now:

$$\xi = -\left(\sum_p J^T w(r) J\right)^{-1} \sum_p J^T w(r)\, r(0) \tag{17}$$
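The iteratively reweighted Gauss-Newton scheme of Eqs. (15)-(17) can be illustrated on a toy residual: estimating a 2D translation, for which the Jacobian is the identity. NumPy is assumed, and the Tukey cutoff `c` is illustrative; the gross outlier in the example receives zero weight and does not bias the estimate:

```python
import numpy as np

def tukey_weight(r, c=4.685):
    """Tukey biweight: smoothly down-weights residuals and zeroes any
    residual whose magnitude reaches the cutoff c (Eq. (16))."""
    a = np.abs(np.asarray(r, float)) / c
    w = (1.0 - a ** 2) ** 2
    w[a >= 1.0] = 0.0
    return w

def irls_translation(a, b, iters=10, c=1.0):
    """Eqs. (15)-(17) on a toy model: residual r_i = (a_i + xi) - b_i,
    Jacobian J_i = I, solved by iteratively reweighted least squares."""
    xi = np.zeros(2)
    for _ in range(iters):
        r = (a + xi) - b                           # residuals, shape (N, 2)
        w = tukey_weight(np.linalg.norm(r, axis=1), c)
        if w.sum() == 0:
            break
        # With J = I: J^T W J = sum(w) * I and J^T W r = sum(w_i r_i).
        xi = xi - (w[:, None] * r).sum(axis=0) / w.sum()
    return xi

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
shift = np.array([0.3, -0.2])
b = a + shift
b[0] += 10.0          # gross outlier, rejected by the Tukey weight
xi = irls_translation(a, b, c=1.0)
```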
Step 2.3, plane parameter estimation
The robot localizes itself through the ground-feature homography relation, but non-ground features make it hard to guarantee localization accuracy, so the present invention adopts a fast non-ground-feature removal technique. One approach builds a normal estimation model from the cross product of parallel vectors; experiments show that, because corner detection carries some error, the ideal cross-product condition cannot be met and that model fits poorly. Another uses the projection error $r$ as the criterion for identifying non-ground features, but its discrimination degrades markedly as the distance between non-ground features and the observation point decreases. Compared with these methods, the present invention derives the normal estimation model directly from the homography matrix on the basis of the initial pose estimate, and decides whether a corner is a ground feature from its deviation from the ground plane parameters. From the planar homography relation, the estimation model for the plane parameter $n_d$ can be established:

$$f(n_d) = \sum_p \|r(n_d)\|^2 \tag{18}$$

where $r(n_d) = \pi(H(n_d) u_{ref}) - u_p$ is the homography projection error function. The solution proceeds as in the pose estimation: under the small-displacement assumption, the solving model of Eq. (18) follows from the first-order Taylor expansion of the error function.
The plane parameters are then obtained by iterating Eq. (18):

$$n_d \leftarrow n_d + \hat n, \quad \hat n = -\left(\sum_p J_n^T J_n\right)^{-1} \sum_p J_n^T r(n_d) \tag{19}$$

where $J_n = \dfrac{\partial r(n_d)}{\partial n_d} = \dfrac{\partial \pi}{\partial p}\dfrac{\partial H}{\partial n_d}$.
Step 3, pose estimation
The main function of the pose estimation module is the real-time update of the camera pose. Its implementation is as follows. After initialization is complete, the pose estimation model is built from the projection error; its mathematical description is:

$$\mu = \arg\min_\mu \sum_j \mathrm{Obj}\!\left(\frac{|e_j|}{\sigma_j}, \sigma_T\right) \tag{20}$$

where $e_j$ is the projection error, $\mathrm{Obj}$ is the Tukey biweight objective function, $\sigma_T$ is the unbiased estimate of the feature-matching standard deviation, and $\mu$ is the six-element camera pose description vector.
To handle tracking failures, a feature observation ratio is proposed to assess the tracking quality of each frame. If the ratio falls below a set threshold, tracking quality is judged poor: tracking continues, but the system does not add the frame to the map. If the observation ratio stays below the threshold over several consecutive frames, tracking is considered lost and the tracking process is reinitialized. Through these steps an accurate pose estimate is achieved and tracking proceeds normally.
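The observation-ratio logic described above can be sketched as a small state machine; the thresholds and names here are illustrative, not values taken from the patent:

```python
class TrackingMonitor:
    """Sketch of the per-frame quality check of Step 3: the feature
    observation ratio decides whether a frame may become a keyframe and
    whether tracking is declared lost."""

    def __init__(self, min_ratio=0.3, max_poor_frames=5):
        self.min_ratio = min_ratio          # illustrative threshold
        self.max_poor_frames = max_poor_frames
        self.poor_streak = 0

    def update(self, n_observed, n_attempted):
        ratio = n_observed / max(n_attempted, 1)
        if ratio >= self.min_ratio:
            self.poor_streak = 0
            return "good"                   # frame may be added to the map
        self.poor_streak += 1
        if self.poor_streak >= self.max_poor_frames:
            return "lost"                   # reinitialize tracking
        return "poor"                       # keep tracking, no keyframe
```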
Step 4, map building
Step 4.1, map initialization
The system builds the initial map of the current environment using ground features and the standard stereo-camera algorithm model. During map initialization, FAST corner matches in the images are used, combined with RANSAC and the ground-feature-based initial pose estimation method; triangulation computes the 3D coordinates of the current feature points, the starting point is fixed as the origin of the global world coordinate system, and the initialization of the sparse 3D map is completed.
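Two-view triangulation, as used here to compute the 3D coordinates of feature points, can be sketched with the standard linear (DLT) method; NumPy is assumed and the intrinsics and baseline are illustrative:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 are 3x4 projection matrices; u1, u2 are (u, v) pixels.
    Each pixel contributes two rows of the homogeneous system A X = 0,
    solved via SVD."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point to (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative stereo pair: identity reference view, 0.2 m baseline.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.3, 0.1, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```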
In the initial mapping stage, the two keyframes are obtained as follows. The camera is mounted on the mobile robot, and initialization starts after the camera and the ground have been calibrated. During initialization the robot begins moving while the camera captures and associates corner information in the current scene, and an initial pose estimate is obtained by solving Eq. (12). When this initial estimate exceeds a set threshold $d_0$, the ground-feature-based pose estimation method of Step 2.2 is used to obtain the accurate estimate of the current pose.
Step 4.2, keyframe update and epipolar search
At map initialization the map contains only two keyframes and can describe only a small nearby volume; when the camera leaves the initial position, new keyframes and map features must be added. A new keyframe is added automatically when the number of frames since the current keyframe exceeds a threshold, camera tracking quality is good, and the camera is as close as possible to the nearest map point. Because of real-time constraints, the tracker may estimate only a subset of the visible features in a frame, so the mapping thread must re-plan and measure the remaining map features.
When updating keyframes, the system first applies the Shi-Tomasi measure to all corners in the newly added keyframe to obtain the currently salient corner information; salient points near successfully observed feature points are discarded, and the remaining salient points become candidate map nodes. A new map node needs depth information, which a single frame cannot provide, so the nearest keyframe is chosen as the second view, and epipolar search with block matching establishes sub-pixel mapping relations between feature points. In the second view the zero-mean SSD method is adopted, and accurate relocalization of the camera is achieved with the pose estimation model. Finally, the matched points are projected into space to generate the 3D map of the current global environment.
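The zero-mean SSD block matching along an epipolar line can be sketched as follows, assuming NumPy; the candidate list stands in for actual epipolar sampling, and the brightness offset in the example shows why the zero-mean form is used:

```python
import numpy as np

def zmssd(patch_a, patch_b):
    """Zero-mean SSD: invariant to a constant brightness offset."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float(((a - b) ** 2).sum())

def search_epipolar(img_ref, img_cur, u_ref, candidates, half=3):
    """Score each candidate pixel (on a precomputed epipolar line of
    img_cur) against the reference patch; return the best match."""
    ur, vr = u_ref
    ref = img_ref[vr - half:vr + half + 1, ur - half:ur + half + 1].astype(float)
    best, best_score = None, np.inf
    for (u, v) in candidates:
        cur = img_cur[v - half:v + half + 1, u - half:u + half + 1].astype(float)
        s = zmssd(ref, cur)
        if s < best_score:
            best, best_score = (u, v), s
    return best, best_score

# Synthetic pair: the current image is the reference shifted 3 px right
# with a +20 brightness offset, which zero-mean SSD ignores.
img_ref = np.zeros((21, 21))
img_ref[9:12, 9:12] = np.arange(1, 10).reshape(3, 3) * 10.0
img_cur = np.roll(img_ref, 3, axis=1) + 20.0
best, score = search_epipolar(img_ref, img_cur, (10, 10),
                              [(8, 10), (13, 10), (15, 10)])
```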
Step 4.3, optimization
Optimal local and global LM (Levenberg-Marquardt) bundle adjustment is used to optimize the global consistency of the current map and improve robustness. The mathematical description of this bundle adjustment is:

$$\{\{\mu_2 \ldots \mu_N\}, \{p'_1 \ldots p'_M\}\} = \arg\min_{\{\mu\},\{p\}} \sum_{i=1}^{N} \sum_{j \in S_i} \mathrm{Obj}\!\left(\frac{|e_{ji}|}{\sigma_{ji}}, \sigma_T\right) \tag{21}$$

where $\sigma_{ji}$ is the unbiased estimate of the FAST feature-matching standard deviation in the $i$-th keyframe, $\mu_i$ is the six-element pose description vector of the $i$-th keyframe, and $p_i$ is a point in the global map.
Repeated bundle adjustment minimizes the robust cost function in Eq. (21). Global bundle adjustment refines the poses of all keyframes and the positions of all map points; by exploiting the inherent sparsity of the structure-from-motion problem, the matrix decomposition reduces the cubic complexity from $O((N+M)^3)$ to $O(N^3)$. Even so, as the map grows, this computational cost increases significantly.
To address this, local bundle adjustment is introduced, adjusting only a subset of keyframe poses; the objective function becomes:

$$\{\{\mu_{x \in X}\}, \{p'_{z \in Z}\}\} = \arg\min_{\{\mu\},\{p\}} \sum_{i \in X \cup Y} \sum_{j \in Z \cap S_i} \mathrm{Obj}(i, j) \tag{22}$$

where $X$ is the set of keyframes to adjust, $Y$ is the fixed-frame set, and $Z$ is the map point subset. $X$ consists of the newest keyframe and the 4 closest keyframes in the map; the points visible in the keyframes of $X$ form $Z$; and $Y$ contains all keyframes holding measurements of points in $Z$. Local bundle adjustment thus optimizes the poses of the newest and nearest keyframes and updates all affected map points, while the complexity drops significantly, to no more than $O(NM)$.
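The selection of the sets $X$, $Z$, $Y$ for Eq. (22) can be sketched as follows, assuming NumPy; the `visibility` structure is a hypothetical stand-in for the map's keyframe-to-point bookkeeping:

```python
import numpy as np

def local_ba_window(kf_positions, visibility, latest, k=4):
    """Select the sets of Eq. (22): X = latest keyframe plus its k nearest
    keyframes, Z = map points visible from X, Y = all keyframes observing
    some point of Z. visibility[i] is the set of point ids seen by
    keyframe i."""
    pos = np.asarray(kf_positions, float)
    dists = np.linalg.norm(pos - pos[latest], axis=1)
    order = [i for i in np.argsort(dists) if i != latest]
    X = {latest, *order[:k]}
    Z = set().union(*(visibility[i] for i in X))
    Y = {i for i, vis in enumerate(visibility) if vis & Z}
    return X, Z, Y

# Toy map: keyframe 3 sits far away and sees only point 9, so it joins
# neither the adjusted set X nor the fixed set Y.
kf_positions = [(0, 0), (1, 0), (2, 0), (10, 0), (3, 0), (4, 0)]
visibility = [{1}, {1, 2}, {2, 3}, {9}, {3, 4}, {4, 5}]
X, Z, Y = local_ba_window(kf_positions, visibility, latest=5, k=4)
```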
Compared with the original PTAM algorithm, the present invention adds ground features to improve it, achieving the following advantages:
(1) The initialization procedure is improved, effectively removing PTAM's strict restriction on camera movement.
(2) A relatively accurate metric map can be established effectively, providing a reliable guarantee for further environment measurement.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the camera calibration model;
Fig. 3 is the plane parameter calibration schematic.
Embodiments
The present invention is further elaborated below with reference to the accompanying drawings.
The flow chart of the improved PTAM algorithm based on ground features is shown in Fig. 1; the method comprises the following steps:
Step 1, parameter calibration
Step 1.1, parameter definition
Build the robot pose representation from the relation between the robot coordinate system and the world coordinate system, and determine the plane calibration parameters from the pose relation between the camera and the target.
Step 1.2, camera calibration
The FOV model rectifies the monocular camera: image pixel coordinates are mapped to the normalized coordinate plane and, combined with the camera intrinsic matrix K, image distortion is corrected.
Step 2, initialization based on ground features
Step 2.1, feature tracking
Acquire environment texture information, extract features, and establish corner feature associations.
Step 2.2, pose initialization
Establish the homography relation and the initial pose estimation model to initialize the pose.
Step 2.3, plane parameter estimation
Step 3, pose estimation
On the basis of completed initialization, the initial camera pose is estimated from the ground-feature-based pose estimation model. PTAM then uses the initial camera pose with epipolar search to establish sub-pixel corner matching relations in the image pyramid, feeds them back into the pose estimation model, and achieves accurate relocalization of the camera.
Step 4, map building
Step 4.1, map initialization
The system builds the initial map of the current environment using the standard stereo-camera algorithm model.
Step 4.2, keyframe update and epipolar search
Add new keyframes, establish sub-pixel mapping relations between feature points by epipolar search and block matching, and achieve accurate camera relocalization with the pose estimation model. Finally, project the matched points into space to complete the 3D map of the current global environment.
Step 4.3, optimization
Global and local bundle adjustment optimizes the consistency of the map.

Claims (2)

1. A PTAM improvement method based on the ground features of an intelligent robot, characterized in that:
first, parameter calibration is completed, comprising parameter definition and camera calibration; the features in the keyframes are then tracked, the pose is initialized, and on this basis the plane parameters are estimated; the camera then acquires texture information of the current environment, a four-level Gaussian image pyramid is constructed, feature information in the current image is extracted with the FAST corner detection algorithm, and data association between corner features is established to obtain a pose estimation model; in the initial mapping stage, two keyframes are acquired with the camera mounted on the mobile robot, and initialization begins after the camera and the ground are calibrated; during initialization the robot starts moving while the camera captures and associates corner information in the current scene, an initial pose estimate is obtained from the pose estimation model, and the ground-feature-based pose estimation method then refines it into an accurate estimate of the current pose; after the sparse 3D map is initialized, keyframes are updated, sub-pixel mapping relations between feature points are established by epipolar search and block matching, and accurate relocalization of the camera is achieved with the pose estimation model; finally, the matched points are projected into space to build the 3D map of the current global environment, while global and local bundle adjustment optimizes map consistency;
the method specifically comprises the following steps:
Step 1, parameter calibration
Step 1.1, parameter definition
(1) Robot pose
A world coordinate system is established at the mobile robot's starting point; during motion, the robot pose can be expressed as:

$$T_r = \begin{bmatrix} R_r & t_r \\ 0 & 1 \end{bmatrix}, \quad R_r = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad t_r = \begin{bmatrix} x \\ y \\ 0 \end{bmatrix} \tag{1}$$

from the above, the mobile robot pose can be described by $x = (x, y, \theta)$, where $x, y$ are the translations along the x-axis and y-axis respectively, and $\theta$ is the rotation of the robot about the z-axis;
in the world coordinate system, the camera pose can be expressed as:

$$T_c = \begin{bmatrix} R_c & t_c \\ 0 & 1 \end{bmatrix} \tag{2}$$

where $T_c \in SE(3)$, the special Euclidean (rigid-body) group, $R_c$ is a $3\times3$ rotation matrix, and $t_c$ is a $3\times1$ translation vector; this pose matrix establishes the correspondence between a point $p_c$ in the camera frame and $p_w$ in the world frame, i.e. $p_c = T_c p_w$; in this method the camera is fixed on the mobile robot, so a fixed rotation-translation relation $T_{cr}$ exists between camera and robot; if the robot pose increment over some interval is $T_{rs}$ and the corresponding camera pose increment is $T_{cs}$, then:

$$T_{rs} = T_{rc} T_{cs} T_{cr} \tag{3}$$

where $T_{cr} = T_{rc}^{-1}$;
(2) Ground plane calibration parameters
The camera is fixed on the mobile robot, and a calibration target is laid flat on the ground; by imaging the target, the camera establishes the pose relation between camera and robot; $T_{cp} \in SE(3)$ is the transform between camera and target, $R_{cp}, t_{cp}$ are its rotation and translation components, and $T_{pc} = T_{cp}^{-1}$ is its inverse; assume the origins of the robot and camera coincide and that the robot's x-axis points the same way as the camera's z-axis; in the target frame, the robot's x direction is given by the first two components of the third column vector of $T_{cp}$; the transform between the robot frame and the target frame can then be expressed as:

$$p_r = T_{rp} p_p, \quad T_{rp} = \begin{bmatrix} \cos\gamma & \sin\gamma & 0 & x_r \\ -\sin\gamma & \cos\gamma & 0 & y_r \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4}$$

where $\gamma$ is the rotation angle between the robot frame and the target frame, and $(x_r, y_r)$ are the coordinates of the robot origin in the target frame, determined from the translation between camera and target; the transform between robot and camera can therefore be expressed as $T_{cr} = T_{cp} T_{pr}$; from the pose relation between the camera and the target plane, the ground plane equation in the camera frame is obtained:

$$(0, 0, 1, 0)\, T_{pc}\, p_c = 0 \tag{5}$$

where $p_c$ is a 3D point in the camera frame; the ground plane parameters can be expressed as $(n^T, d)$, with $n = (0, 0, 1) R_{pc}$ and $d = t_{pc}(3)$;
Step 1.2, camera calibration
Lenses exhibit distortion due to manufacturing: mainly radial distortion caused by the lens shape, and tangential distortion caused by camera assembly; this method uses the FOV model to rectify the monocular camera; this model, proposed by Devernay and Faugeras as an undistortion method for wide-angle cameras, is described mathematically as:

$$u_d = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix} \frac{r_d}{r_u}\, x_u, \quad r_d = \frac{1}{\omega}\arctan\!\left(2 r_u \tan\frac{\omega}{2}\right), \quad r_u = \frac{\tan(r_d\, \omega)}{2\tan\frac{\omega}{2}} \tag{6}$$

where $x_u$ is the normalized pixel coordinate, $u_d$ is the distorted pixel coordinate, $\omega$ is the distortion coefficient, and $r_d, r_u$ are the distances of the normalized coordinate from the coordinate-system center after and before rectification, respectively;
with this camera model, image pixel coordinates are mapped to the normalized coordinate plane and, combined with the camera intrinsic matrix $K$, image distortion is corrected, that is:

$$u = K x_u \tag{7}$$
Step 2, based on the initialization of terrain surface specifications
Step 2.1, Feature tracking
FAST corner detection is a simple and fast corner detection algorithm proposed by Edward Rosten and Tom Drummond. The algorithm uses the differences between the gray values of a pixel and its neighbouring pixels within a certain neighbourhood (usually a circular region) to judge whether the pixel is a corner, as in the formula:
N = Σ_{s ∈ c(u_f)} 1( |I(s) − I(u_f)| > ε_d ) (8)
Wherein u_f is the candidate corner, s is any point in the neighbourhood of the candidate corner, c(u_f) denotes the neighbourhood of u_f, I(x) is the image gray value at x, and ε_d is the gray-difference threshold. If N exceeds the threshold condition, the candidate point is a corner;
In the camera tracking thread, the system first uses the camera to obtain texture information of the current environment and builds a four-level Gaussian image pyramid. The FAST corner detection algorithm, combined with block search, establishes data association between corner features; the projection relation is used to predict the correspondence between three-dimensional points and feature points in the current frame, and a fixed region around the predicted position is searched to obtain accurate matching relations;
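A minimal sketch of the corner criterion described above, counting circle pixels whose gray value differs from the centre by more than a threshold (offsets, thresholds and function names are illustrative assumptions; production FAST uses a contiguous-segment test accelerated with machine-generated decision trees):

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_score(img, y, x, eps=20):
    """Count circle pixels whose intensity differs from the centre
    pixel by more than eps (the N of the corner criterion)."""
    c = int(img[y, x])
    return sum(1 for dy, dx in CIRCLE
               if abs(int(img[y + dy, x + dx]) - c) > eps)

def is_corner(img, y, x, eps=20, n_min=9):
    """Candidate is a corner if N exceeds the threshold condition."""
    return fast_score(img, y, x, eps) >= n_min
```

On a synthetic bright square, the square's corner pixel passes the test while an interior pixel of the flat region does not.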
Step 2.2, pose initialization
During robot motion, ground information is obtained by the camera, and corner features are tracked using the FAST algorithm. From the above data association information, the homography relation of planar points can be established, that is:
π(H u_r) = u_p (9)
Wherein, π (p) is projection function, p=(x, y, z), π (p)=(x/z, y/z, 1), u r, u pbe respectively character pair point under reference frame and present frame; According to camera translational movement T cand ground level equation parameter (n t, d), the homography relation of planar point can be set up:
H = K T_c (I | n_d)^T K^(-1) (10)
Wherein n_d = n/d and K is the camera intrinsic matrix. Substituting formula (3) into formula (10), the homography relation between the robot pose and the ground features can now be described as:
H = K T_rc T_rs T_rc (I | n_d)^T K^(-1) (11)
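The plane-induced homography can be illustrated with the common textbook form H = K (R − t n^T / d) K^(-1), which packs the same quantities (camera motion, plane normal and offset) as the notation above; the sketch below is an assumption-laden illustration, not the patent's exact implementation:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography mapping reference-view pixels to current-view pixels
    for points on the plane n . p + d = 0 (reference-camera frame),
    under the camera motion p2 = R p1 + t.
    Common form: H = K (R - t n^T / d) K^{-1}."""
    n_d = np.asarray(n, float) / d
    return K @ (R - np.outer(t, n_d)) @ np.linalg.inv(K)

def project(K, p):
    """pi(p): pinhole projection to pixel coordinates."""
    u = K @ p
    return u[:2] / u[2]
```

The test places a point on the plane z = 2, moves the camera sideways, and checks that H maps the reference pixel exactly onto the current-frame pixel, i.e. π(H u_r) = u_p.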
Using the above homography relation of the image feature points, the pose estimation model can be established:
x* = arg min_x Σ_i ||r_i(x)||^2 (12)
Wherein r(x) = π(H(x) u_ref) − u_p is the homography-relation error function, i.e. the projection error function, and H(x) is the homography matrix. Under the small-displacement assumption, the first-order Taylor expansion of the error function r is expressed as:
r(x) ≈ r(0) + J x,  J = ∂r/∂x |_{x=0} (13)
Now the extremum condition of the energy function is:
∂E/∂x = 0 (14)
That is:
(Σ_i J_i^T J_i) x = −Σ_i J_i^T r_i(0) (15)
The pose estimate of the mobile robot is obtained by solving the above formula iteratively. To further improve positioning accuracy, a pose estimation model with robust weights based on M-estimation is introduced here; the energy function is then expressed as:
E = Σ_i w(r_i) ||r_i(x)||^2 (16)
Wherein w(·) is the robust Tukey weighting function. The extremum condition of the energy function now becomes:
(Σ_i w_i J_i^T J_i) x = −Σ_i w_i J_i^T r_i(0) (17)
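The robust iteratively-reweighted update can be illustrated on a one-dimensional toy problem (estimating a scalar location in the presence of outliers); function names and the Tukey constant c = 4.685 are illustrative choices, not values from the patent:

```python
import numpy as np

def tukey_weight(r, c):
    """Tukey biweight: down-weights large residuals, zeroes outliers."""
    w = np.zeros_like(r)
    inside = np.abs(r) < c
    w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
    return w

def robust_estimate(obs, c=4.685, iters=20):
    """Iteratively reweighted least squares for a scalar location x:
    residuals r_i = obs_i - x have Jacobian J = -1, so the weighted
    Gauss-Newton normal equation reduces to dx = sum(w r) / sum(w)."""
    x = float(np.median(obs))          # robust initial estimate
    for _ in range(iters):
        r = obs - x
        w = tukey_weight(r, c)
        if w.sum() == 0:
            break
        x = x + (w * r).sum() / w.sum()   # Gauss-Newton step with J = -1
    return x
```

With a few gross outliers mixed into observations near 5.0, the Tukey weights drive the outliers' influence to zero and the estimate stays at the inlier location.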
Step 2.3, Plane parameter estimation
The robot localizes itself through the homography relation of ground features, but owing to the influence of non-ground features it is difficult to guarantee positioning accuracy; the present invention therefore adopts a fast non-ground-feature removal technique. A normal-vector estimation model can be built from the cross product of parallel vectors; experiments show, however, that because corner detection carries a certain error, the ideal cross-product condition cannot be met and such a model fits poorly. The projection error r can instead be used as the criterion for identifying non-ground-feature information, but as the distance between non-ground features and the observation point decreases, the discrimination rate of that method drops markedly. Compared with the above methods, the present invention, on the basis of the initial pose estimate, derives the normal estimation model directly from the homography matrix, and judges whether a corner is ground-feature information from the difference between its result and the ground-plane equation parameters. According to the planar homography relation, the plane-parameter n_d estimation model can be established:
n_d* = arg min_{n_d} Σ_i ||r_i(n_d)||^2 (18)
Wherein r(n_d) = π(H(n_d) u_ref) − u_p is the homography projection error function. The solution procedure is similar to that of the pose estimation: on the basis of the small-displacement assumption, the solving model of formula (18) can be established from the first-order Taylor expansion of the error function;
Now the plane parameters are obtained by iteratively solving (18):
(Σ_i J_i^T J_i) Δn_d = −Σ_i J_i^T r_i (19)
Wherein J_i = ∂r_i/∂n_d.
Step 3, Pose estimation
The main function of the pose estimation module is the real-time update of the camera pose. Its specific implementation process is as follows. After the initialization work is complete, a pose estimation model is established according to the projection error; its mathematical description is:
μ* = arg min_μ Σ_j Obj( ||e_j|| / σ_T, σ_T ) (20)
Wherein e_j is the projection error, Obj(·, σ_T) is the Tukey biweight objective function, σ_T is the unbiased estimate of the standard deviation of feature-point matching, and μ is the six-element vector describing the camera pose;
For the failure cases that arise in tracking, a feature observation ratio is proposed to assess the tracking quality of each image frame. If the ratio is lower than a set threshold, the tracking quality is considered poor: tracking continues, but the system does not add the frame to the map. If the feature observation ratio stays below the threshold over several consecutive frames, tracking is considered lost, and the tracking process is re-initialized. Through the above steps an accurate pose estimate is achieved and the tracking process proceeds normally;
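A possible shape for the feature-observation-ratio bookkeeping (all threshold values and names here are illustrative assumptions; the patent does not specify them):

```python
def tracking_status(n_observed, n_attempted, good_ratio=0.3,
                    lost_frames=0, max_lost_frames=5):
    """Assess per-frame tracking quality by the fraction of map features
    successfully observed.  Returns (status, lost_frames) where status is
    'good' (frame may become a key frame), 'poor' (keep tracking but do
    not add the frame to the map) or 'lost' (re-initialize tracking)."""
    ratio = n_observed / max(n_attempted, 1)
    if ratio >= good_ratio:
        return 'good', 0                 # quality fine; reset lost counter
    lost_frames += 1
    if lost_frames >= max_lost_frames:
        return 'lost', lost_frames       # too many bad frames in a row
    return 'poor', lost_frames
```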
Step 4, map building
Step 4.1, map initialization
The system uses ground features and the standard stereo camera algorithm model to build the initialization map of the current environment. During map initialization, FAST corner matching relations in the images are used, combined with the RANSAC algorithm and the initial pose estimation method based on ground features; triangulation computes the three-dimensional coordinates of the current feature points, the starting point is taken as the origin of the global world coordinate system, and the initialization of the sparse 3D map is completed;
In the initial mapping stage, the detailed process of obtaining two key frames is as follows. The camera is mounted on the mobile robot, and the initialization procedure starts after the camera has been calibrated against the ground. During initialization the mobile robot begins to move; the camera captures corner information in the current scene and establishes associations, and an initial estimate of the current pose is obtained by solving formula (12). When this initial estimate exceeds the set threshold d_0, the pose estimation method based on ground features in step 2.2 is used to obtain an accurate estimate of the current pose;
Step 4.2, Key-frame updating and epipolar search
At map initialization, the map contains only two key frames and can describe only a small nearby volume; when the camera leaves the initial position, new key frames and map features must be added. A new key frame is added automatically when the number of image frames since the current key frame exceeds a threshold, the camera tracking quality is good, and the camera is at least a minimum distance from the nearest map key point. Owing to the real-time constraint, the tracker may estimate only a subset of the visible features in a frame, so the mapping thread must re-project and measure the remaining map features;
When updating key frames, the system first performs Shi-Tomasi evaluation on all corners in the newly added key frame to obtain the corners with salient features; salient points near successfully observed feature points are discarded, and the remaining salient points become candidate map nodes. Because a new map node needs depth information that cannot be obtained from a single image, the nearest key frame is chosen as the second view, and epipolar search with block matching is used to establish sub-pixel-accuracy mapping relations between feature points. In the second view the zero-mean SSD method is adopted, and accurate camera relocation is realized in combination with the pose estimation model; finally the matched points are projected into space to generate the three-dimensional map of the current global environment;
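The zero-mean SSD block matching used in the epipolar search can be sketched as follows (patch size, candidate generation and function names are assumptions; in the real system the candidate positions come from sampling the epipolar line in the second view):

```python
import numpy as np

def zmssd(a, b):
    """Zero-mean sum of squared differences between equal-size patches;
    subtracting each patch's mean makes the score invariant to a
    constant brightness offset between the two views."""
    a = a - a.mean()
    b = b - b.mean()
    return float(((a - b) ** 2).sum())

def best_match(patch, img, candidates, half=3):
    """Among candidate (y, x) centres in img, find the one whose
    (2*half+1)-square patch minimises the zero-mean SSD against the
    reference patch."""
    best, best_score = None, np.inf
    for (y, x) in candidates:
        cand = img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        if cand.shape != patch.shape:
            continue                      # skip candidates near the border
        s = zmssd(patch.astype(float), cand)
        if s < best_score:
            best, best_score = (y, x), s
    return best, best_score
```

The test adds a constant brightness offset to the search image; the zero-mean score still finds the true position with an essentially zero score.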
Step 4.3, Optimization
Local and global LM (Levenberg-Marquardt) bundle adjustment are used to realize globally consistent optimization of the current map and to improve robustness. The mathematical description of the bundle adjustment is:
{μ_i, p_j} = arg min Σ_i Σ_j Obj( ||e_ji|| / σ_ji, σ_T ) (21)
Wherein σ_ji is the unbiased estimate of the FAST feature-point matching standard deviation in the i-th key frame, μ_i is the six-element pose vector of the i-th key frame, and p_j is a point in the global map;
Through continual bundle adjustment, the cost function in (21) is minimized. Global bundle adjustment adjusts the poses of all key frames and the positions of all map points; the inherent sparseness of the structure-from-motion problem lets the complexity of decomposing the global matrix fall by a cubic order, from O((N+M)^3) to O(N^3). However, constrained by the outer products of the point vectors that must be regenerated in most re-estimation cases, this computation still grows significantly as the map grows;
To address this problem, local bundle adjustment is introduced, in which only a subset of the key-frame poses is adjusted; the objective function then becomes:
{μ_{i∈X}, p_{j∈Z}} = arg min Σ_{i∈X∪Y} Σ_{j∈Z} Obj( ||e_ji|| / σ_ji, σ_T )
X is the set of key frames to be adjusted, Y is the fixed-frame set, and Z is the map-point subset. Set X consists of the newest key frame and the 4 key frames closest to it in the map; the points visible in all key frames of X constitute set Z; finally, Y contains all key frames that have measurement data for points in Z. Thus local bundle adjustment optimizes the poses of the newest and nearest key frames while updating all the map points they see, and the complexity is significantly reduced, to no more than O(NM).
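The selection of the sets X, Y and Z for local bundle adjustment can be sketched as follows (here the "closest" key frames are simplified to the most recent ones, an assumption made purely for illustration):

```python
def local_ba_sets(keyframes, observations, n_recent=5):
    """Select the key-frame/point sets for local bundle adjustment:
    X: the newest key frame plus its nearest key frames (poses adjusted),
    Z: all map points visible in X,
    Y: the remaining key frames that observe any point of Z (poses fixed).
    `observations` maps key-frame id -> set of visible map-point ids."""
    X = set(keyframes[-n_recent:])   # newest + nearest (most recent, here)
    Z = set().union(*(observations[k] for k in X))
    Y = {k for k in keyframes
         if k not in X and observations[k] & Z}
    return X, Y, Z
```

Frames in Y anchor the optimization: their measurements of Z constrain the adjusted poses and points without being re-estimated themselves.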
2. The PTAM improvement method based on ground features of an intelligent robot according to claim 1, characterized in that:
The method specifically comprises the following steps:
Step 1, parameter correction
Step 1.1, Parameter definition
Build the robot pose representation from the relation between the robot coordinate system and the world coordinate system, and determine the plane calibration parameters from the pose relation between the camera and the target plane;
Step 1.2, camera calibration
Adopt the FOV model to rectify the monocular camera: map image pixel coordinates onto the normalized coordinate plane and, combined with the camera intrinsic matrix K, realize image distortion rectification;
Step 2, Initialization based on ground features
Step 2.1, signature tracking
Obtain environment texture information, extract features, and establish corner-feature associations;
Step 2.2, pose initialization
Establish the homography relation and the pose-initialization estimation model, and realize pose initialization;
Step 2.3, Plane parameter estimation
Step 3, Pose estimation
On the basis of the completed initialization, realize the initial camera pose estimate according to the pose estimation model based on ground features; further, PTAM, according to the initial camera pose, adopts epipolar search to establish sub-pixel-accuracy corner matching relations in the image pyramid, substitutes them back into the pose estimation model, and realizes accurate camera relocation;
Step 4, map building
Step 4.1, map initialization
The system uses the standard stereo camera algorithm model to build the initialization map of the current environment;
Step 4.2, Key-frame updating and epipolar search
Add new key frames, use epipolar search and block matching to establish sub-pixel-accuracy feature-point mapping relations, and realize accurate camera relocation in combination with the pose estimation model; finally project the matched points into space to complete creation of the current global three-dimensional map;
Step 4.3, Optimization
Use global and local bundle adjustment to realize consistency optimization of the map.
CN201510025810.7A 2015-01-19 2015-01-19 PTAM improvement method based on ground features of an intelligent robot Expired - Fee Related CN104732518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510025810.7A CN104732518B (en) 2015-01-19 2015-01-19 PTAM improvement method based on ground features of an intelligent robot

Publications (2)

Publication Number Publication Date
CN104732518A true CN104732518A (en) 2015-06-24
CN104732518B CN104732518B (en) 2017-09-01

Family

ID=53456386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510025810.7A Expired - Fee Related CN104732518B (en) 2015-01-19 2015-01-19 PTAM improvement method based on ground features of an intelligent robot

Country Status (1)

Country Link
CN (1) CN104732518B (en)

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN106548486A (en) * 2016-11-01 2017-03-29 浙江大学 A kind of unmanned vehicle location tracking method based on sparse visual signature map
CN106570904A (en) * 2016-10-25 2017-04-19 大连理工大学 Multi-target relative posture recognition method based on Xtion camera
CN106600641A (en) * 2016-12-07 2017-04-26 常州工学院 Embedded type visual servo control method based on multi-feature fusion
US20170154429A1 (en) * 2015-11-27 2017-06-01 Fujitsu Limited Estimation device and method
CN106885574A (en) * 2017-02-15 2017-06-23 北京大学深圳研究生院 A kind of monocular vision robot synchronous superposition method based on weight tracking strategy
CN107272454A (en) * 2017-06-19 2017-10-20 中国人民解放军国防科学技术大学 A kind of real time human-machine interaction method based on virtual reality
CN107291093A (en) * 2017-07-04 2017-10-24 西北工业大学 Unmanned plane Autonomous landing regional selection method under view-based access control model SLAM complex environment
CN107358624A (en) * 2017-06-06 2017-11-17 武汉几古几古科技有限公司 The dense positioning immediately of monocular and map reconstruction method
CN107687850A (en) * 2017-07-26 2018-02-13 哈尔滨工业大学深圳研究生院 A kind of unmanned vehicle position and orientation estimation method of view-based access control model and Inertial Measurement Unit
CN107767450A (en) * 2017-10-31 2018-03-06 南京维睛视空信息科技有限公司 It is a kind of that drawing method is built based on sparse slam in real time
CN107808395A (en) * 2017-10-31 2018-03-16 南京维睛视空信息科技有限公司 A kind of indoor orientation method based on SLAM
CN107862720A (en) * 2017-11-24 2018-03-30 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on the fusion of more maps
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN107885871A (en) * 2017-11-24 2018-04-06 南京华捷艾米软件科技有限公司 Synchronous superposition method, system, interactive system based on cloud computing
CN108090958A (en) * 2017-12-06 2018-05-29 上海阅面网络科技有限公司 A kind of robot synchronously positions and map constructing method and system
CN108109169A (en) * 2017-12-12 2018-06-01 深圳市神州云海智能科技有限公司 A kind of position and orientation estimation method, device and robot based on rectangle mark
CN108133496A (en) * 2017-12-22 2018-06-08 北京工业大学 A kind of dense map creating method based on g2o Yu random fern
CN108171787A (en) * 2017-12-18 2018-06-15 桂林电子科技大学 A kind of three-dimensional rebuilding method based on the detection of ORB features
CN108253962A (en) * 2017-12-18 2018-07-06 中北智杰科技(北京)有限公司 New energy pilotless automobile localization method under a kind of low light environment
CN108615244A (en) * 2018-03-27 2018-10-02 中国地质大学(武汉) A kind of image depth estimation method and system based on CNN and depth filter
CN108765326A (en) * 2018-05-18 2018-11-06 南京大学 A kind of synchronous superposition method and device
CN108780577A (en) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 Image processing method and equipment
CN109100731A (en) * 2018-07-17 2018-12-28 重庆大学 A kind of method for positioning mobile robot based on laser radar scanning matching algorithm
CN109102525A (en) * 2018-07-19 2018-12-28 浙江工业大学 A kind of mobile robot follow-up control method based on the estimation of adaptive pose
CN109141393A (en) * 2018-07-02 2019-01-04 北京百度网讯科技有限公司 Method for relocating, equipment and storage medium
CN109215059A (en) * 2018-10-16 2019-01-15 西安建筑科技大学 Local data's correlating method of moving vehicle tracking in a kind of video of taking photo by plane
CN109341707A (en) * 2018-12-03 2019-02-15 南开大学 Mobile robot three-dimensional map construction method under circumstances not known
CN109544629A (en) * 2018-11-29 2019-03-29 南京人工智能高等研究院有限公司 Camera pose determines method and apparatus and electronic equipment
WO2019062651A1 (en) * 2017-09-26 2019-04-04 上海思岚科技有限公司 Localization and mapping method and system
CN109643367A (en) * 2016-07-21 2019-04-16 御眼视觉技术有限公司 Crowdsourcing and the sparse map of distribution and lane measurement for autonomous vehicle navigation
CN109741306A (en) * 2018-12-26 2019-05-10 北京石油化工学院 Image processing method applied to hazardous chemical storehouse stacking
CN109815847A (en) * 2018-12-30 2019-05-28 中国电子科技集团公司信息科学研究院 A kind of vision SLAM method based on semantic constraint
CN110044358A (en) * 2019-04-29 2019-07-23 清华大学 Method for positioning mobile robot based on live field wire feature
CN110223354A (en) * 2019-04-30 2019-09-10 惠州市德赛西威汽车电子股份有限公司 A kind of Camera Self-Calibration method based on SFM three-dimensional reconstruction
CN110264509A (en) * 2018-04-27 2019-09-20 腾讯科技(深圳)有限公司 Determine the method, apparatus and its storage medium of the pose of image-capturing apparatus
CN110389349A (en) * 2018-04-17 2019-10-29 北京京东尚科信息技术有限公司 Localization method and device
CN110660098A (en) * 2018-06-28 2020-01-07 北京京东尚科信息技术有限公司 Positioning method and device based on monocular vision
CN110858403A (en) * 2018-08-22 2020-03-03 杭州萤石软件有限公司 Method for determining scale factor in monocular vision reconstruction and mobile robot
CN110874100A (en) * 2018-08-13 2020-03-10 北京京东尚科信息技术有限公司 System and method for autonomous navigation using visual sparse maps
CN110992487A (en) * 2019-12-10 2020-04-10 南京航空航天大学 Rapid three-dimensional map reconstruction device and reconstruction method for hand-held airplane fuel tank
CN111102984A (en) * 2020-01-21 2020-05-05 中南民族大学 Visualized mountain road navigation method, device, equipment and storage medium
CN111121774A (en) * 2020-01-14 2020-05-08 上海曼恒数字技术股份有限公司 Infrared positioning camera capable of detecting self posture in real time
CN111157009A (en) * 2020-04-02 2020-05-15 深圳司南数据服务有限公司 Indoor positioning navigation creating method and system based on mobile terminal and AR (augmented reality) intelligence
CN111230886A (en) * 2020-03-05 2020-06-05 浙江理工大学 Industrial robot kinematics parameter calibration system and method with tolerance capability
CN111568305A (en) * 2019-02-18 2020-08-25 北京奇虎科技有限公司 Method and device for processing relocation of sweeping robot and electronic equipment
CN111780763A (en) * 2020-06-30 2020-10-16 杭州海康机器人技术有限公司 Visual positioning method and device based on visual map
CN111982133A (en) * 2019-05-23 2020-11-24 北京地平线机器人技术研发有限公司 Method and device for positioning vehicle based on high-precision map and electronic equipment
CN112116651A (en) * 2020-08-12 2020-12-22 天津(滨海)人工智能军民融合创新中心 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN112783995A (en) * 2020-12-31 2021-05-11 杭州海康机器人技术有限公司 V-SLAM map checking method, device and equipment
CN113129366A (en) * 2020-01-10 2021-07-16 北京字节跳动网络技术有限公司 Monocular SLAM (simultaneous localization and mapping) initialization method and device and electronic equipment
CN113156948A (en) * 2021-04-19 2021-07-23 浙江工业大学 Ground and obstacle distinguishing and identifying method of two-wheeled self-balancing robot
CN113211449A (en) * 2021-06-10 2021-08-06 哈工大机器人集团股份有限公司 Robot calibration method and system based on path analysis
US11288883B2 (en) 2019-07-23 2022-03-29 Toyota Research Institute, Inc. Autonomous task performance based on visual embeddings
CN114581517A (en) * 2022-02-10 2022-06-03 北京工业大学 Improved VINS method for complex illumination environment
WO2022134057A1 (en) * 2020-12-25 2022-06-30 Intel Corporation Re-localization of robot
CN115451920A (en) * 2022-10-27 2022-12-09 南京航空航天大学 Relative pose measurement method for unmanned autonomous landing
JP2023509153A (en) * 2020-10-30 2023-03-07 珠海一微半導体股▲ふん▼有限公司 Local point cloud map construction method and visual robot

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993798B (en) * 2019-04-09 2021-05-28 上海肇观电子科技有限公司 Method and equipment for detecting motion trail by multiple cameras and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020048395A1 (en) * 2000-08-09 2002-04-25 Harman Philip Victor Image conversion and encoding techniques
JP2009093412A (en) * 2007-10-09 2009-04-30 Fujitsu Ltd Image measurement method and image measurement device
CN102435188A (en) * 2011-09-15 2012-05-02 南京航空航天大学 Monocular vision/inertia autonomous navigation method for indoor environment
CN102519481A (en) * 2011-12-29 2012-06-27 中国科学院自动化研究所 Implementation method of binocular vision speedometer
CN104050671A (en) * 2014-06-25 2014-09-17 西安石油大学 Oil well pipe sleeve damage image edge detection method based on grey correlation analysis and Zernike moment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JIA SONGMIN等: "Mobile Robot 3D Map Building Based on Laser Ranging and Stereovision", 《PROCEEDINGS OF THE 2011 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION》 *
SHENG JINBO等: "Mobile Robot Localization and Map Building Based on Laser Ranging and PTAM", 《PROCEEDINGS OF THE 2011 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION》 *
贾松敏等: "RTM框架下基于分层拓扑结构的多机器人系统地图拼接", 《机器人》 *
贾松敏等: "基于RGB-D相机的移动机器人三维SLAM", 《华中科技大学学报(自然科学版)》 *
贾松敏等: "基于变分模型的单目视觉三维重建方法", 《光学学报》 *

CN112783995B (en) * 2020-12-31 2022-06-03 杭州海康机器人技术有限公司 V-SLAM map checking method, device and equipment
CN113156948B (en) * 2021-04-19 2022-06-28 浙江工业大学 Ground and obstacle distinguishing and identifying method of two-wheeled self-balancing robot
CN113156948A (en) * 2021-04-19 2021-07-23 浙江工业大学 Ground and obstacle distinguishing and identifying method of two-wheeled self-balancing robot
CN113211449A (en) * 2021-06-10 2021-08-06 哈工大机器人集团股份有限公司 Robot calibration method and system based on path analysis
CN114581517A (en) * 2022-02-10 2022-06-03 北京工业大学 Improved VINS method for complex illumination environment
CN115451920A (en) * 2022-10-27 2022-12-09 南京航空航天大学 Relative pose measurement method for unmanned autonomous landing
CN115451920B (en) * 2022-10-27 2023-03-14 南京航空航天大学 Relative pose measurement method for unmanned autonomous landing

Also Published As

Publication number Publication date
CN104732518B (en) 2017-09-01

Similar Documents

Publication Publication Date Title
CN104732518A (en) PTAM improvement method based on ground characteristics of intelligent robot
CN108802785B (en) Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN110428467B (en) Robot positioning method combining camera, imu and laser radar
Ramalingam et al. Skyline2gps: Localization in urban canyons using omni-skylines
CN109544636A (en) Fast monocular visual odometry navigation and localization method fusing the feature-point method and the direct method
CN108534782B (en) Binocular vision system-based landmark map vehicle instant positioning method
CN110070615A (en) Panoramic vision SLAM method based on multi-camera collaboration
CN108682027A (en) VSLAM implementation method and system based on point and line feature fusion
CN107677274B (en) Real-time computation method for UAV autonomous landing navigation information based on binocular vision
CN109583409A (en) Cognitive-map-oriented intelligent vehicle localization method and system
CN110807809A (en) Light-weight monocular vision positioning method based on point-line characteristics and depth filter
CN103954275A (en) Lane line detection and GIS map information development-based vision navigation method
CN113658337B (en) Multi-mode odometer method based on rut lines
CN108801274A (en) Landmark map generation method fusing binocular vision and differential satellite positioning
CN105352509A (en) Unmanned aerial vehicle motion target tracking and positioning method under geographic information space-time constraint
CN110033411A (en) Efficient stitching method for highway construction scene panoramic images based on UAVs
CN112484746B (en) Monocular vision auxiliary laser radar odometer method based on ground plane
CN109579825A (en) Robot positioning system and method based on binocular vision and convolutional neural networks
CN112833892B (en) Semantic mapping method based on track alignment
CN112669354A (en) Multi-camera motion state estimation method based on vehicle incomplete constraint
CN112767546B (en) Binocular image-based visual map generation method for mobile robot
CN111161334A (en) Semantic map construction method based on deep learning
CN114088081A (en) Map construction method for accurate positioning based on multi-segment joint optimization
Huang et al. Metric monocular localization using signed distance fields
Jin et al. Beyond learning: Back to geometric essence of visual odometry via fusion-based paradigm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170901

Termination date: 20200119
