CN109815966A - Mobile robot visual odometry implementation method based on an improved SIFT algorithm - Google Patents

Mobile robot visual odometry implementation method based on an improved SIFT algorithm

Info

Publication number
CN109815966A
CN109815966A (Application CN201910139665.3A)
Authority
CN
China
Prior art keywords
feature point
mobile robot
point
matched point pairs
binarization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910139665.3A
Other languages
Chinese (zh)
Inventor
郑恩辉
王谈谈
刘政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN201910139665.3A priority Critical patent/CN109815966A/en
Publication of CN109815966A publication Critical patent/CN109815966A/en
Pending legal-status Critical Current

Links

Abstract

The invention discloses a mobile robot visual odometry implementation method based on an improved SIFT algorithm. A depth camera mounted on the mobile robot collects information from the field-of-view environment in front of the robot, obtaining the two-dimensional image information and three-dimensional coordinate information of the spatial points in that environment. The improved SIFT feature matching algorithm is then applied to obtain preliminarily matched point pairs; a random sample consensus (RANSAC) algorithm rejects the mismatched points among the candidate feature points to obtain accurately matched point pairs; and the motion parameters of the mobile robot are solved from the accurately matched point pairs. Because the method of the present invention collects information with a depth camera, the three-dimensional information of spatial points can be acquired directly, and because feature point matching uses the improved SIFT feature matching algorithm, the efficiency and accuracy of mobile robot localization are significantly improved.

Description

Mobile robot visual odometry implementation method based on an improved SIFT algorithm
Technical field
The present invention relates to the technical field of autonomous navigation of mobile robots, and in particular to a mobile robot visual odometry implementation method based on an improved SIFT algorithm.
Background technique
In recent years, mobile robots have developed rapidly and come into wide use, penetrating every field of life. Among traditional self-localization methods, those based on wheel odometry drift because the wheels slip; methods based on sonar and ultrasonic sensing interfere with each other because both are active sensors; and GPS-based methods cannot localize in enclosed areas where the signal is weak. Owing to the limitations of the relevant technologies, there is as yet no relatively mature and stable solution for mobile robot self-localization.
Visual odometry determines the self-localization of a mobile robot by acquiring and analyzing image sequences, which compensates for the problems of traditional self-localization methods and enhances the efficiency and precision of mobile robot localization. With the release of low-cost depth cameras, depth cameras are increasingly used for mobile robot localization and navigation. Unlike traditional visual odometry, a depth camera provides both the two-dimensional image information and the three-dimensional coordinate information of spatial points, avoiding the monocular odometer's need for multiple coordinate transformations and heavy computation to recover the three-dimensional coordinates of spatial points, and improving the speed and precision of the computation.
Summary of the invention
To overcome the shortcomings and deficiencies of the prior art, the present invention provides a mobile robot visual odometry implementation method based on an improved SIFT algorithm.
The purpose of the present invention is achieved by the technical solution of the following steps:
1) A depth camera mounted on the mobile robot collects information from the field-of-view environment in front of the mobile robot, obtaining the two-dimensional image information and three-dimensional coordinate information of the spatial points in that environment;
2) The improved SIFT feature matching algorithm is applied to the two-dimensional image information obtained in step 1), yielding preliminarily matched point pairs;
3) The random sample consensus (RANSAC) algorithm is applied to the preliminarily matched feature point pairs obtained in step 2) to reject the mismatched points among the candidate feature points, yielding accurately matched point pairs;
4) The kinematic parameters of the mobile robot are solved from the accurately matched point pairs.
Step 2) specifically comprises: first, feature point detection is carried out; second, binarized feature point description of the detected feature points; finally, the feature points of two adjacent images are coarsely matched using the binarized feature point descriptors, yielding the preliminarily matched point pairs.
The improved SIFT feature matching algorithm of step 2) specifically comprises the following steps:
2.1) Feature point detection: a two-dimensional image space is established from the two-dimensional image; the image is then down-sampled and convolved with a Gaussian kernel to obtain sampled images of different sizes, which constitute the Gaussian scale space; the difference of every two adjacent layers of sampled images in the Gaussian scale space yields the DoG (difference-of-Gaussians) scale images, which constitute the DoG scale space; and the extreme points (blobs) detected in the DoG scale space serve as feature points. When testing each pixel of the DoG scale space for an extremum, the pixel is compared with its 8 neighboring pixels in the DoG scale image of the same scale and with the 9 pixels in each of the DoG scale images of the adjacent scales above and below, i.e. with 26 pixels in total, which ensures that extreme points are detected in both the scale space and the two-dimensional image space;
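The 26-neighbor extremum test of step 2.1) can be sketched as follows, as a minimal Python/NumPy illustration; it assumes the DoG stack has already been computed, and the strict comparison and interior-pixel scan are implementation choices not spelled out in the patent:

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """Strictly compare pixel (y, x) of DoG layer s with its 26 neighbours:
    8 in the same layer plus 9 in each adjacent layer (9 * 2 + 8 = 26)."""
    val = dog[s, y, x]
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].ravel()  # 3x3x3 = 27
    others = np.delete(cube, 13)  # drop the centre pixel itself
    return val > others.max() or val < others.min()

def detect_dog_keypoints(dog):
    """Scan the interior pixels of a (layers, H, W) DoG stack for extrema."""
    layers, h, w = dog.shape
    return [(s, y, x)
            for s in range(1, layers - 1)
            for y in range(1, h - 1)
            for x in range(1, w - 1)
            if is_scale_space_extremum(dog, s, y, x)]
```

A production SIFT detector would additionally refine extrema to sub-pixel positions and reject low-contrast and edge responses, which the patent does not discuss.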
2.2) Feature point description of the detected feature points yields the gradient feature vector of each feature point; the gradient feature vector is then binarized to obtain the binarized gradient feature vector, according to the formula:
b_i = 1 if f_i ≥ a; b_i = 0 if f_i < a,
where a is the binarization threshold, f denotes the gradient feature vector of a feature point, f = [f_1, f_2, ..., f_128], f_i denotes the i-th gradient component of the vector, f_i ∈ f, and b_i denotes the i-th gradient component after binarization;
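The binarization of step 2.2) reduces to thresholding the 128-dimensional gradient feature vector; a minimal sketch follows. The mean-based default threshold is an assumption, since the patent does not specify how a is chosen:

```python
import numpy as np

def binarize_descriptor(f, a=None):
    """Binarize a SIFT gradient feature vector f: b_i = 1 if f_i >= a else 0.
    When no threshold is given, fall back to the mean of f as a data-driven
    choice (an assumption; the patent leaves the threshold a unspecified)."""
    f = np.asarray(f, dtype=float)
    if a is None:
        a = f.mean()
    return (f >= a).astype(np.uint8)
```

Storing descriptors as 0/1 vectors is what shortens the matching computation: distance evaluations reduce to comparisons on bits rather than on floating-point histogram values.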
2.3) After obtaining the feature points and binarized descriptors of each image through steps 2.1)-2.2), the feature points of two adjacent frames are coarsely matched to obtain candidate feature points: for the two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of feature points serves as the decision metric of feature point similarity; the pair of feature points with the nearest Euclidean distance and the pair with the second-nearest Euclidean distance are taken; among these two pairs, if the nearest Euclidean distance divided by the second-nearest Euclidean distance is less than a preset ratio threshold, the pair with the nearest Euclidean distance is judged similar and taken as a matched point pair, and the pair with the second-nearest Euclidean distance is discarded;
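The coarse matching of step 2.3) is a nearest/second-nearest ratio test; a brute-force sketch follows. The 0.8 default is an assumed value for the "preset ratio threshold", which the patent does not quantify:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """For each binarized descriptor in frame A, find the nearest and
    second-nearest descriptors in frame B by Euclidean distance, and keep
    the pair only if nearest < ratio * second_nearest.
    Requires at least two descriptors in desc_b."""
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]  # nearest and second-nearest in B
        if row[j1] < ratio * row[j2]:
            matches.append((i, j1))
    return matches
```

For 0/1 vectors the squared Euclidean distance equals the Hamming distance, so an optimized implementation could use bitwise XOR and popcount instead of floating-point norms.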
2.4) Step 2.3) is repeated until all matched point pairs satisfying the condition in the two frames are obtained.
Processing with the above improved SIFT feature matching algorithm greatly reduces the computation of the data representation, guarantees the accuracy of the obtained matched points, and shortens the matching computation time.
In step 4), the kinematic parameters of the mobile robot are calculated as follows:
4.1) The following motion parameter equation of the mobile robot is first established:
P_qj = R·P_pj + T
4.2) The residual sum-of-squares objective function is then constructed, and the rotation matrix R and translation vector T that minimize it are solved by least squares:
min_{R,T} Σ_j ||P_qj - (R·P_pj + T)||²
where P_pj and P_qj are the three-dimensional coordinates of the two feature points of the j-th matched point pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); subscript p denotes the previous frame, subscript q denotes the subsequent frame, j is the index of the accurately matched point pair, and x, y, z denote the three-dimensional coordinates;
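The least-squares problem of step 4.2) has a standard closed-form solution via centroid subtraction and SVD. The patent does not name a solver, so the following is one common implementation, not necessarily the inventors':

```python
import numpy as np

def solve_rt(P_p, P_q):
    """Least-squares solution of min_{R,T} sum_j ||P_qj - (R P_pj + T)||^2
    for matched 3-D point pairs (rows of P_p, P_q), via the centroid + SVD
    closed form."""
    P_p = np.asarray(P_p, dtype=float)
    P_q = np.asarray(P_q, dtype=float)
    cp, cq = P_p.mean(axis=0), P_q.mean(axis=0)
    H = (P_p - cp).T @ (P_q - cq)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # sign correction so R is a proper rotation, det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = cq - R @ cp
    return R, T
```

At least three non-collinear point pairs are needed for the rotation to be uniquely determined, which is why the preceding RANSAC step matters: a single surviving mismatch biases both R and T.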
The depth camera is an Intel RealSense D435 depth camera.
The present invention uses only an Intel RealSense D435 depth camera for information collection, and can directly acquire the three-dimensional coordinate information of spatial points.
Unlike traditional visual odometry, in which the SIFT algorithm consumes a large amount of time in the feature descriptor stage and thus suffers from poor real-time performance, the present invention performs feature point matching with the improved SIFT feature matching algorithm, binarizing the SIFT feature vector and thereby significantly improving the efficiency of mobile robot localization.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The present invention obtains the two-dimensional image information and three-dimensional coordinate information of spatial points with a RealSense D435 depth camera, avoiding the monocular odometer's need for multiple coordinate transformations and heavy computation to recover the three-dimensional coordinates of spatial points, and improving the speed and precision of the computation.
2. The present invention binarizes the SIFT feature vector, which, while guaranteeing detection accuracy, solves the problem that conventional visual odometry using the SIFT algorithm consumes a large amount of time in the feature descriptor stage and thus suffers from poor real-time performance.
Description of the drawings
Fig. 1 is the logic flow chart of the method of the present invention.
Specific embodiment
The invention is further described below with reference to the accompanying drawing and a specific embodiment.
As shown in Fig. 1, the specific embodiment of the present invention is as follows:
1) An Intel RealSense D435 depth camera mounted on the mobile robot collects information from the field-of-view environment in front of the mobile robot, obtaining the two-dimensional image information and three-dimensional coordinate information of the spatial points in that environment;
2) The improved SIFT feature matching algorithm is applied to the two-dimensional image information obtained in step 1), yielding preliminarily matched point pairs:
2.1) Feature point detection: a two-dimensional image space is established from the two-dimensional image; the image is then down-sampled and convolved with a Gaussian kernel to obtain sampled images of different sizes, which constitute the Gaussian scale space; the difference of every two adjacent layers of sampled images in the Gaussian scale space yields the DoG scale images, which constitute the DoG scale space; and the extreme points (blobs) detected in the DoG scale space serve as feature points. When testing each pixel of the DoG scale space for an extremum, the pixel is compared with its 8 neighboring pixels in the DoG scale image of the same scale and with the 9 pixels in each of the DoG scale images of the adjacent scales above and below, i.e. with 26 pixels in total, which ensures that extreme points are detected in both the scale space and the two-dimensional image space;
2.2) Feature point description of the detected feature points yields the gradient feature vector of each feature point; the gradient feature vector is then binarized to obtain the binarized gradient feature vector, according to the formula:
b_i = 1 if f_i ≥ a; b_i = 0 if f_i < a,
where a is the binarization threshold, f denotes the gradient feature vector of a feature point, f = [f_1, f_2, ..., f_128], f_i denotes the i-th gradient component of the vector, f_i ∈ f, and b_i denotes the i-th gradient component after binarization;
2.3) After obtaining the feature points and binarized descriptors of each image through steps 2.1)-2.2), the feature points of two adjacent frames are coarsely matched to obtain candidate feature points: for the two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of feature points serves as the decision metric of feature point similarity; the pair of feature points with the nearest Euclidean distance and the pair with the second-nearest Euclidean distance are taken; among these two pairs, if the nearest Euclidean distance divided by the second-nearest Euclidean distance is less than a preset ratio threshold, the pair with the nearest Euclidean distance is judged similar and taken as a matched point pair, and the pair with the second-nearest Euclidean distance is discarded;
2.4) Step 2.3) is repeated until all matched point pairs satisfying the condition in the two frames are obtained.
3) The random sample consensus (RANSAC) algorithm is applied to the preliminarily matched feature point pairs obtained in step 2) to reject the mismatched points among the candidate feature points, yielding accurately matched point pairs;
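The RANSAC rejection of step 3) can be sketched as follows for 3-D point pairs. The iteration count, inlier tolerance, and 3-point minimal sample are assumed parameters; the patent gives none of them:

```python
import numpy as np

def _fit_rt(P, Q):
    """Closed-form least-squares rigid fit (centroid subtraction + SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_filter(P, Q, iters=200, tol=0.05, seed=0):
    """RANSAC rejection of mismatched pairs: repeatedly fit R, T to a random
    minimal sample of 3 pairs and keep the largest consensus set whose
    residual ||q - (R p + T)|| stays below tol."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, T = _fit_rt(P[idx], Q[idx])
        resid = np.linalg.norm(Q - (P @ R.T + T), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best  # boolean mask of accepted (accurately matched) pairs
```

A common refinement, consistent with step 4), is to re-estimate R and T on the final inlier set rather than on the best minimal sample.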
4) The kinematic parameters of the mobile robot are solved from the accurately matched point pairs:
4.1) The following equation is first established:
P_qj = R·P_pj + T
4.2) The residual sum-of-squares objective function is then constructed, and the rotation matrix R and translation vector T that minimize it are solved by least squares:
min_{R,T} Σ_j ||P_qj - (R·P_pj + T)||²
where P_pj and P_qj are the three-dimensional coordinates of the two feature points of the j-th matched point pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); subscript p denotes the previous frame, subscript q denotes the subsequent frame, j is the index of the accurately matched point pair, and x, y, z denote the three-dimensional coordinates.
In this embodiment, three groups of comparative experiments on feature extraction and feature matching were first carried out on two adjacent frames of a sequence acquired by the depth camera; the matching times required by the original SIFT feature matching algorithm and by the improved algorithm are compared in Table 1.
Table 1
As shown in Table 1, the improved SIFT algorithm substantially reduces the time consumed by matching while guaranteeing detection accuracy. It solves the problem that conventional visual odometry using the SIFT matching algorithm consumes a large amount of time and thus suffers from poor real-time performance, and improves the efficiency of mobile robot localization.
This embodiment then solves the motion parameters of the mobile robot for two adjacent frames of a sequence acquired by the depth camera; the solved transformation result is as follows (unit: m):
T = [-0.0351, 0.0423, 0.282]^T
According to the odometer record, the actual rotation angle of the camera is 6°, and the actual movement is x = 0.04 m, y = 0.04 m, z = 0.3 m. From the above calculation result, the maximum absolute error is 0.018 m and the relative error is 5%; the error is within the allowed range, which shows that the method of the present invention is relatively accurate for mobile robot localization.
The above is merely a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made by a person familiar with the art within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its improved concept, shall be covered by the protection scope of the present invention.

Claims (5)

1. A mobile robot visual odometry implementation method based on an improved SIFT algorithm, characterized by comprising the following steps:
1) A depth camera mounted on the mobile robot collects information from the field-of-view environment in front of the mobile robot, obtaining the two-dimensional image information and three-dimensional coordinate information of the spatial points in that environment;
2) The improved SIFT feature matching algorithm is applied to the two-dimensional image information obtained in step 1), yielding preliminarily matched point pairs;
3) The random sample consensus (RANSAC) algorithm is applied to the preliminarily matched feature point pairs obtained in step 2) to reject the mismatched points among the candidate feature points, yielding accurately matched point pairs;
4) The kinematic parameters of the mobile robot are solved from the accurately matched point pairs.
2. The mobile robot visual odometry implementation method based on an improved SIFT algorithm according to claim 1, characterized in that step 2) specifically comprises: first, feature point detection is carried out; second, binarized feature point description of the detected feature points; finally, the feature points of two adjacent images are coarsely matched using the binarized feature point descriptors, yielding the preliminarily matched point pairs.
3. The mobile robot visual odometry implementation method based on an improved SIFT algorithm according to claim 1 or 2, characterized in that the improved SIFT feature matching algorithm of step 2) specifically comprises the following steps:
2.1) Feature point detection: a two-dimensional image space is established from the two-dimensional image; the image is then down-sampled and convolved with a Gaussian kernel to obtain sampled images of different sizes, which constitute the Gaussian scale space; the difference of every two adjacent layers of sampled images in the Gaussian scale space yields the DoG scale images, which constitute the DoG scale space; and the extreme points (blobs) detected in the DoG scale space serve as feature points. When testing each pixel of the DoG scale space for an extremum, the pixel is compared with its 8 neighboring pixels in the DoG scale image of the same scale and with the 9 pixels in each of the DoG scale images of the adjacent scales above and below, i.e. with 26 pixels in total;
2.2) Feature point description of the detected feature points yields the gradient feature vector of each feature point; the gradient feature vector is then binarized to obtain the binarized gradient feature vector, according to the formula:
b_i = 1 if f_i ≥ a; b_i = 0 if f_i < a,
where a is the binarization threshold, f denotes the gradient feature vector of a feature point, f = [f_1, f_2, ..., f_128], f_i denotes the i-th gradient component of the vector, f_i ∈ f, and b_i denotes the i-th gradient component after binarization;
2.3) After obtaining the feature points and binarized descriptors of each image through steps 2.1)-2.2), the feature points of two adjacent frames are coarsely matched to obtain candidate feature points: for the two adjacent frames, the Euclidean distance between the binarized gradient feature vectors of feature points serves as the decision metric of feature point similarity; the pair of feature points with the nearest Euclidean distance and the pair with the second-nearest Euclidean distance are taken; among these two pairs, if the nearest Euclidean distance divided by the second-nearest Euclidean distance is less than a preset ratio threshold, the pair with the nearest Euclidean distance is judged similar and taken as a matched point pair, and the pair with the second-nearest Euclidean distance is discarded;
2.4) Step 2.3) is repeated until all matched point pairs satisfying the condition in the two frames are obtained.
4. The mobile robot visual odometry implementation method based on an improved SIFT algorithm according to claim 1, characterized in that the kinematic parameters of the mobile robot are calculated as follows:
4.1) The following motion parameter equation of the mobile robot is first established:
P_qj = R·P_pj + T
4.2) The residual sum-of-squares objective function is then constructed, and the rotation matrix R and translation vector T that minimize it are solved by least squares:
min_{R,T} Σ_j ||P_qj - (R·P_pj + T)||²
where P_pj and P_qj are the three-dimensional coordinates of the two feature points of the j-th matched point pair in two adjacent frames of the image sequence, obtained by combining the accurately matched feature points from step 3) with the three-dimensional coordinate information from step 1); subscript p denotes the previous frame, subscript q denotes the subsequent frame, j is the index of the accurately matched point pair, and x, y, z denote the three-dimensional coordinates.
5. The mobile robot visual odometry implementation method based on an improved SIFT algorithm according to claim 1, characterized in that the depth camera is an Intel RealSense D435 depth camera.
CN201910139665.3A 2019-02-26 2019-02-26 Mobile robot visual odometry implementation method based on an improved SIFT algorithm Pending CN109815966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910139665.3A CN109815966A (en) 2019-02-26 2019-02-26 Mobile robot visual odometry implementation method based on an improved SIFT algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910139665.3A CN109815966A (en) 2019-02-26 2019-02-26 Mobile robot visual odometry implementation method based on an improved SIFT algorithm

Publications (1)

Publication Number Publication Date
CN109815966A true CN109815966A (en) 2019-05-28

Family

ID=66607529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910139665.3A Pending CN109815966A (en) 2019-02-26 2019-02-26 Mobile robot visual odometry implementation method based on an improved SIFT algorithm

Country Status (1)

Country Link
CN (1) CN109815966A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079786A (en) * 2019-11-15 2020-04-28 北京理工大学 ROS and Gazebo-based rotating camera feature matching algorithm
CN114111787A (en) * 2021-11-05 2022-03-01 上海大学 Visual positioning method and system based on three-dimensional road sign
CN114111787B (en) * 2021-11-05 2023-11-21 上海大学 Visual positioning method and system based on three-dimensional road sign

Similar Documents

Publication Publication Date Title
CN106651752B (en) Three-dimensional point cloud data registration method and splicing method
CN106767399B (en) The non-contact measurement method of logistics goods volume based on binocular stereo vision and dot laser ranging
CN103093191B (en) A kind of three dimensional point cloud is in conjunction with the object identification method of digital image data
CN105335973B (en) Apply to the visual processing method of strip machining production line
CN111210477B (en) Method and system for positioning moving object
CN104121902B (en) Implementation method of indoor robot visual odometer based on Xtion camera
CN106826815A (en) Target object method of the identification with positioning based on coloured image and depth image
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN106295512B (en) Vision data base construction method and indoor orientation method in more correction lines room based on mark
CN110044374B (en) Image feature-based monocular vision mileage measurement method and odometer
CN108007388A (en) A kind of turntable angle high precision online measuring method based on machine vision
CN103136525B (en) A kind of special-shaped Extended target high-precision locating method utilizing Generalized Hough Transform
CN105574812B (en) Multi-angle three-dimensional data method for registering and device
CN111640158A (en) End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN104766309A (en) Plane feature point navigation and positioning method and device
CN110763204B (en) Planar coding target and pose measurement method thereof
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
CN110223355B (en) Feature mark point matching method based on dual epipolar constraint
Liang et al. Automatic registration of terrestrial laser scanning data using precisely located artificial planar targets
CN110648362B (en) Binocular stereo vision badminton positioning identification and posture calculation method
Nagy et al. Online targetless end-to-end camera-LiDAR self-calibration
CN110310331A (en) A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN111415376A (en) Automobile glass sub-pixel contour extraction method and automobile glass detection method
CN113049184A (en) Method, device and storage medium for measuring mass center
CN109815966A (en) Mobile robot visual odometry implementation method based on an improved SIFT algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190528

WD01 Invention patent application deemed withdrawn after publication