CN109671120A - A monocular SLAM initialization method and system based on a wheel encoder - Google Patents
A monocular SLAM initialization method and system based on a wheel encoder
- Publication number: CN109671120A
- Application number: CN201811324601.2A
- Authority: CN (China)
- Prior art keywords: key frame, wheel encoder, point, SLAM
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Abstract
The invention discloses a monocular SLAM initialization method and system based on a wheel encoder. A first key frame and a second key frame are chosen from the SLAM initialization image sequence and feature matching is performed between them; the coordinates of all matched points in the world coordinate system of the first key frame are computed, and an initial point-cloud map is built. The camera pose of the second key frame is determined from the displacement information of the wheel encoder; during optimization the camera poses of the first and second key frames are held fixed, and no local bundle adjustment is applied to these two frames. By combining the displacement information of the wheel encoder with the point features of the visual sensor's images, the two sources of information complement each other: the camera pose of the second key frame can be determined accurately, the metric scale of the real world is preserved, the positioning accuracy of the camera is improved, and computation is simplified, improving response speed.
Description
Technical field
The present invention relates to initialization for simultaneous localization and mapping, and in particular to a monocular SLAM initialization method and system based on a wheel encoder.
Background art
The goal of simultaneous localization and mapping (SLAM) is to reconstruct the three-dimensional structure of an unknown environment in real time while simultaneously localizing the robot within it. Visual SLAM infers the pose of a visual sensor in the unknown environment from the video it captures and builds a map of the environment at the same time; its theoretical foundation is multi-view geometry. The target of visual SLAM is to recover both the camera motion parameters C1…Cm corresponding to each image frame and the three-dimensional scene structure X1…Xn, where each camera motion parameter Ci contains the position and orientation of the camera, typically expressed as a 3×3 rotation matrix Ri and a three-dimensional position vector Pi.
Mur-Artal et al. proposed ORB-SLAM, which discloses an image-based monocular SLAM initialization method: a camera captures images of the surrounding environment, and initialization is carried out using image feature points. The method comprises the following steps. Feature extraction and matching: ORB feature points are extracted from two consecutive input frames and matched. Scene model selection: both planar and non-planar scenes are considered, and a model is selected according to a computed ratio score; a planar scene corresponds to a homography matrix, a non-planar scene to a fundamental matrix, estimated with the eight-point algorithm. Camera pose computation: the matrix obtained in the previous step (homography or fundamental matrix) is decomposed to yield several candidate pose solutions. Feature-point depth recovery: for each candidate camera pose, the depth of each feature point is recovered by triangulation. Best pose selection: according to the three-dimensional point cloud obtained for each candidate, the best candidate is selected as the camera pose of the second initialization frame. This method can effectively compute the camera pose of the second frame and an initial set of map points, but it also has the following shortcomings. 1. Estimating the essential matrix leaves the scale undetermined: there is an unknown scale factor between the computed values and the true values, and because this factor differs from one initialization to the next, repeated measurements of the same true quantity may deviate considerably from one another. In some applications, such as sweeping robots, this is a serious problem. 2. The error between the measured values and the true values is large.
Summary of the invention
Object of the invention: in view of the above drawbacks of the prior art, the present invention aims to provide a monocular SLAM initialization method and system based on a wheel encoder that resolves the scale ambiguity and achieves high positioning accuracy.
Technical solution: a monocular SLAM initialization method based on a wheel encoder comprises the following steps. A first key frame and a second key frame are chosen from the SLAM initialization image sequence, and feature matching is performed between the first key frame and the second key frame. From the image coordinates of the matched point pairs, the displacement information of the wheel encoder and the camera intrinsics, the coordinates of all matched points in the world coordinate system of the first key frame are computed, and an initial point-cloud map is built. The camera pose of the second key frame is determined from the displacement information of the wheel encoder; during optimization the camera poses of the first and second key frames are held fixed, and no local bundle adjustment is applied to these two frames.
For ease of operation, further, before the first and second key frames are chosen from the initialization image sequence, data acquisition is performed: on the ROS platform, the image node and the wheel-encoder node are made to publish their messages; the SLAM node subscribes to the messages published by these two nodes; after the SLAM node receives the image information and the wheel-encoder displacement information, it associates each image with the corresponding displacement.
Further, feature extraction is performed before feature matching; specifically, ORB features are extracted from the image: every pixel of the image is traversed and tested against the FAST corner criterion, all key points satisfying the criterion are collected, and for each FAST corner its BRIEF descriptor and principal orientation are computed.
To improve matching quality, further, the selection of the first and second key frames from the initialization image sequence and the feature matching between them are subject to the following constraints:
Feature-point count constraint: a frame can be chosen as the first or second key frame only if its number of feature points exceeds a feature-point threshold;
Displacement-interval constraint: a frame can be chosen as the second key frame only if its displacement distance from the first key frame exceeds a displacement threshold;
Matched-pair count constraint: a frame can be chosen as the second key frame only if the number of matched point pairs obtained by feature matching with the first key frame exceeds a matched-pair threshold.
To make the point-cloud map more accurate, further, an operation of rejecting mismatched point pairs is performed after feature matching.
Further, the mismatched point pairs are rejected using the ACRANSAC algorithm or the RANSAC algorithm.
Further, the specific method of rejecting mismatched point pairs is: reject points whose parallax does not exceed a parallax threshold and points whose depth value falls outside a preset depth range.
Further, specifically, points with too small a parallax are rejected first; then the z coordinate of each matched point in the world coordinate system coordinates (x, y, z) of the first key frame is computed, the z value being the depth; points whose z value falls outside the preset depth range are then rejected.
Further, computing the coordinates of a matched point in the world coordinate system of the first key frame from the image coordinates of the matched point pair, the displacement information of the wheel encoder and the camera intrinsics specifically uses the pinhole camera model to compute the world coordinates (x, y, z):
where fx is the camera focal length along the x axis, Δs = s2 − s1 is the difference between the wheel-encoder displacements at the two key frames, s1 and s2 being the wheel-encoder displacements corresponding to the first and second key frames respectively; (u1, v1) are the pixel coordinates of the feature point in the first frame, (cx, cy) is the principal point, and (fx, fy) are the camera focal lengths.
Further, determining the camera pose of the second key frame from the displacement information of the wheel encoder specifically sets the pose T1 of the first key frame to the 4×4 identity matrix and then computes the camera pose T2 of the second key frame:
where (x1, y1) and (x2, y2) are the wheel-encoder displacements at the first and second key frames respectively.
Further, the optimization with the camera poses of the first and second key frames held fixed is specifically:
where X denotes the world coordinates of the map points, R is a rotation matrix, t is a translation vector, KL is the set of covisible key frames excluding the first and second key frames, PL is the set of points visible in those key frames, KF is the set of all key frames other than those in KL, Xk is the set of matched pairs between points in PL and key points in key frame k, ρ is the robust Huber cost function, xj is a key point, π is the projection function, and Σ is the covariance matrix.
A monocular SLAM initialization system based on a wheel encoder comprises:
a visual sensor, for capturing the SLAM initialization image sequence and transmitting it to the SLAM module;
a wheel encoder, for converting motion information into displacement information and transmitting it to the SLAM module;
a SLAM module, for choosing a first key frame and a second key frame from the SLAM initialization image sequence, performing feature matching between the first and second key frames, computing, from the image coordinates of the matched point pairs, the displacement information of the wheel encoder and the camera intrinsics, the coordinates of all matched points in the world coordinate system of the first key frame, building an initial point-cloud map, and determining the camera pose of the second key frame from the displacement information of the wheel encoder; during optimization the camera poses of the first and second key frames are held fixed and no local bundle adjustment is applied to these two frames.
Beneficial effects: the present invention combines the displacement information of the wheel encoder with the point features of the visual sensor's images so that the two sources of information complement each other, computing the camera pose and constructing a three-dimensional point-cloud map. Using the wheel-encoder displacement, the camera pose of the second key frame can be determined accurately, and fixing the camera poses of the first and second key frames during optimization preserves the metric scale of the real world, improving the positioning accuracy of the camera while simplifying computation and improving response speed.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Detailed description of the embodiments
The technical solution is described in detail below through embodiments in combination with the accompanying drawing.
As shown in Fig. 1, in this embodiment a monocular SLAM initialization method and system based on a wheel encoder is applied to a monocular upward-looking sweeping robot operating in an indoor environment. The images captured by the camera and the displacement data of the wheel encoder are acquired in real time; ORB features are extracted from and matched between two image frames whose spacing is sufficiently large; from the available data, a series of initial map points are computed with the pinhole camera model, and finally an initial point-cloud map is constructed for subsequent use.
The monocular SLAM initialization system based on a wheel encoder comprises a visual sensor, a wheel encoder and a SLAM module. The visual sensor captures the SLAM initialization image sequence and transmits it to the SLAM module; the visual sensor is generally a colour camera, and may also incorporate a 3D depth camera, i.e. a depth camera assembly. The wheel encoder converts motion information into displacement information and transmits it to the SLAM module: it detects the rotation angle of the wheels and converts it indirectly into a displacement, helping to identify the robot's position. The main function of the SLAM module, namely the initialization method, is: a first key frame and a second key frame are chosen from the SLAM initialization image sequence and feature matching is performed between them; from the image coordinates of the matched point pairs, the displacement information of the wheel encoder and the camera intrinsics, the coordinates of all matched points in the world coordinate system of the first key frame are computed, and from these coordinates an initial point-cloud map is built. The camera pose of the second key frame is determined from the displacement information of the wheel encoder. After initialization, the camera poses of subsequent key frames are solved with the PnP method from 3D-2D point correspondences, map-point coordinates are computed from the camera poses, and the point-cloud map is continuously extended and refined. During optimization the camera poses of the first and second key frames are held fixed and no local bundle adjustment is applied to these two frames; the third and subsequent frames are bundle-adjusted at the scale fixed by these two poses, local bundle adjustment optimizing the currently processed key frames by minimizing the reprojection error. The concrete scheme is as follows:
One. Obtaining the input data
The input data comprise the image sequence captured by the visual sensor and the displacement data output by the wheel encoder. Before the first and second key frames are chosen from the initialization image sequence, data acquisition is performed: on the ROS (Robot Operating System) platform, the image node and the wheel-encoder node publish their messages with publish statements; the SLAM node subscribes to the messages of these two nodes with subscribe statements and defines a callback function for each. In this way, the two callbacks are executed in the ROS spin loop, and the SLAM node receives the image data and the wheel-encoder displacement data. After the SLAM node receives the image information and the wheel-encoder displacement information, the displacement is stored as a member variable of a class, so that each image is associated with the corresponding displacement for use by subsequent program stages.
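The association step described above can be sketched as follows. This is a minimal illustration of the pairing logic only, not a ROS node: the class name `DataAssociator`, the timestamp-based nearest-reading policy and the callback names are our assumptions, since the patent does not specify how the member variable is matched to an image.

```python
from bisect import bisect_left

class DataAssociator:
    """Pairs each incoming image with the wheel-encoder reading whose
    timestamp is closest, mimicking what the SLAM node's two callbacks do."""

    def __init__(self):
        self.encoder_stamps = []   # sorted timestamps of encoder readings
        self.encoder_values = []   # displacement values, same order

    def on_encoder(self, stamp, displacement):
        # Callback for the wheel-encoder topic: store the reading.
        self.encoder_stamps.append(stamp)
        self.encoder_values.append(displacement)

    def on_image(self, stamp, image):
        # Callback for the image topic: associate the image with the
        # encoder reading nearest in time.
        i = bisect_left(self.encoder_stamps, stamp)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.encoder_stamps)]
        best = min(candidates, key=lambda j: abs(self.encoder_stamps[j] - stamp))
        return (image, self.encoder_values[best])
```

In a real ROS node the two methods would be registered as topic callbacks; here they can simply be called directly.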
Two. Initializing from two image frames
Feature extraction is performed before feature matching; specifically, ORB features are extracted from the images. ORB (Oriented FAST and Rotated BRIEF) is a feature-point detection and description operator based on oFAST and rBRIEF (rotated BRIEF): it detects oriented FAST key points and then extracts rBRIEF descriptors at those points. It can therefore be used both to detect feature points and to compute feature descriptors, much like SIFT and SURF.
Every pixel of the image is traversed and tested against the FAST corner criterion, all key points satisfying the criterion are collected, and for each FAST corner its BRIEF descriptor and principal orientation are computed; computing the principal orientation adds rotational invariance. A FAST corner is defined as follows: if a pixel differs sufficiently from enough of the pixels in its surrounding neighbourhood, that pixel may be a corner.
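The FAST criterion just described can be sketched as a segment test on the 16 pixels of a radius-3 circle. This is a simplified illustration with n = 9 (the FAST-9 variant used by ORB); the full oFAST detector also performs non-maximum suppression and orientation computation, which are omitted here.

```python
# Offsets of the 16 pixels on the Bresenham circle of radius 3 around a
# candidate pixel, in circular order (the standard FAST sampling pattern).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """FAST-n segment test: (x, y) is a corner if at least n contiguous pixels
    on the circle are all brighter than center + t or all darker than center - t."""
    p = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (+1, -1):                   # brighter arc, then darker arc
        flags = [sign * (v - p) > t for v in ring]
        run = 0
        for f in flags + flags:             # doubled list handles wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

For example, a pixel on the corner of a bright square region passes the test, while a pixel just outside it does not.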
When the first and second key frames are chosen from the initialization image sequence and feature matching is performed between them, the following constraints apply:
Feature-point count constraint: a frame can be chosen as the first or second key frame only if its number of feature points exceeds a feature-point threshold. A reasonable range is 80-150; in this embodiment the threshold is set to 100.
Displacement-interval constraint: a frame can be chosen as the second key frame only if its displacement distance from the first key frame exceeds a displacement threshold. The displacement distance is provided by the wheel encoder; when the spacing between the two frames is too small, the depth-estimation error of the map points grows. A reasonable range is 10-60 cm; in this embodiment the threshold is set to 30 cm.
Matched-pair count constraint: a frame can be chosen as the second key frame only if the number of matched point pairs obtained by feature matching with the first key frame exceeds a matched-pair threshold. In this embodiment, when the number of matched pairs between the second and first key frames does not meet the threshold, this pair of candidate key frames is discarded, a new first key frame is chosen, and then a new matching second key frame. A reasonable range for the matched-pair threshold is 80-150; in this embodiment it is set to 100.
Only when all three constraints are satisfied simultaneously does the program proceed with subsequent processing.
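The three gating tests above amount to a simple conjunction of threshold checks, sketched below with the embodiment's example values; the constant and function names are ours, not the patent's.

```python
# Illustrative thresholds from this embodiment (each has a stated valid range).
FEATURE_THRESHOLD = 100      # minimum feature points per frame (range 80-150)
DISPLACEMENT_THRESHOLD = 30  # minimum encoder spacing in cm (range 10-60)
MATCH_THRESHOLD = 100        # minimum matched point pairs (range 80-150)

def can_be_second_keyframe(num_features, displacement_cm, num_matches):
    """A candidate qualifies as the second key frame only if all three
    constraints of the embodiment hold simultaneously."""
    return (num_features > FEATURE_THRESHOLD
            and displacement_cm > DISPLACEMENT_THRESHOLD
            and num_matches > MATCH_THRESHOLD)
```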
The feature matching proceeds as follows: for each feature point of the first key frame, the Hamming distance is computed to the feature points within a specific region of the other key frame, and the feature point with the smallest distance is taken as the match. The Hamming distance, a concept used in error-control coding for data transmission, is the number of positions at which two words of equal length differ; the Hamming distance between two words x and y is written d(x, y). It can be computed by XOR-ing the two bit strings and counting the ones in the result.
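The XOR-and-count computation and the nearest-descriptor matching can be sketched as follows. For clarity the search is brute force over all descriptors of the second frame; the patent restricts it to a specific image region, which is omitted here, and the function names are ours.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors,
    here represented as Python integers: XOR, then count the set bits."""
    return bin(a ^ b).count("1")

def match_descriptors(desc1, desc2):
    """For each descriptor of the first key frame, pick the descriptor of
    the second key frame at minimum Hamming distance (brute force)."""
    matches = []
    for i, d1 in enumerate(desc1):
        j = min(range(len(desc2)), key=lambda k: hamming(d1, desc2[k]))
        matches.append((i, j))
    return matches
```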
Three. Computing map points with the pinhole camera model
An operation of rejecting mismatched point pairs is performed after feature matching. In this embodiment the specific method is: reject points with too small a parallax and points whose depth value falls outside a preset depth range. Points with too small a parallax, i.e. whose parallax does not exceed the parallax threshold, are rejected first; those skilled in the art can set the parallax threshold according to common practice, and in this embodiment the threshold, i.e. the distance between the pixel coordinates of a matched pair in the first and the second key frame, is set to 1. Then the z coordinate of each matched point in the world coordinate system coordinates (x, y, z) of the first key frame is computed, the z value being the depth, and points whose z value falls outside the preset depth range are rejected. The depth range can be set as the scene requires; since this embodiment is applied to a monocular upward-looking sweeping robot, the camera photographs the indoor ceiling, so the usual distance range between the camera and the ceiling (for example 2-4 m) can be taken as the preset depth range: when the computed z value falls outside this range, the point is rejected and excluded from subsequent operations. In addition, if the visual sensor includes a depth camera assembly, the z coordinate is available directly; in that case the invalid matches (points with too small a parallax and points whose depth falls outside the preset range) can be rejected first, without computing z, and the coordinates (x, y, z) computed afterwards.
Under the application scenario of this embodiment, this mismatch-rejection operation has proved in many experiments to be sufficient for subsequent mapping, with the advantages of a small computational load and a short response time.
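The two rejection tests of this embodiment can be sketched as a per-match predicate; the function name and argument layout are ours.

```python
from math import hypot

def keep_match(u1, v1, u2, v2, z,
               parallax_threshold=1.0, depth_range=(2.0, 4.0)):
    """Apply the embodiment's two rejection tests to one matched pair:
    the pixel displacement between the two key frames must exceed the
    parallax threshold, and the depth z must lie inside the preset range
    (2-4 m for the ceiling-facing sweeping robot)."""
    parallax = hypot(u2 - u1, v2 - v1)
    if parallax <= parallax_threshold:
        return False                  # too little parallax: depth unreliable
    z_min, z_max = depth_range
    return z_min <= z <= z_max        # reject depths outside the ceiling range
```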
In addition, to meet application demands in a variety of environments, in other embodiments the mismatched pairs may also be rejected with the more accurate ACRANSAC algorithm or with the RANSAC algorithm. RANSAC (random sample consensus), as provided in OpenCV, is currently the most widely used mismatch-rejection algorithm: ss groups of data are drawn at random from the data set to fit a model; inliers are found using a threshold tt; if the number of iterations is below nn and the number of inliers exceeds mm, iteration terminates; otherwise iteration restarts from the beginning. In openMVG, mismatches are rejected with ACRANSAC, which uses fewer parameters and obtains a near-optimal model largely independently of parameter settings; moreover, ACRANSAC can directly and objectively compare two models, yielding the best one. Those skilled in the art can choose a suitable mismatch-rejection method as required.
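The consensus principle behind these methods can be illustrated with a deterministic miniature for 2D line fitting: instead of random minimal samples, every pair of points is tried as a candidate model and the model with the largest inlier set wins. This only illustrates the idea; it is not the patent's, OpenCV's or openMVG's implementation.

```python
from itertools import combinations

def consensus_line(points, tol=0.5):
    """Exhaustive minimal-sample consensus for a 2D line y = a*x + b:
    fit a candidate line to each pair of points, count the points within
    tol of it, and keep the candidate with the most inliers."""
    best_inliers = []
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x1 == x2:
            continue                  # skip vertical candidates for brevity
        a = (y2 - y1) / (x2 - x1)     # slope of the candidate line
        b = y1 - a * x1               # intercept
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

True RANSAC replaces the exhaustive loop with random sampling, which is what makes it tractable for large data sets.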
Computing the coordinates of a matched point in the camera coordinate system of the first key frame (i.e. the world coordinate system) from the pixel coordinates of the matched point pair, the displacement information of the wheel encoder and the camera intrinsics specifically uses the pinhole camera model to compute the world coordinates (x, y, z):
where fx is the camera focal length along the x axis, Δs = s2 − s1 is the difference between the wheel-encoder displacements at the two key frames, s1 and s2 being the wheel-encoder displacements corresponding to the first and second key frames respectively, and (u1, v1) are the pixel coordinates of the feature point in the first frame. The formula is derived from the pinhole-model mapping between image coordinates and camera coordinates; since the world coordinate system of the first key frame coincides with its camera coordinate system, the world coordinates can be computed directly from the pixel coordinates. (cx, cy) is the principal point and (fx, fy) are the camera focal lengths.
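The formula itself appears only as an image in the source. The sketch below assumes the standard disparity relation of the pinhole model under a pure translation Δs along the camera x axis, which is consistent with the variables the text defines ((u1, v1), Δs, (cx, cy), (fx, fy)); the use of the second-frame pixel coordinate u2 and the function name are our assumptions.

```python
def triangulate_point(u1, v1, u2, fx, fy, cx, cy, delta_s):
    """Depth from the pinhole model with a known baseline: a camera
    translation of delta_s along x shifts a point's image by the disparity
    u1 - u2, so z = fx * delta_s / (u1 - u2); x and y then follow by
    back-projecting (u1, v1) at that depth."""
    disparity = u1 - u2
    z = fx * delta_s / disparity      # depth in the first key frame
    x = (u1 - cx) * z / fx            # back-project the pixel at depth z
    y = (v1 - cy) * z / fy
    return x, y, z
```

A point at (0.5, 0.2, 3.0) m seen before and after a 0.3 m translation is recovered exactly under this model.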
Four. Creating the initial map
Having computed the coordinates of one map point as described above, the same processing is applied to all matched pairs that satisfy the constraints, yielding the coordinates of all matched points in the world coordinate system of the first key frame. These map points are then rendered as three-dimensional points in a visualization, constructing the initial point-cloud map.
After initialization, the camera poses of subsequent key frames can be solved with the PnP method from the 3D-2D point correspondences, and map-point coordinates computed from those poses continuously extend and refine the point-cloud map.
Moreover, since the displacement data of the wheel encoder are relatively accurate and carry the scale of the real world, the displacement information of the wheel encoder can be used to determine the camera pose of the second key frame, which solves the scale ambiguity of monocular SLAM. Specifically, the pose T1 of the first key frame is set to the 4×4 identity matrix, and the camera pose T2 of the second key frame is then computed:
where, for ease of calculation, the wheel-encoder coordinate system is taken parallel to the x and y axes of the camera coordinate system, and (x1, y1) and (x2, y2) are the wheel-encoder displacements at the first and second key frames respectively. Owing to the limitations of its mechanical construction, the sweeping robot of this embodiment can move only along the positive x axis; when the visual sensor has translated Δd along the positive x axis after initialization (in units of m, cm, etc.), the camera pose T2 of the second key frame is:
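The pose equations appear only as images in the source. The sketch below assumes, as the Δd example suggests, that T2 is a pure translation (identity rotation) whose x and y components come from the encoder displacement difference; the helper name is ours.

```python
def pose_from_encoder(p1, p2):
    """Build the 4x4 camera pose T2 of the second key frame, with T1 = I:
    identity rotation (pure translation assumed) and a translation equal
    to the wheel-encoder displacement difference, the encoder frame being
    taken parallel to the camera's x and y axes."""
    (x1, y1), (x2, y2) = p1, p2
    t = [x2 - x1, y2 - y1, 0.0]       # no motion along the camera z axis
    return [[1.0, 0.0, 0.0, t[0]],
            [0.0, 1.0, 0.0, t[1]],
            [0.0, 0.0, 1.0, t[2]],
            [0.0, 0.0, 0.0, 1.0]]
```

For the robot's forward motion of Δd along the positive x axis, `pose_from_encoder((0, 0), (delta_d, 0))` yields a translation of (Δd, 0, 0).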
Since subsequent local bundle adjustment changes the camera poses of earlier key frames, the scale would drift if the poses of the first two frames were allowed to change. To preserve the scale fixed at initialization, this embodiment holds the camera poses of the first and second frames fixed in subsequent local bundle adjustment; no local bundle adjustment is applied to these two frames during optimization, which solves the problem of large deviations between repeated measurements of the same scene. Specifically:
where X denotes the world coordinates of the map points, R is a rotation matrix, t is a translation vector, KL is the set of covisible key frames excluding the first and second key frames, PL is the set of points visible in those key frames, KF is the set of all key frames other than those in KL, Xk is the set of matched pairs between points in PL and key points in key frame k, ρ is the robust Huber cost function, xj is a key point, π is the projection function, and Σ is the covariance matrix. In this way, SLAM maintains the scale of the real world throughout.
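The cost function itself is an image in the source. The sketch below assumes the standard ORB-SLAM-style robustified reprojection error, sum over key frames and observations of ρ(‖x_j − π(R X_j + t)‖), with an isotropic covariance; the function names are ours, and only cost evaluation is shown (in the optimizer, the poses of the two initialization key frames would be held constant).

```python
def huber(e, delta=1.0):
    """Robust Huber cost on a residual norm e: quadratic near zero,
    linear for large residuals, so outliers do not dominate."""
    return 0.5 * e * e if e <= delta else delta * (e - 0.5 * delta)

def project(R, t, X, fx, fy, cx, cy):
    """Pinhole projection pi(R X + t) of a 3D map point into a key frame."""
    Xc = [sum(R[i][k] * X[k] for k in range(3)) + t[i] for i in range(3)]
    return (fx * Xc[0] / Xc[2] + cx, fy * Xc[1] / Xc[2] + cy)

def reprojection_cost(keyframes, fx, fy, cx, cy):
    """Sum of Huber-robustified reprojection errors. Each key frame is
    (R, t, observations); during optimization the (R, t) of the first two
    key frames would be treated as constants, not variables."""
    cost = 0.0
    for R, t, obs in keyframes:
        for (u, v), X in obs:         # observed pixel and its map point
            up, vp = project(R, t, X, fx, fy, cx, cy)
            e = ((u - up) ** 2 + (v - vp) ** 2) ** 0.5
            cost += huber(e)
    return cost
```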
The above is only a preferred embodiment of the present invention. Those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and such improvements and modifications shall also fall within the scope of protection of the present invention.
Claims (15)
1. A monocular SLAM initialization method based on a wheel encoder, characterized by comprising the following steps: choosing a first key frame and a second key frame from a SLAM initialization image sequence and performing feature matching between the first key frame and the second key frame; computing, from the image coordinates of the matched point pairs, the displacement information of the wheel encoder and the camera intrinsics, the coordinates of all matched points in the world coordinate system of the first key frame, and building an initial point-cloud map; and determining the camera pose of the second key frame from the displacement information of the wheel encoder, holding the camera poses of the first and second key frames fixed during optimization and applying no local bundle adjustment to these two frames.
2. The monocular SLAM initialization method based on a wheel encoder according to claim 1, characterized in that before the first and second key frames are chosen from the initialization image sequence, data acquisition is performed: on the ROS platform, the image node and the wheel-encoder node publish their messages; the SLAM node subscribes to the messages published by these two nodes; after the SLAM node receives the image information and the wheel-encoder displacement information, it associates each image with the corresponding displacement.
3. The monocular SLAM initialization method based on a wheel encoder according to claim 1, characterized in that feature extraction is performed before the feature matching; specifically, ORB features are extracted from the image: every pixel of the image is traversed and tested against the FAST corner criterion, all key points satisfying the criterion are collected, and for each FAST corner its BRIEF descriptor and principal orientation are computed.
4. The monocular SLAM initialization method based on a wheel encoder according to claim 1, characterized in that the choice of the first and second key frames from the initialization image sequence and the feature matching between them are subject to the following constraints:
a feature-point count constraint: a frame can be chosen as the first or second key frame only if its number of feature points exceeds a feature-point threshold;
a displacement-interval constraint: a frame can be chosen as the second key frame only if its displacement distance from the first key frame exceeds a displacement threshold;
a matched-pair count constraint: a frame can be chosen as the second key frame only if the number of matched point pairs obtained by feature matching with the first key frame exceeds a matched-pair threshold.
5. The monocular SLAM initialization method based on a wheel encoder according to claim 1, characterized in that an operation of rejecting mismatched point pairs is performed after the feature matching.
6. The monocular SLAM initialization method based on a wheel encoder according to claim 5, characterized in that the mismatched point pairs are rejected using the ACRANSAC algorithm or the RANSAC algorithm.
7. The monocular SLAM initialization method based on a wheel encoder according to claim 5, characterized in that the specific method of rejecting mismatched point pairs is: rejecting points whose parallax does not exceed a parallax threshold and points whose depth value falls outside a preset depth range.
8. The wheel-encoder-based monocular SLAM initialization method according to claim 7, wherein, specifically, points with too-small parallax are rejected first; then the coordinates (x, y, z) of each matched point in the world coordinate system of the first key frame are computed, the z coordinate being the depth value; and finally points whose z value falls outside the preset depth range are rejected.
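The two-stage rejection of claims 7 and 8 can be sketched as follows, assuming parallax is measured as the pixel disparity between matched coordinates; the threshold values are illustrative:

```python
def reject_by_parallax_and_depth(pairs, depths, parallax_thresh=1.0,
                                 z_min=0.1, z_max=10.0):
    """pairs: list of ((u1, v1), (u2, v2)) matched pixel coordinates.
    depths: z value of each matched point in the first key frame's
    world coordinate system. Returns indices of surviving matches."""
    keep = []
    for i, ((u1, v1), (u2, v2)) in enumerate(pairs):
        parallax = ((u1 - u2) ** 2 + (v1 - v2) ** 2) ** 0.5
        if parallax <= parallax_thresh:      # stage 1: parallax too small
            continue
        if not (z_min < depths[i] < z_max):  # stage 2: depth out of range
            continue
        keep.append(i)
    return keep
```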
9. The wheel-encoder-based monocular SLAM initialization method according to claim 1, wherein computing the world coordinates of the matched points in the coordinate system of the first key frame from the image coordinates of the matched point pairs, the displacement information of the wheel encoder, and the camera intrinsics specifically uses the pinhole imaging model to compute the world coordinates (x, y, z), wherein f<sub>x</sub> is the camera focal length along the x-axis; Δs = s<sub>2</sub> − s<sub>1</sub> is the difference between the two wheel-encoder displacements, s<sub>1</sub> and s<sub>2</sub> being the wheel-encoder displacements corresponding to the first key frame and the second key frame respectively; (u<sub>1</sub>, v<sub>1</sub>) are the pixel coordinates of the feature point in the first frame image; (c<sub>x</sub>, c<sub>y</sub>) is the principal point; and (f<sub>x</sub>, f<sub>y</sub>) are the camera focal lengths.
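The claim's formula image is not reproduced in this text, but a standard pinhole reading of the listed symbols — treating the encoder displacement Δs as a stereo-style baseline along the camera x-axis, which is an assumption — would be:

```python
def triangulate_pinhole(u1, v1, u2, s1, s2, fx, fy, cx, cy):
    """Recover (x, y, z) of a matched point in the first key frame's
    coordinate system, assuming the camera translates by ds = s2 - s1
    along its x-axis between the two key frames (assumption; the
    patent's exact formula is not reproduced here)."""
    ds = s2 - s1              # wheel-encoder displacement difference
    disparity = u1 - u2       # pixel disparity along x (assumed nonzero)
    z = fx * ds / disparity   # depth from disparity, stereo-style
    x = (u1 - cx) * z / fx    # back-project with the pinhole model
    y = (v1 - cy) * z / fy
    return x, y, z
```

With a synthetic point at (0.5, 0.2, 2.0), intrinsics fx = fy = 500, cx = 320, cy = 240, and a 0.1 m sideways translation, the function recovers the original coordinates exactly.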
10. The wheel-encoder-based monocular SLAM initialization method according to claim 1, wherein determining the camera pose of the second key frame using the displacement information of the wheel encoder specifically comprises: setting the pose T<sub>1</sub> of the first key frame to the 4 × 4 identity matrix, and then computing the camera pose T<sub>2</sub> of the second key frame, wherein (x<sub>1</sub>, y<sub>1</sub>) and (x<sub>2</sub>, y<sub>2</sub>) are the wheel-encoder displacement readings at the first key frame and the second key frame respectively.
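The formula for T<sub>2</sub> is likewise not reproduced in this extraction. One plausible sketch, under the assumption that the inter-frame motion is a pure planar translation with identity rotation (a real implementation would also take a yaw estimate from the encoders):

```python
import numpy as np

def initial_poses(p1, p2):
    """T1 is the 4x4 identity matrix, per the claim; T2 carries the
    encoder displacement (x2 - x1, y2 - y1) as its translation.
    Identity rotation is an assumption made for this sketch only."""
    T1 = np.eye(4)
    T2 = np.eye(4)
    T2[0, 3] = p2[0] - p1[0]  # displacement along x
    T2[1, 3] = p2[1] - p1[1]  # displacement along y
    return T1, T2
```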
11. The wheel-encoder-based monocular SLAM initialization method according to claim 1, wherein in the optimization that fixes the camera poses of the first key frame and the second key frame: X denotes a map point in world coordinates, R is the rotation matrix, t is the translation vector, K<sub>L</sub> is the set of covisible key frames excluding the first key frame and the second key frame, P<sub>L</sub> is the set of map points visible in those key frames, K<sub>F</sub> is the set of all key frames other than K<sub>L</sub>, X<sub>k</sub> is the set of matched pairs between points in P<sub>L</sub> and key points in key frame k, ρ is the robust Huber cost function, x<sub>j</sub> is a key point, π is the projection function, and Σ is the covariance matrix.
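The objective itself is not reproduced, but the symbol list reads like a standard robustified reprojection error over the non-fixed key frames. A sketch of evaluating such a cost (the Huber delta² of 5.991 follows common practice in visual SLAM and is an assumption, as is treating Σ as the identity):

```python
import numpy as np

def huber(r2, delta2=5.991):
    """Robust Huber cost applied to a squared residual r2."""
    return r2 if r2 <= delta2 else 2.0 * np.sqrt(delta2 * r2) - delta2

def reprojection_cost(X, poses, obs, K, fixed=(0, 1)):
    """Sum of Huber-robustified squared reprojection errors.
    X: (N, 3) map points; poses: list of 4x4 world-to-camera matrices;
    obs: list of (frame_idx, point_idx, (u, v)); K: 3x3 intrinsics.
    Poses whose index is in `fixed` (the first and second key frames)
    would receive no update in the optimizer; here they simply
    contribute residuals like any other frame."""
    total = 0.0
    for k, j, (u, v) in obs:
        Xc = poses[k][:3, :3] @ X[j] + poses[k][:3, 3]  # world -> camera
        uv = K @ Xc
        u_hat, v_hat = uv[0] / uv[2], uv[1] / uv[2]     # projection pi
        r2 = (u - u_hat) ** 2 + (v - v_hat) ** 2
        total += huber(r2)
    return total
```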
12. A wheel-encoder-based monocular SLAM initialization system, comprising:
a wheel encoder, configured to convert motion information into displacement information and transmit it to a SLAM module; and
a SLAM module, configured to select a first key frame and a second key frame from a SLAM initialization image sequence, perform feature matching between the first key frame and the second key frame, compute the world coordinates of all matched points in the coordinate system of the first key frame from the image coordinates of the matched point pairs, the displacement information of the wheel encoder, and the camera intrinsics, and establish an initial point-cloud map; the SLAM module further determines the camera pose of the second key frame using the displacement information of the wheel encoder, fixes the camera poses of the first key frame and the second key frame during optimization, and excludes these two camera poses from local bundle adjustment.
13. The wheel-encoder-based monocular SLAM initialization system according to claim 12, further comprising a visual sensor configured to capture the SLAM initialization image sequence and transmit it to the SLAM module.
14. The wheel-encoder-based monocular SLAM initialization system according to claim 12, wherein the SLAM module is further configured to perform feature extraction by extracting ORB features from the image information: traversing each pixel of the image, determining whether it satisfies the FAST corner condition, finding all key points that satisfy the condition, and then computing a BRIEF descriptor and a principal orientation for each FAST corner.
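The FAST corner condition referred to in this claim can be illustrated in a few lines: a pixel is a corner if, on the Bresenham circle of 16 pixels around it, a contiguous arc of N pixels is all brighter or all darker than the center by a threshold t. The values N = 12 and t = 20 are illustrative; production code would normally use OpenCV's ORB extractor rather than this sketch:

```python
import numpy as np

def fast_corner(img, y, x, t=20, arc=12):
    """Check the FAST segment test at pixel (y, x) of a grayscale image."""
    # 16-pixel Bresenham circle of radius 3 around the candidate pixel
    circle = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2),
              (3, 1), (3, 0), (3, -1), (2, -2), (1, -3), (0, -3),
              (-1, -3), (-2, -2), (-3, -1)]
    p = int(img[y, x])
    vals = np.array([int(img[y + dy, x + dx]) for dy, dx in circle])
    brighter = vals > p + t
    darker = vals < p - t
    for mask in (brighter, darker):
        # doubling the mask handles arcs that wrap around the circle
        run = 0
        for b in np.concatenate([mask, mask]):
            run = run + 1 if b else 0
            if run >= arc:
                return True
    return False
```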
15. The wheel-encoder-based monocular SLAM initialization system according to claim 12, wherein the SLAM module is further configured to reject mismatched point pairs after the feature matching.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811324601.2A CN109671120A (en) | 2018-11-08 | 2018-11-08 | A kind of monocular SLAM initial method and system based on wheel type encoder |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109671120A true CN109671120A (en) | 2019-04-23 |
Family
ID=66142042
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811324601.2A Pending CN109671120A (en) | 2018-11-08 | 2018-11-08 | A kind of monocular SLAM initial method and system based on wheel type encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109671120A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105425799A (en) * | 2015-12-03 | 2016-03-23 | 昆山穿山甲机器人有限公司 | Bank self-service robot system and automatic navigation method thereof |
CN105783913A (en) * | 2016-03-08 | 2016-07-20 | 中山大学 | SLAM device integrating multiple vehicle-mounted sensors and control method of device |
CN105953796A (en) * | 2016-05-23 | 2016-09-21 | 北京暴风魔镜科技有限公司 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone |
CN106840179A (en) * | 2017-03-07 | 2017-06-13 | 中国科学院合肥物质科学研究院 | A kind of intelligent vehicle localization method based on multi-sensor information fusion |
CN107516326A (en) * | 2017-07-14 | 2017-12-26 | 中国科学院计算技术研究所 | Merge monocular vision and the robot localization method and system of encoder information |
Non-Patent Citations (3)
Title |
---|
Raúl Mur-Artal et al.: "ORB-SLAM: A Versatile and Accurate Monocular SLAM System", IEEE Transactions on Robotics *
Wang Qiang: "Research on autonomous localization and navigation methods for mobile robots based on ROS", China Masters' Theses Full-text Database, Information Science and Technology Series *
鸡块米线LVC: "ORB-SLAM2 code walkthrough", CSDN *
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110084885A (en) * | 2019-05-06 | 2019-08-02 | 广州市百果园信息技术有限公司 | A kind of cloud and image optimization method, device, equipment and storage medium |
CN110310333A (en) * | 2019-06-27 | 2019-10-08 | Oppo广东移动通信有限公司 | Localization method and electronic equipment, readable storage medium storing program for executing |
CN110310333B (en) * | 2019-06-27 | 2021-08-31 | Oppo广东移动通信有限公司 | Positioning method, electronic device and readable storage medium |
CN110706248A (en) * | 2019-08-20 | 2020-01-17 | 广东工业大学 | Visual perception mapping algorithm based on SLAM and mobile robot |
CN110706248B (en) * | 2019-08-20 | 2024-03-12 | 广东工业大学 | Visual perception mapping method based on SLAM and mobile robot |
CN112219087A (en) * | 2019-08-30 | 2021-01-12 | 深圳市大疆创新科技有限公司 | Pose prediction method, map construction method, movable platform and storage medium |
WO2021083242A1 (en) * | 2019-10-31 | 2021-05-06 | Oppo广东移动通信有限公司 | Map constructing method, positioning method and system, wireless communication terminal, and computer-readable medium |
CN113129366A (en) * | 2020-01-10 | 2021-07-16 | 北京字节跳动网络技术有限公司 | Monocular SLAM (simultaneous localization and mapping) initialization method and device and electronic equipment |
CN113129366B (en) * | 2020-01-10 | 2024-04-30 | 北京字节跳动网络技术有限公司 | Monocular SLAM initialization method and device and electronic equipment |
CN111521204A (en) * | 2020-03-19 | 2020-08-11 | 安徽建筑大学 | Angular displacement visual measurement method based on absolute position rotary encoder |
CN111311684A (en) * | 2020-04-01 | 2020-06-19 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
CN111882589A (en) * | 2020-06-23 | 2020-11-03 | 广州万维创新科技有限公司 | Image-based monocular vision SLAM initialization method |
WO2022002150A1 (en) * | 2020-06-30 | 2022-01-06 | 杭州海康机器人技术有限公司 | Method and device for constructing visual point cloud map |
CN111862214A (en) * | 2020-07-29 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method and device, computer equipment and storage medium |
CN111862219A (en) * | 2020-07-29 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method and device, computer equipment and storage medium |
CN111862214B (en) * | 2020-07-29 | 2023-08-25 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method, device, computer equipment and storage medium |
CN112200917A (en) * | 2020-09-30 | 2021-01-08 | 北京零境科技有限公司 | High-precision augmented reality method and system |
CN111928842A (en) * | 2020-10-14 | 2020-11-13 | 蘑菇车联信息科技有限公司 | Monocular vision based SLAM positioning method and related device |
CN112258546A (en) * | 2020-10-18 | 2021-01-22 | 东南大学 | Key frame selection method of SLAM system |
CN112558602A (en) * | 2020-11-19 | 2021-03-26 | 许昌许继软件技术有限公司 | Robot positioning method based on image characteristics |
CN112819744A (en) * | 2021-02-26 | 2021-05-18 | 中国人民解放军93114部队 | GNSS and visual SLAM fused track measuring method and device |
CN112819744B (en) * | 2021-02-26 | 2024-05-14 | 中国人民解放军93114部队 | GNSS and vision SLAM fusion track measurement method and device |
CN114025153A (en) * | 2021-03-05 | 2022-02-08 | 黑芝麻智能科技有限公司 | LiDAR assisted wheel encoder to camera calibration |
CN114025153B (en) * | 2021-03-05 | 2023-11-21 | 黑芝麻智能科技有限公司 | LiDAR assisted wheel encoder to camera calibration |
CN114154117A (en) * | 2021-06-15 | 2022-03-08 | 元橡科技(苏州)有限公司 | SLAM method |
CN113298097B (en) * | 2021-07-27 | 2021-10-26 | 电子科技大学 | Feature point extraction method and device based on convolutional neural network and storage medium |
CN113298097A (en) * | 2021-07-27 | 2021-08-24 | 电子科技大学 | Feature point extraction method and device based on convolutional neural network and storage medium |
CN114494825A (en) * | 2021-12-31 | 2022-05-13 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device |
CN114494825B (en) * | 2021-12-31 | 2024-04-19 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109671120A (en) | A kind of monocular SLAM initial method and system based on wheel type encoder | |
CN105856230B (en) | A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
Zakharov et al. | Dpod: 6d pose object detector and refiner | |
CN106127739B (en) | Monocular vision combined RGB-D SLAM method | |
Tanskanen et al. | Live metric 3D reconstruction on mobile phones | |
CN109840940B (en) | Dynamic three-dimensional reconstruction method, device, equipment, medium and system | |
CN110490928A (en) | A kind of camera Attitude estimation method based on deep neural network | |
CN109859266B (en) | Pre-transformation-based visual simultaneous positioning and drawing method under large visual angle change | |
CN103839277B (en) | A kind of mobile augmented reality register method of outdoor largescale natural scene | |
CN111126304A (en) | Augmented reality navigation method based on indoor natural scene image deep learning | |
CN100543775C (en) | The method of following the tracks of based on the 3 d human motion of many orders camera | |
CN109345588A (en) | A kind of six-degree-of-freedom posture estimation method based on Tag | |
CN106548519A (en) | Augmented reality method based on ORB SLAM and the sense of reality of depth camera | |
CN107193279A (en) | Robot localization and map structuring system based on monocular vision and IMU information | |
CN110084832A (en) | Correcting method, device, system, equipment and the storage medium of camera pose | |
CN112801074B (en) | Depth map estimation method based on traffic camera | |
US10755433B2 (en) | Method and system for scanning an object using an RGB-D sensor | |
CN105184857A (en) | Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging | |
CN108597009A (en) | A method of objective detection is carried out based on direction angle information | |
Liu et al. | Human motion tracking by multiple RGBD cameras | |
CN110598590A (en) | Close interaction human body posture estimation method and device based on multi-view camera | |
CN108053445A (en) | The RGB-D camera motion methods of estimation of Fusion Features | |
CN107330980A (en) | A kind of virtual furnishings arrangement system based on no marks thing | |
CN108151728A (en) | A kind of half dense cognitive map creation method for binocular SLAM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190423 |