CN108986037A - Monocular vision odometer localization method and positioning system based on semi-direct method - Google Patents
Monocular vision odometer localization method and positioning system based on semi-direct method
- Publication number
- CN108986037A CN108986037A CN201810512342.XA CN201810512342A CN108986037A CN 108986037 A CN108986037 A CN 108986037A CN 201810512342 A CN201810512342 A CN 201810512342A CN 108986037 A CN108986037 A CN 108986037A
- Authority
- CN
- China
- Prior art keywords
- point
- frame
- image
- key frame
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 69
- 230000004807 localization Effects 0.000 title claims abstract description 15
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 8
- 238000001514 detection method Methods 0.000 claims abstract description 7
- 230000008569 process Effects 0.000 claims abstract description 6
- 239000011159 matrix material Substances 0.000 claims description 53
- 238000012937 correction Methods 0.000 claims description 17
- 238000005457 optimization Methods 0.000 claims description 14
- 238000004364 calculation method Methods 0.000 claims description 8
- 238000009795 derivation Methods 0.000 claims description 6
- 238000000605 extraction Methods 0.000 claims description 6
- 239000000284 extract Substances 0.000 claims description 5
- 238000001914 filtration Methods 0.000 claims description 5
- 230000000694 effects Effects 0.000 claims description 4
- 230000033001 locomotion Effects 0.000 claims description 4
- 238000005070 sampling Methods 0.000 claims description 4
- 238000000546 chi-square test Methods 0.000 claims description 2
- 230000007423 decrease Effects 0.000 claims description 2
- 238000006073 displacement reaction Methods 0.000 claims description 2
- 238000012546 transfer Methods 0.000 claims description 2
- 238000007689 inspection Methods 0.000 claims 1
- 238000005286 illumination Methods 0.000 abstract description 2
- 230000035945 sensitivity Effects 0.000 abstract description 2
- 230000000007 visual effect Effects 0.000 description 6
- 230000008901 benefit Effects 0.000 description 5
- 238000005259 measurement Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000005096 rolling process Methods 0.000 description 2
- 238000005452 bending Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a monocular visual odometry localization method and positioning system based on the semi-direct method. The method comprises: entering a target scene with a monocular camera and recording scene image data; preprocessing the acquired scene images and performing feature point detection; initializing the system by tracking the detected feature points and using the resulting set of point correspondences as the data set for a robust estimation algorithm; extracting the line segment features in the scene images and estimating the camera pose with the semi-direct method; selecting key frames among the images captured by the camera; and constructing the map. The present invention achieves a robust, high-quality initialization process. When each three-dimensional feature is first added to the depth filter, a key frame is selected and the depth information of the three-dimensional feature is updated once, which reduces the uncertainty of the new feature's depth information and accelerates depth convergence; the reference frame of each three-dimensional feature is updated, shortening the time interval between the reference frame and the current frame and reducing sensitivity to illumination changes.
Description
Technical field
The invention belongs to the technical field of visual odometry in three-dimensional reconstruction, and specifically relates to a monocular visual odometry localization method and positioning system based on the semi-direct method.
Background technique
With the continuous evolution of technologies such as computer vision and image processing in recent years, visual sensors have been applied in more and more scenarios, and odometry based on visual sensors has attracted increasing attention from researchers. With advantages such as low cost and rich information, visual sensors are gradually becoming one of the main sensors for odometry applications. Compared with SLAM (simultaneous localization and mapping), visual odometry is more efficient and has lower hardware requirements. Traditional wheel odometry computes mileage mainly from the rolling of the wheels, but on rough roads the wheels may slip or spin freely, which introduces large errors into the odometry measurement; odometry based on visual sensors is not affected by this.
Mainstream visual odometry systems, divided by hardware, fall broadly into two classes: monocular systems that use only one camera, and binocular systems that use a stereo camera. Monocular means using a single camera as the only sensor, while binocular means using a stereo camera to acquire two images simultaneously at each moment. Compared with binocular systems, monocular visual odometry has advantages such as low cost and no need to consider the influence of the camera baseline on system accuracy.
In 2014, Christian Forster et al. proposed a visual odometry method, named SVO, that combines the characteristics of the direct method and the feature point method. SVO extracts FAST feature points from the image but does not first compute feature descriptors; instead it uses the direct method to estimate an initial camera pose, saving the large amount of time spent on feature detection and matching. Since computation is performed only on the FAST feature points, its speed is greatly improved, making it well suited to platforms with limited computing resources such as unmanned aerial vehicles and smartphones. However, because SVO pursues extreme speed, its robustness is insufficient.
Summary of the invention
The present invention aims at least to solve the technical problems existing in the prior art, and in particular innovatively proposes a monocular visual odometry localization method and positioning system based on the semi-direct method.
In order to achieve the above purpose of the invention, according to a first aspect of the present invention, the present invention provides a monocular visual odometry localization method based on the semi-direct method, comprising the following steps:
S1: enter the target scene using an unmanned aerial vehicle or a vehicle-mounted monocular camera and record scene image data;
S2: preprocess the scene images obtained in step S1, perform distortion correction, and carry out feature point detection on the corrected images;
S3: system initialization — track the feature points obtained in step S2 and use the resulting set of point correspondences as the data set for a robust estimation algorithm; estimate the fundamental matrix F of the two images using the eight-point method and compute the homography matrix H using four point pairs; by counting the inliers of the two matrices, select one of the fundamental matrix F and the homography matrix H for subsequent computation;
S4: track and position the camera to form the camera's motion trajectory; extract the line segment features in the scene images and estimate the camera pose using the semi-direct method. First, in sparse image alignment, a photometric error is established between the feature points of the current frame image and the previous frame image to obtain a coarse initial camera pose. Then, in feature alignment, all map points in the map are projected onto the current frame using the pose obtained in the previous step, a photometric error is established using the gray-level difference between the projected point and the map point on the reference frame, and the position of the projected point on the image is optimized. Finally, using the optimized position and the projected point, a reprojection error over the three-dimensional points and the camera pose is established, both are optimized, and the final camera pose of the current frame is obtained;
S5: select the key frames among the images captured by the camera;
S6: map construction — perform depth filtering on the endpoints of the line segment features in the key frames. Create a seed point from each newly obtained key frame, perform one pre-update on it using the several key frames closest to the newly obtained key frame, and then add it to the seed point queue. If the seed point queue converges, create a key frame and project all three-dimensional points in the map onto the new key frame; if a three-dimensional point is visible on the new key frame, change the reference frame of that three-dimensional point to the new key frame and continue to obtain key frames. If the seed point queue does not converge, update the seed point queue.
In order to achieve the above purpose of the invention, according to a second aspect of the present invention, the present invention provides a monocular visual odometry positioning system based on the semi-direct method, comprising a monocular vision camera and a processor. The monocular vision camera is carried by an unmanned aerial vehicle or a vehicle; it acquires images and transmits them to the processor, and the processor processes them using the method described above, constructs the map, and locates and displays the motion trajectory of the camera on the map.
The present invention measures mileage using visual odometry, which can reduce the error of the mileage measurement. Traditional wheel odometry computes mileage mainly from the rolling of the wheels, but on rough roads the wheels may slip or spin freely, bringing large errors to the odometry measurement; odometry based on visual sensors is not affected by this.
The present invention adopts monocular visual odometry, which, compared with binocular or even multi-camera visual odometry, has advantages such as low cost and no need to consider the influence of the camera baseline on system accuracy.
The present invention achieves a robust, high-quality initialization process. A new key frame selection strategy enhances and improves the key frame selection strategy based on scene depth and removes the limitation of being applicable only to downward-looking cameras.
When each three-dimensional feature is first added to the depth filter, the present invention selects a key frame and performs one update on the depth information of the three-dimensional feature, reducing the uncertainty of the new feature's depth information and accelerating depth convergence. The reference frame of each three-dimensional feature is updated, shortening the time interval between the reference frame and the current frame and reducing sensitivity to illumination changes.
Additional aspects and advantages of the invention will be set forth in part in the following description, and in part will become obvious from the description or be learned through practice of the invention.
Detailed description of the invention
The above and/or additional aspects and advantages of the invention will become obvious and readily understood from the description of the embodiments in conjunction with the following figures, in which:
Fig. 1 is the flow chart of the monocular visual odometry localization method based on the semi-direct method in a preferred embodiment of the present invention;
Fig. 2 shows the effect of distortion correction in a preferred embodiment of the present invention, where Fig. 2(a) is the image before correction and Fig. 2(b) is the image after correction;
Fig. 3 is a schematic diagram of the initialization process in a preferred embodiment of the present invention;
Fig. 4 is a schematic diagram of the robust estimation in a preferred embodiment of the present invention.
Specific embodiment
The embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar labels throughout indicate the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the figures are exemplary, intended only to explain the invention, and are not to be construed as limiting the invention.
In the description of the present invention, unless otherwise specified and limited, it should be noted that terms such as "installation", "connected" and "connection" shall be understood in a broad sense: a connection may, for example, be mechanical or electrical, may be internal to two elements, may be direct, or may be indirect through an intermediary; for those of ordinary skill in the art, the specific meaning of the above terms can be understood according to the specific circumstances.
The present invention provides a monocular visual odometry localization method based on the semi-direct method, as shown in Fig. 1, comprising the following steps:
S1: enter the target scene using an unmanned aerial vehicle or a vehicle-mounted monocular camera and record scene image data;
S2: preprocess the scene images obtained in step S1, perform distortion correction, and carry out feature point detection on the corrected images;
S3: system initialization — track the feature points obtained in step S2 and use the resulting set of point correspondences as the data set for a robust estimation algorithm; estimate the fundamental matrix F of the two images using the eight-point method and compute the homography matrix H using four point pairs (in other preferred embodiments of the present invention, methods such as nonlinear optimization can also be used for this calculation); by counting the inliers of the two matrices, select one of the fundamental matrix F and the homography matrix H for subsequent computation;
S4: track the camera and extract the line segment features in the scene images; estimate the camera pose using the semi-direct method. First, in sparse image alignment, establish a photometric error from the feature points on the current frame image and the previous frame image to obtain a coarse initial camera pose. Then, in feature alignment, project all map points in the map onto the current frame using the pose obtained in the previous step, establish a photometric error using the gray-level difference between the projected point and the map point on the reference frame, and optimize the position of the projected point on the image. Finally, use the optimized position and the projected point to establish a reprojection error over the three-dimensional points and the camera pose, optimize the camera pose and the three-dimensional points, and obtain the final camera pose of the current frame;
S5: select the key frames among the images captured by the camera;
S6: map construction — perform depth filtering on the endpoints of the line segment features in the key frames. Create a seed point from each newly obtained key frame, perform one pre-update on it using the several key frames closest to the newly obtained key frame, and then add it to the seed point queue. If the seed point queue converges, create a key frame and project all three-dimensional points in the map onto the new key frame; if a three-dimensional point is visible on the new key frame, change the reference frame of that three-dimensional point to the new key frame and continue to obtain key frames. If the seed point queue does not converge, update the seed point queue.
In the present embodiment, the image distortion of the camera consists of two parts, radial distortion and tangential distortion, and the distortion coefficients are represented by a five-dimensional vector (k1, k2, k3, p1, p2). Since the influence of k3 is small, it does not participate in the computation in the present invention. The distortion coefficients of the camera can be solved during camera calibration; like the camera intrinsics, they are related only to the camera hardware. During distortion correction, the distorted image is known and the undistorted image is what needs to be solved. The common practice is: for a given coordinate (u, v) of the undistorted image, compute the coordinate (ud, vd) at which this point lies in the distorted image, and thereby obtain the pixel value of the undistorted image at (u, v); after traversing all coordinates, the complete distortion-corrected image is obtained.
The method of distortion correction is as follows:
First, solve the physical coordinates (x', y') on the image plane corresponding to the given pixel coordinate (u, v):
x' = (u − cx) / fx,  y' = (v − cy) / fy.
With r² = x'² + y'², the physical coordinates after distortion are:
xd = x'·(1 + k1·r² + k2·r⁴) + 2·p1·x'·y' + p2·(r² + 2·x'²),
yd = y'·(1 + k1·r² + k2·r⁴) + p1·(r² + 2·y'²) + 2·p2·x'·y'.
The physical coordinates after distortion are then converted to pixel coordinates on the image:
ud = fx·xd + cx,  vd = fy·yd + cy,
where (k1, k2, k3, p1, p2) are the distortion coefficients of the camera, (u, v) is the coordinate in the undistorted image, (ud, vd) is the coordinate in the distorted image corresponding to (u, v), (cx, cy) is the position of the optical center of the camera, and (fx, fy) are the focal lengths of the camera in the x and y directions.
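As an illustrative sketch (not part of the patent text), the inverse-mapping procedure above can be written as follows; the function name `undistort_image` and the nested-loop, nearest-neighbor sampling are choices made here for clarity, and k3 is omitted as stated in the embodiment:

```python
import numpy as np

def undistort_image(img, fx, fy, cx, cy, k1, k2, p1, p2):
    """For each pixel (u, v) of the undistorted output, compute where it
    lies in the distorted source image and sample that pixel."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for v in range(h):
        for u in range(w):
            # pixel -> physical (normalized image-plane) coordinates
            x = (u - cx) / fx
            y = (v - cy) / fy
            r2 = x * x + y * y
            # radial + tangential distortion (k3 ignored, as in the text)
            radial = 1 + k1 * r2 + k2 * r2 * r2
            xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
            yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
            # distorted physical coordinates -> pixel coordinates
            ud = int(round(fx * xd + cx))
            vd = int(round(fy * yd + cy))
            if 0 <= ud < w and 0 <= vd < h:
                out[v, u] = img[vd, ud]
    return out
```

With all distortion coefficients set to zero the mapping is the identity, which gives a quick sanity check of the formulas.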
In Fig. 2, the left image is the image before distortion correction and the right image is the result after distortion correction. The red line in the figure is a straight line in the real world; because of the distortion, it appears noticeably bent in the left image, and after distortion correction it is corrected to a straight line in the right image.
In the present embodiment, the method of feature point detection is as follows: taking a pixel in the image as the center, establish a circle of radius j, and judge whether the pixel is a feature point from the differences between the pixels on the circle and the center pixel, where j is a positive integer greater than 1. If there exist k consecutive pixels on the circle whose gray values all differ from the gray value of the center point p in the same direction by more than a threshold (all greater, or all smaller), then the point p is a feature point, where the value of k is j*j. In the present invention, j is preferably 3. The above method detects a large number of useless feature points at edges in the image, so after each detection pass, non-maximum suppression is performed on all detected feature points and the best feature point in each region is chosen.
In the present embodiment, as shown in Fig. 3, initialization specifically includes:
S31: judge the number of feature points in the picture. If the number of feature points exceeds a threshold (e.g. 100), judge whether a reference frame already exists (i.e. the first image whose feature point count exceeded the threshold); if there is a reference frame, execute step S32; if there is no reference frame, save this picture as the reference frame and then execute step S32. When the number of feature points does not exceed the threshold, wait for the next scene picture;
S32: track the feature points. For the gray-level images It and It+1 obtained at two consecutive moments t and t+1, assume a window w to be tracked in It; the positions of the pixels of window w in the two images satisfy:
It(x, y) = It+1(x + δx, y + δy).
The matching position of the window to be tracked is found by minimizing the sum of squared gray-level differences of the window between the two images:
e(w) = Σ(x,y)∈w [It(x, y) − It+1(x + δx, y + δy)]²,
where e(w) is the matching error of the window to be tracked. Performing a Taylor expansion of the above formula to the first-order term and setting the partial derivatives to zero yields the minimum, from which the motion (δx, δy) of the window to be tracked between the two images is found;
S33: judge the tracking effect. If the threshold requirement is met (the number of successfully tracked feature points is greater than 50), execute step S34; otherwise judge whether this frame is more than 50 frames apart from the reference frame: if it exceeds 50 frames, delete the reference frame and input a new picture; if it does not exceed 50 frames, input a new picture;
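The window tracking of step S32 can be sketched as a small Gauss-Newton solver on the linearized gray-level difference. This is a minimal single-window, pure-translation sketch; the bilinear sampling and the fixed gradient taken from the first image are implementation choices made here, not prescribed by the patent:

```python
import numpy as np

def track_window(I_t, I_t1, iters=10):
    """Estimate the shift (dx, dy) such that I_t(x, y) ~= I_t1(x+dx, y+dy)
    by iteratively solving the 2x2 normal equations of the linearized
    sum-of-squared-differences error e(w)."""
    I0 = I_t.astype(float)
    I1 = I_t1.astype(float)
    gy, gx = np.gradient(I0)                     # image gradients of I_t
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx = dy = 0.0
    for _ in range(iters):
        # bilinear sample I_{t+1} at the currently estimated shift
        xq = np.clip(xs + dx, 0, w - 1)
        yq = np.clip(ys + dy, 0, h - 1)
        x0 = np.floor(xq).astype(int); y0 = np.floor(yq).astype(int)
        x1 = np.minimum(x0 + 1, w - 1); y1 = np.minimum(y0 + 1, h - 1)
        wx = xq - x0; wy = yq - y0
        warped = (I1[y0, x0] * (1 - wx) * (1 - wy) + I1[y0, x1] * wx * (1 - wy)
                  + I1[y1, x0] * (1 - wx) * wy + I1[y1, x1] * wx * wy)
        err = I0 - warped
        b = np.array([np.sum(gx * err), np.sum(gy * err)])
        step = np.linalg.solve(A, b)             # first-order Taylor step
        dx += step[0]; dy += step[1]
    return dx, dy
```

For a smooth pattern shifted by a small subpixel amount, a few iterations recover the shift to well under a tenth of a pixel.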
S34: as shown in Fig. 4, randomly select from the matched feature point pairs of the two images the minimum number of data required to compute the target model (four pairs for the homography, eight pairs for the fundamental matrix), and generate a model from the selected minimal data; the model is the fundamental matrix F of the two images estimated by the eight-point method, or the homography matrix H computed from four point pairs. Apply this model to the data in the data set: data that fit the current model are recorded as inliers of the current iteration, otherwise as outliers. The quality of the current model is evaluated by the proportion of inliers, and the model with the highest inlier ratio is recorded. If the iteration termination condition has been met, terminate; otherwise return to the beginning of this step. The iteration termination condition consists of two parts: the first part is reaching the maximum number of iterations; the second part is a number of iterations k that is updated with each iteration according to:
k = log(1 − p) / log(1 − wᵐ),
where p is the confidence level, w is the inlier ratio under the current best model, and m is the minimum number of data needed to fit the model. The iteration stops when the number of iterations exceeds either k or the maximum number of iterations;
The fundamental matrix F of the two images is estimated by the eight-point method and the homography matrix H is computed from four point pairs: when estimating the fundamental matrix, the minimal subset is 8 randomly selected point pairs, and when estimating the homography matrix, the minimal subset is 4 randomly selected point pairs.
The estimated fundamental matrix F and homography matrix H are evaluated simultaneously. For an estimated model M, its score over the feature point pair set P can be expressed as sM:
sM = Σ(pi, pi')∈P ρM(d²(pi, pi', M)),
where (pi, pi') denotes a pair of successfully tracked feature points, d²(p, p', M) denotes the symmetric transfer error of the two points under model M, and ρM(d²(p, p', M)) denotes a chi-square test on d²(p, p', M) to confirm whether the pair is an inlier pair; the final number of inlier pairs is the final score of the model. In this way the score sF of the fundamental matrix and the score sH of the homography matrix are computed, and the ratio
R = sH / (sH + sF)
is evaluated: if R > 0.4, the homography matrix H is selected, otherwise the fundamental matrix F is selected.
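The adaptive iteration count and the model-selection rule above can be sketched as follows. The ratio R = sH / (sH + sF) is a reconstruction consistent with the stated 0.4 threshold (the original formula is not legible in this text), and both function names are hypothetical:

```python
import math

def ransac_iterations(p, w, m):
    """Number of RANSAC iterations k so that, with confidence p, at least
    one minimal sample of m correspondences is all-inlier, given inlier
    ratio w: k = log(1 - p) / log(1 - w**m)."""
    return math.log(1 - p) / math.log(1 - w ** m)

def choose_model(s_H, s_F):
    """Model selection as stated in the text: R = s_H / (s_H + s_F);
    pick the homography H when R > 0.4, else the fundamental matrix F."""
    R = s_H / (s_H + s_F)
    return 'H' if R > 0.4 else 'F'
```

For example, with p = 0.99, an inlier ratio of 0.5 and the eight-point minimal sample (m = 8), roughly 1200 iterations are required, while a higher inlier ratio sharply reduces that number.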
In the present embodiment, step S4 specifically includes:
S41: extract the line segment features of the scene image with the LSD algorithm (for the specific algorithm see the paper "LSD: A Line Segment Detector"), uniformly sample the extracted line segment features, and convert them into point features;
S42: perform feature point extraction on each newly arrived image frame. In sparse image alignment, establish a photometric error from the feature points on the current frame image and the previous frame image to obtain a coarse initial camera pose. In feature alignment, project all map points in the map onto the current frame using the pose obtained in the previous step, establish a photometric error using the gray-level difference between the projected point and the map point on the reference frame, and optimize the position of the projected point on the image. Finally, use the optimized position and the projected point to establish a reprojection error over the three-dimensional points and the camera pose, optimize the camera pose and the three-dimensional points, and obtain the final camera pose of the current frame;
Sparse image alignment refers to the following: assume the current moment is k with current image frame Ik, and the previous moment is k−1 with previous image frame Ik−1; what is solved here is the relative pose of the two adjacent frames. Let pi be any feature point on image Ik−1, let Pi be the point obtained by back-projecting it into three-dimensional space in its camera coordinate system, and let pi' be the projection of Pi onto Ik; then the photometric error is:
e = Ik−1(pi) − Ik(pi').
For a line segment feature, uniform sampling is carried out on the segment, and the line segment feature on the previous frame image is back-projected into three-dimensional space and reprojected onto the current frame;
The photometric error of a line segment feature is the sum of the photometric errors of its sampled points. Let the camera intrinsic matrix be K and the camera pose of the current frame relative to the previous frame be ξ; then:
pi' = K exp(ξ^) Pi.
The increment of the camera pose is solved iteratively so that the value of the whole cost function keeps decreasing until the condition for stopping the iteration is satisfied. Left-multiplying ξ by a small perturbation δξ gives:
e(δξ) = Ik−1(pi) − Ik(K exp(δξ^) exp(ξ^) Pi).
Performing a Taylor expansion of the above formula to first order and simplifying gives:
e(δξ) ≈ e − (∂Ik/∂u)·(∂u/∂δξ)·δξ,
where ∂Ik/∂u is the image gradient at the point pi', and ∂u/∂δξ is the derivative of the projection equation with respect to the camera pose; by the chain rule:
∂u/∂δξ = (∂u/∂Pi')·(∂Pi'/∂δξ),
where Pi' = exp(ξ^) Pi is the three-dimensional point in the camera coordinate system. Denoting its coordinates by (X', Y', Z'), then:
∂u/∂Pi' = [ fx/Z'  0  −fx·X'/Z'² ;  0  fy/Z'  −fy·Y'/Z'² ],
and ∂Pi'/∂δξ is the derivative of Pi' with respect to the camera pose; according to the perturbation model of the Lie algebra:
∂Pi'/∂δξ = [ I  −Pi'^ ].
Therefore:
∂u/∂δξ = (∂u/∂Pi')·[ I  −Pi'^ ].
Thus, in the sparse image alignment stage, the Jacobian matrix used to build the cost function from the photometric error is:
J = −(∂Ik/∂u)·(∂u/∂Pi')·[ I  −Pi'^ ].
Iterating with this formula continuously reduces the total error, and finally the relative pose of the camera between the previous moment and the current moment is obtained; accumulating these relative poses yields the pose of the current camera in the world coordinate system;
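The geometric part of the Jacobian above — the pinhole projection derivative chained with [I | −P'^] — can be sketched numerically as follows. The sign and ordering conventions (translation components first, left-multiplied perturbation) are one common choice and are assumptions here, as is the function name:

```python
import numpy as np

def projection_jacobian(P_cam, fx, fy):
    """2x6 Jacobian of the pixel projection with respect to a left-multiplied
    se(3) perturbation of the camera pose, evaluated at the point
    P' = (X', Y', Z') expressed in camera coordinates."""
    X, Y, Z = P_cam
    Zi = 1.0 / Z
    # d(u)/d(P'): derivative of the pinhole projection
    J_proj = np.array([[fx * Zi, 0.0, -fx * X * Zi * Zi],
                       [0.0, fy * Zi, -fy * Y * Zi * Zi]])
    # d(P')/d(delta_xi) = [I | -P'^]  (translation part, then rotation part)
    skew = np.array([[0.0, -Z, Y],
                     [Z, 0.0, -X],
                     [-Y, X, 0.0]])
    J_pose = np.hstack([np.eye(3), -skew])
    return J_proj @ J_pose  # chain rule, 2x6
```

Multiplying this 2×6 matrix by the negative image gradient at pi' gives one row block of the Gauss-Newton system iterated in the sparse alignment stage.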
Feature alignment includes: project all three-dimensional points in the map that are visible in the current image onto the image using the camera pose obtained by sparse image alignment, obtaining a series of projected points. For a three-dimensional point Pi(X, Y, Z) in the map, the key frame that first observed it is called its reference key frame; denote the image of the reference key frame by Iref and the point's position in the reference frame by pi. The three-dimensional point is visible on the image at moment k with projected point ui, and the photometric error
e = Iref(pi) − Ik(ui)
is used; the parameter optimized in feature alignment is the position ui of the projected point.
A line segment is represented in three-dimensional space by its two endpoints; in the feature alignment stage, the endpoints of a three-dimensional line segment are each projected into the current image, and aligning the positions of its endpoints completes the alignment of the line segment feature.
Finally, in the pose and structure optimization stage, a reprojection error is established from the distance between the position optimized in the feature alignment stage and the unoptimized position of each projected point, and the camera pose obtained in the sparse alignment stage and the three-dimensional point positions in the map are optimized. For any three-dimensional point Pi in the map with projected point ui, assume its position after optimization is ui'; then:
e = ui' − ui.
For a line segment feature, the reprojection error is composed of its two endpoints. By the perturbation model of the Lie algebra:
∂e/∂δξ = (∂e/∂Pi')·(∂Pi'/∂δξ),
where Pi' denotes the three-dimensional point Pi expressed in the camera coordinate system; the above formula actually expresses the derivative of the projection equation with respect to the camera pose. When optimizing the three-dimensional points, it is necessary to know how e changes with the variation of the three-dimensional point; analyzing the derivative of e with respect to the three-dimensional point Pi, by the chain rule of derivation:
∂e/∂Pi = (∂e/∂Pi')·(∂Pi'/∂Pi).
Assuming the current camera pose is R, t, then by the projection equation
Pi' = R Pi + t,
the derivative of Pi' with respect to Pi is simply the rotation matrix R, and the Jacobian matrix of the reprojection error with respect to the three-dimensional point is:
∂e/∂Pi = (∂e/∂Pi')·R.
Combining the Jacobian matrix of the reprojection error with respect to the pose and the Jacobian matrix with respect to the three-dimensional points gives the Jacobian matrix iterated during the pose and structure optimization.
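The point Jacobian ∂e/∂Pi = (∂e/∂Pi')·R derived above can be sketched as follows; this is a hypothetical helper for illustration, not the patent's implementation:

```python
import numpy as np

def point_jacobian(P_world, R, t, fx, fy):
    """2x3 Jacobian of the reprojection error with respect to the 3D point P:
    chain rule gives d(u)/d(P) = d(u)/d(P') * d(P')/d(P), and since
    P' = R P + t, the inner derivative d(P')/d(P) is simply R."""
    P_cam = R @ np.asarray(P_world) + t
    X, Y, Z = P_cam
    # derivative of the pinhole projection with respect to P'
    J_proj = np.array([[fx / Z, 0.0, -fx * X / Z ** 2],
                       [0.0, fy / Z, -fy * Y / Z ** 2]])
    return J_proj @ R
```

Stacking this 2×3 block next to the 2×6 pose block for every observation yields the full Jacobian iterated in the pose and structure optimization.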
In the present embodiment, step S5 specifically includes:
Key-frame selection starts only when one of the following three situations is met:
1. The map-building thread is relatively idle. This covers two aspects: the queue of key frames to be processed is empty, and at the same time the number of three-dimensional points to be updated in the map-building thread is small; adding a key frame at this point is considered in order to improve the consistency of the map;
2. The visibility in the current image frame of the feature points on the key frame nearest to the current frame is low. The key frame nearest to the current frame and the current frame should have a large scene overlap; when this situation occurs, it is very likely that the camera is entering a new region, and a key frame needs to be added immediately to guarantee that camera tracking proceeds correctly;
3. The displacement component of the relative pose between the current image and the previous frame image exceeds a certain proportion of the mean scene depth;
An image frame must also meet the following conditions before it may be selected as a key frame:
1. Its distance to the nearest key frame lies within a certain range (0.5-2; this is set manually and depends on the data used, so there is no fixed standard);
2. In the feature alignment stage, the number of three-dimensional points whose feature points can be re-projected onto the current image is not less than a certain value (40%) of that of the previous frame;
The key frame extraction algorithm is:
Input: the current image frame and the previous image frame;
If the key-frame queue is empty and the number of unconverged three-dimensional points is < 100, then needKfs = true and key-frame selection starts; project all three-dimensional points on the key frame onto the current image frame; if (three-dimensional points visible in the current frame) / (all three-dimensional points on the key frame) < 0.5, then needKfs = true;
If (displacement of the current frame) / (mean scene depth) > 0.1, then needKfs = true;
If needKfs = true, compute the distance between the current frame and the key frame;
If the distance between the current frame and the key frame is less than the maximum distance threshold and greater than the minimum distance threshold, project all points in the map onto the current image frame, with visible quantity m, and project all points in the map onto the previous image frame, with visible quantity n;
If m/n >= 0.4, the current frame is a key frame.
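The selection logic above can be condensed into one function. All argument names and the packaging are hypothetical; the thresholds (100, 0.5, 0.1, 0.4) are the ones stated in the text:

```python
def need_keyframe(queue_empty, n_unconverged, visible_ratio,
                  displacement, mean_depth,
                  dist_to_kf, min_dist, max_dist, m, n):
    """Sketch of the key-frame decision described in the text."""
    need = False
    if queue_empty and n_unconverged < 100:   # mapping thread is idle
        need = True
    if visible_ratio < 0.5:                   # low overlap with nearest key frame
        need = True
    if displacement / mean_depth > 0.1:       # moved far relative to scene depth
        need = True
    if not need:
        return False
    # distance to the nearest key frame must lie between the two thresholds
    if not (min_dist < dist_to_kf < max_dist):
        return False
    return m / n >= 0.4                       # enough map points still visible

accept = need_keyframe(False, 500, 0.4, 0.05, 2.0, 1.0, 0.5, 2.0, 45, 100)
```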
In the present embodiment, step S6 specifically includes:
Depth filtering is performed on the endpoints of the line-segment features. When the depth filter obtains a key frame, the feature points on the key frame are initialized as seed points and placed into the seed queue; whenever the depth filter obtains an ordinary image frame, all seed points in the seed queue are updated using that image frame, updating each seed point's depth value and depth uncertainty value. A seed point's position on the new image is uncertain, but it lies on a straight line (the epipolar line);
A search is carried out along the epipolar line on the current image corresponding to the seed point's feature point u in the reference frame, obtaining the corresponding point u' (epipolar matching); during the search, the pixels in the neighborhood of the feature point are taken as a template for template matching;
After the match point is obtained, the depth is computed by triangulation; the original depth estimate and its uncertainty are updated using a Bayesian probability model. If the depth information converges after the update, it is added to the map; if the depth information diverges after the update, the seed point is deleted; otherwise the next update is awaited;
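The Bayesian update step can be illustrated with a simplified Gaussian fusion. This is a reduced sketch only: the model used in practice (e.g., REMODE, cited below) additionally tracks an inlier probability to reject outlier matches:

```python
def fuse_depth(mu, sigma2, obs_mu, obs_sigma2):
    """Fuse a seed's current depth estimate (mu, sigma2) with a new
    triangulated observation (obs_mu, obs_sigma2) as a product of
    Gaussians. Convergence can then be declared when sigma2 falls
    below a threshold, divergence handled by the full outlier model."""
    new_sigma2 = sigma2 * obs_sigma2 / (sigma2 + obs_sigma2)
    new_mu = (obs_sigma2 * mu + sigma2 * obs_mu) / (sigma2 + obs_sigma2)
    return new_mu, new_sigma2

mu, s2 = fuse_depth(2.0, 1.0, 4.0, 1.0)   # two equally certain estimates
```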
Depth pre-update: a seed point is created from the new key frame, a pre-update is performed on it using the several key frames nearest to the new key frame, and it is then added into the seed-point queue. The specific depth pre-update steps are:
Obtain the new key frame;
Obtain the mean depth of the new key frame;
Find the n key frames nearest to the new key frame;
For each feature point ft in the new key frame:
Create a seed point from feature point ft, with its depth initialized to the mean depth;
For each key frame kf among the n key frames:
Carry out an epipolar search against the new key frame;
Compute the depth and uncertainty of feature point ft by triangulation, and update the seed point based on the Bayesian probability model; the specific calculation can use the method described in "REMODE: Probabilistic, monocular dense reconstruction in real time";
Add the seed point to the seed-point queue;
Output: the new seed-point queue;
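The pre-update steps above can be sketched as a loop; `triangulate` and `fuse` stand in for the epipolar search plus triangulation and the Bayesian update described above, and the map representation is hypothetical:

```python
def pre_update_seeds(new_kf, nearby_kfs, triangulate, fuse):
    """One seed per feature of the new key frame, initialised at the
    frame's mean depth, then refined against each of the n nearest
    key frames before entering the seed queue."""
    seeds = []
    for ft in new_kf["features"]:
        seed = {"ft": ft, "mu": new_kf["mean_depth"], "sigma2": 1.0}
        for kf in nearby_kfs:
            obs_mu, obs_s2 = triangulate(new_kf, kf, ft)
            seed["mu"], seed["sigma2"] = fuse(seed["mu"], seed["sigma2"],
                                              obs_mu, obs_s2)
        seeds.append(seed)
    return seeds

kf = {"features": [0, 1], "mean_depth": 2.0}
seeds = pre_update_seeds(
    kf, [None],
    lambda a, b, f: (4.0, 1.0),                               # stub triangulation
    lambda m, s, om, os: ((os*m + s*om)/(s+os), s*os/(s+os)))  # Gaussian fusion
```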
Each time a key frame is created, all three-dimensional points in the map are projected onto the new key frame; if a three-dimensional point is visible on the new key frame, its reference frame is changed to the new key frame. All three-dimensional points in the image are projected into the newest key frame, a photometric error is established from the gray value of each projection point in the latest image and the gray value of the corresponding three-dimensional point in the reference frame, and the positions of the projection points in the latest image frame are optimized to construct the map.
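The reference-frame reassignment can be sketched as follows, with a hypothetical map representation; a point is re-referenced to the new key frame when its projection lands inside the image:

```python
import numpy as np

def reassign_reference(points_3d, K, R, t, width, height, new_kf_id, refs):
    """Project every map point into the new key frame; points that land
    inside the image bounds get the new key frame as reference frame."""
    P_cam = (R @ points_3d.T).T + t          # into the new key frame's frame
    for i, (X, Y, Z) in enumerate(P_cam):
        if Z <= 0:
            continue                          # behind the camera
        u = K[0, 0] * X / Z + K[0, 2]
        v = K[1, 1] * Y / Z + K[1, 2]
        if 0 <= u < width and 0 <= v < height:
            refs[i] = new_kf_id               # visible: re-reference the point
    return refs

K = np.array([[100.0, 0, 64], [0, 100.0, 48], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 2.0], [10.0, 0.0, 2.0]])  # second point falls off-image
refs = reassign_reference(pts, K, np.eye(3), np.zeros(3), 128, 96, 7, [0, 0])
```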
It should be noted that in the present invention the fundamental matrix F, the homography matrix H and the rotation matrix R are calculated with existing calculation methods, and the camera intrinsic matrix K is obtained with acquisition methods commonly used in the art.
The present invention also provides a monocular visual odometry system based on the semi-direct method, comprising a monocular vision camera and a processor; the monocular vision camera is carried by an unmanned aerial vehicle or a vehicle, and it acquires images and transmits them to the processor; the processor performs processing using the method of the present invention and constructs a map.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, replacements and variants can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the invention is defined by the claims and their equivalents.
Claims (8)
1. A monocular visual odometry localization method based on the semi-direct method, characterized by comprising the steps of:
S1, entering the target scene with an unmanned-aerial-vehicle-mounted or vehicle-mounted monocular camera and recording scene image data;
S2, pre-processing the scene images obtained in step S1: performing distortion correction and carrying out feature point detection on the corrected images;
S3, system initialization: tracking the feature points obtained in step S2, taking the tracked feature points as the data set of the robust estimation algorithm, estimating the fundamental matrix F of the two images with the 8-point method and calculating the homography matrix H with four point pairs, and selecting one of the fundamental matrix F and the homography matrix H for subsequent calculation according to the number of inliers of the two matrices;
S4, tracking and localizing the camera: extracting the line-segment features in the scene images and estimating the camera pose with the semi-direct method; first, a photometric error is established from the feature points on the current frame image and the previous frame image, and sparse image alignment yields a rough initial camera pose; then, feature alignment: all map points in the map are projected onto the current frame using the camera pose obtained in the previous step, a photometric error is established from the gray difference between each projection point and its map point on the reference frame, and the positions of the projection points on the image are optimized; finally, a re-projection error about the three-dimensional points and the camera pose is established from the optimized positions and the projection points, the camera pose and the three-dimensional points are optimized, and the final camera pose of the current frame is obtained;
S5, selecting the key frames among the images shot by the camera;
S6, map building: performing depth filtering on the endpoints of the line-segment features in the key frames; a seed point is created from the newly obtained key frame, a pre-update is performed on it using the several key frames nearest to the newly obtained key frame, and it is then added into the seed-point queue; if the seed-point queue converges, a key frame is created and all three-dimensional points in the map are projected onto the new key frame; if a three-dimensional point is visible on the new key frame, its reference frame is changed to the new key frame; if the seed-point queue does not converge, key frames continue to be obtained and the seed-point queue is updated.
2. The monocular visual odometry localization method based on the semi-direct method according to claim 1, characterized in that the distortion correction method is:
solve the physical coordinates (x', y') on the image plane from the given pixel coordinates:
the physical coordinates after distortion are:
r^2 = x'^2 + y'^2
convert the physical coordinates after distortion into pixel coordinates on the image:
wherein (k_1, k_2, k_3, p_1, p_2) are the distortion coefficients of the camera, (u, v) are the coordinates in the undistorted image, (u_d, v_d) are the coordinates in the distorted image corresponding to (u, v), (c_x, c_y) is the position of the camera's optical center, and (f_x, f_y) are the focal lengths in the camera's x and y directions.
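The claim's distortion equations are rendered as images in the original; the standard Brown radial-tangential model consistent with the listed coefficients (k1, k2, k3, p1, p2) would read as follows. This is a sketch of the usual form, not necessarily the patent's exact formula:

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Map ideal normalized coordinates (x', y'), with r^2 = x'^2 + y'^2,
    to distorted coordinates under the radial-tangential model."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

xd, yd = distort(0.1, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0)  # zero coefficients: identity
```

Pixel coordinates then follow from u_d = f_x * xd + c_x, v_d = f_y * yd + c_y.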
3. The monocular visual odometry localization method based on the semi-direct method according to claim 1, characterized in that the feature point detection method is:
take a pixel in the image as the center and establish a circle with radius j, and judge whether the pixel is a feature point from the differences between the pixels on the circle and the center point, where j is a positive integer greater than 1; if the differences between the gray value of the center point p and the gray values of any k consecutive pixels on the circle are all greater than a threshold, or all less than it, then the point p is a feature point, where the value of k is j*j.
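The contiguity test of the claim (FAST-style) can be sketched as follows; the function and its arguments are hypothetical, and `ring` stands for the gray values sampled on the circle of radius j around the candidate pixel:

```python
def is_corner(center, ring, k, thresh):
    """True if some run of k consecutive ring pixels is entirely brighter
    than center + thresh or entirely darker than center - thresh, with the
    ring wrapped so runs may cross the start of the list."""
    n = len(ring)
    ext = ring + ring[:k - 1]                  # wrap around the circle
    for start in range(n):
        seg = ext[start:start + k]
        if all(v > center + thresh for v in seg) or \
           all(v < center - thresh for v in seg):
            return True
    return False

ring = [50] * 9 + [200] * 7                    # 7 consecutive bright pixels
bright = is_corner(100, ring, 7, 20)
```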
4. The monocular visual odometry localization method based on the semi-direct method according to claim 1, characterized in that step S3 specifically includes:
S31, judging the number of feature points in the picture: if the number of feature points exceeds a threshold, judge whether a reference frame already exists; if there is a reference frame, execute step S32; if there is no reference frame, save the picture as the reference frame and then execute step S32; when the number of feature points does not exceed the threshold, wait for the next scene picture to be input;
S32, tracking the feature points: for the gray images I_t and I_{t+1} obtained at two consecutive moments t and t+1, suppose the positions in the two images of the pixels in a window w to be tracked in I_t satisfy:
I_t(x, y) = I_{t+1}(x + δx, y + δy),
the matching position of the window to be tracked is found by minimizing the sum of squared gray differences of the window between the images:
wherein e(w) is the matching cost of the window to be tracked;
expand the above formula by a Taylor series to the first-order term and set the partial derivatives to zero to obtain the minimum, solving for the motion (δx, δy) of the window to be tracked between the images;
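One Gauss-Newton step of the window tracker described in S32 can be sketched as follows (a single linearized step with no pyramid; a real tracker iterates and handles degenerate gradients):

```python
import numpy as np

def klt_translation(I0, I1):
    """Linearise I1(x + d) ≈ I1(x) + ∇I1·d and solve the 2x2 normal
    equations for the shift (δx, δy) minimising the SSD over the window."""
    gy, gx = np.gradient(I1.astype(np.float64))   # image gradients
    e = (I0 - I1).astype(np.float64)              # gray-difference residual
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * e), np.sum(gy * e)])
    return np.linalg.solve(A, b)                  # (δx, δy)

y, x = np.mgrid[0:5, 0:5]
I1 = (x * y).astype(np.float64)
I0 = ((x + 1) * y).astype(np.float64)   # I1 shifted by δx = 1
d = klt_translation(I0, I1)
```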
S33, judging the tracking effect: if it meets the requirements, execute step S34; otherwise judge whether the frame is more than 50 frames apart from the reference frame; if it is, delete the reference frame and re-input a picture; if it is not more than 50 frames apart, re-input a picture;
S34, randomly select from the matched feature-point pairs of the two images the minimum number of data required to compute the target model, and generate the model from the selected minimum data, the model being the fundamental matrix F of the two images estimated with the 8-point method or the homography matrix H calculated with four point pairs; fit the data in the data set with this model; data that fit the current model are recorded as inliers of the current iteration, and otherwise as outliers; evaluate the quality of the current model by the proportion of inliers and record the model with the highest inlier proportion; terminate if the iteration termination condition has been met, and otherwise return to the beginning of this step. The iteration termination condition consists of two parts: the first part is reaching the maximum number of iterations; the second part is setting an iteration count k, updated with each iteration by the following formula:
wherein p is the confidence, w is the proportion of inliers under the current optimal model, and m is the minimum number of data required to fit the model; iteration stops when the number of iterations is greater than either k or the maximum number of iterations;
the fundamental matrix F of the two images is estimated with the 8-point method and the homography matrix H is calculated with four point pairs; when estimating the fundamental matrix, the minimal subset is 8 randomly selected point pairs, and when estimating the homography matrix, the minimal subset is 4 randomly selected point pairs;
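The iteration-count formula above appears as an image in the original; given the stated variables, it is presumably the standard RANSAC bound k = log(1 - p) / log(1 - w^m). As a sketch:

```python
import math

def ransac_iterations(p, w, m):
    """Standard RANSAC bound on the required iterations: p is the
    confidence, w the inlier ratio under the current best model, m the
    minimal sample size (8 for F, 4 for H)."""
    return math.log(1.0 - p) / math.log(1.0 - w**m)

k_F = ransac_iterations(0.99, 0.5, 8)   # fundamental matrix, 8-point samples
k_H = ransac_iterations(0.99, 0.5, 4)   # homography, 4-point samples
```

Because w^m shrinks quickly with m, the 8-point fundamental-matrix estimate needs far more iterations than the 4-point homography at the same inlier ratio.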
the estimated fundamental matrix F and homography matrix H are evaluated simultaneously: for an estimated model M, its score over the feature-point-pair set P is denoted s_M:
wherein (p_i, p'_i) denotes a pair of successfully tracked feature points, d^2(p, p', M) denotes the symmetric transfer error of the two points under model M, and ρ_M(d^2(p, p', M)) denotes a chi-square test performed on d^2(p, p', M) to confirm whether the pair of points is an inlier pair; the number of inlier pairs is the final score of the model. The score s_F of the fundamental matrix and the score s_H of the homography matrix are calculated in this way:
if R > 0.4, the homography matrix H is selected; otherwise the fundamental matrix F is selected.
5. The monocular visual odometry localization method based on the semi-direct method according to claim 1, characterized in that step S4 specifically includes:
S41, extracting the line-segment features of the scene image with the LSD algorithm, uniformly sampling the extracted line-segment features and converting them into point features;
S42, performing feature point extraction on each newly arrived image frame; sparse image alignment establishes a photometric error from the feature points on the current frame image and the previous frame image to obtain a rough initial camera pose; feature alignment projects all map points in the map onto the current frame using the camera pose obtained in the previous step, establishes a photometric error from the gray difference between each projection point and its map point on the reference frame, and optimizes the positions of the projection points on the image; finally, a re-projection error about the three-dimensional points and the camera pose is established from the optimized positions and the projection points, the camera pose and the three-dimensional points are optimized, and the final camera pose of the current frame is obtained;
wherein sparse image alignment refers to the following:
suppose the current moment is k with current image frame I_k, and the previous moment is k-1 with previous image frame I_{k-1}; what is solved at this point is the relative pose of the two adjacent frames. Let p_i be any feature point on image I_{k-1}, let the point obtained by back-projecting it into three-dimensional space under its camera coordinate system be P_i, and let the projection of P_i onto I_k be p'_i; then the photometric error is:
E = I_{k-1}(p_i) - I_k(p'_i)
for line-segment features, uniform sampling is carried out on the segment, and the line-segment features on the previous frame image are back-projected into three-dimensional space and re-projected onto the current frame;
the photometric error of a line-segment feature is the sum of the photometric errors of its sampled points; if the camera intrinsic matrix is K and the camera pose of the current frame relative to the previous frame is ξ, then:
p'_i = K exp(ξ^) P_i,
an increment of the camera pose is solved iteratively each time so that the value of the whole cost function decreases until the iteration stop condition is met; left-multiply ξ by a small perturbation δξ:
expanding the above formula by a Taylor series to the first order and simplifying gives:
in the above formula, the first factor is the image gradient at the point p'_i and the second is the derivative of the projection equation with respect to the camera pose; by the chain rule:
wherein P'_i = exp(ξ^) P_i is the three-dimensional point in the camera coordinate system, whose coordinates are written (X', Y', Z'); then:
and the derivative of P'_i with respect to the camera pose follows from the formula:
therefore:
thus, in the sparse image alignment stage, the Jacobian matrix used for the photometric-error cost function is:
iterating with this formula continuously reduces the total error, finally yielding the relative pose of the camera between the previous moment and the current moment, which is then accumulated to obtain the pose of the current camera in the world coordinate system;
feature alignment comprises: projecting all three-dimensional points in the map that are visible in the current image onto the image using the camera pose obtained by sparse image alignment, obtaining a series of projection points; for a three-dimensional point P_i(X, Y, Z) in the map, the key frame that first observed it is called its reference key frame, the image of the reference key frame is written I_ref, and its position in the reference frame is p_i; the three-dimensional point is visible on the image at moment k and its projection point is u_i, and the photometric error used is:
E = I_ref(p_i) - I_k(u_i),
the parameter optimized by feature alignment is the position u_i of the projection point;
a line segment is represented in three-dimensional space by its two endpoints; in the feature alignment stage, the endpoints of the three-dimensional line segment are projected respectively into the current image, and aligning the positions of the endpoints then completes the alignment of the line-segment feature;
finally, in the pose and three-dimensional structure optimization stage, a re-projection error is established from the distance between the position of each projection point after feature-alignment optimization and its unoptimized position, and the camera pose obtained in the sparse alignment stage and the three-dimensional point positions in the map are optimized; suppose any three-dimensional point p_i in the map has projection point u_i, and suppose its position after optimization is u'_i; then:
E = u'_i - u_i,
for a line-segment feature, the re-projection error is composed of the errors of its two endpoints; by the perturbation model of the Lie algebra:
wherein P'_i denotes the three-dimensional point P_i expressed in the camera coordinate system, and the formula above actually expresses the derivative of the projection equation with respect to the camera pose; when optimizing the three-dimensional points, it is necessary to know how e changes as a three-dimensional point varies, so the derivative of e with respect to the three-dimensional point P'_i is analyzed by the chain rule of differentiation:
suppose the current camera pose is R, t; then by the projection equation:
P'_i = R P_i + t,
the derivative of P'_i with respect to P_i is simply the rotation matrix R, and the Jacobian matrix of the re-projection error with respect to the three-dimensional point is:
combining the Jacobian matrix of the re-projection error with respect to the pose and the Jacobian matrix with respect to the three-dimensional point gives the Jacobian matrix used for iteration during pose and structure optimization.
6. The monocular visual odometry localization method based on the semi-direct method according to claim 1, characterized in that step S5 specifically includes:
key-frame selection starts only when one of the following three situations is met:
1. the map-building thread is relatively idle; this covers two aspects: the queue of key frames to be processed is empty, and at the same time the number of three-dimensional points to be updated in the map-building thread is small; adding a key frame at this point is considered in order to improve the density of the map;
2. the visibility in the current image frame of the feature points on the key frame nearest to the current frame is low; the key frame nearest to the current frame and the current frame should have a large scene overlap; when this situation occurs, it is very likely that the camera is entering a new region, and a key frame needs to be added immediately to guarantee that camera tracking proceeds correctly;
3. the displacement component of the relative pose between the current image and the previous frame image exceeds a certain proportion of the mean scene depth;
an image frame must also meet the following conditions before it may be selected as a key frame:
1. its distance to the nearest key frame lies within a certain range;
2. in the feature alignment stage, the number of three-dimensional points whose feature points can be re-projected onto the current image is not less than a certain value of that of the previous frame;
the key frame extraction algorithm is:
input: the current image frame and the previous image frame;
if the key-frame queue is empty and the number of unconverged three-dimensional points is < 100, then needKfs = true and key-frame selection starts; find the key frame nearest to the current frame and project all three-dimensional points on the key frame onto the current image frame; if (three-dimensional points visible in the current frame) / (all three-dimensional points on the key frame) < 0.5, then needKfs = true;
if (displacement of the current frame) / (mean scene depth) > 0.1, then needKfs = true;
if needKfs = true, compute the distance between the current frame and the key frame;
if the distance between the current frame and the key frame is less than the maximum distance threshold and greater than the minimum distance threshold, project all points in the map onto the current image frame, with visible quantity m, and project all points in the map onto the previous image frame, with visible quantity n;
if m/n >= 0.4, the current frame is a key frame.
7. The monocular visual odometry localization method based on the semi-direct method according to claim 1, characterized in that step S6 specifically includes:
depth filtering is performed on the endpoints of the line-segment features; when the depth filter obtains a key frame, the feature points on the key frame are initialized as seed points and placed into the seed queue; whenever the depth filter obtains an ordinary image frame, all seed points of the seed queue are updated using that image frame; a seed point's position on the new image is uncertain, but it lies on a straight line;
a search is carried out along the epipolar line on the current image corresponding to the seed point's feature point u in the reference frame, obtaining the corresponding point u'; during the search, the pixels in the neighborhood of the feature point are taken as a template for template matching;
after the match point is obtained, the depth is computed by triangulation; the original depth estimate and its uncertainty are updated using a Bayesian probability model; if the depth information converges after the update, it is added to the map; if the depth information diverges after the update, the seed point is deleted; otherwise the next update is awaited;
depth pre-update: a seed point is created from the new key frame, a pre-update is performed on it using the several key frames nearest to the new key frame, and it is then added into the seed-point queue; the specific depth pre-update steps are:
obtain the new key frame;
obtain the mean depth of the new key frame;
find the n key frames nearest to the new key frame;
for each feature point ft in the new key frame:
create a seed point from feature point ft, with its depth initialized to the mean depth;
for each key frame kf among the n key frames:
carry out an epipolar search against the new key frame;
compute the depth and uncertainty of feature point ft by triangulation;
update the seed point based on the Bayesian probability model;
add the seed point to the seed-point queue;
output: the new seed-point queue;
each time a key frame is created, all three-dimensional points in the map are projected onto the new key frame; if a three-dimensional point is visible on the new key frame, its reference frame is changed to the new key frame; all three-dimensional points in the image are projected into the newest key frame, a photometric error is established from the gray value of each projection point in the latest image and the gray value of the corresponding three-dimensional point in the reference frame, and the positions of the projection points in the latest image frame are optimized to construct the map.
8. A monocular visual odometry positioning system based on the semi-direct method, characterized by comprising a monocular vision camera and a processor; the monocular vision camera is carried by an unmanned aerial vehicle or a vehicle; the monocular vision camera acquires images and transmits them to the processor, and the processor performs processing using the method of any one of claims 1-7 and constructs a map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810512342.XA CN108986037B (en) | 2018-05-25 | 2018-05-25 | Monocular vision odometer positioning method and positioning system based on semi-direct method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108986037A true CN108986037A (en) | 2018-12-11 |
CN108986037B CN108986037B (en) | 2020-06-16 |
Family
ID=64542055
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810512342.XA Active CN108986037B (en) | 2018-05-25 | 2018-05-25 | Monocular vision odometer positioning method and positioning system based on semi-direct method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108986037B (en) |
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109443320A (en) * | 2019-01-10 | 2019-03-08 | 轻客小觅智能科技(北京)有限公司 | Binocular vision speedometer and measurement method based on direct method and line feature |
CN109727269A (en) * | 2019-03-29 | 2019-05-07 | 中国人民解放军国防科技大学 | Monocular vision and road map based matching positioning method |
CN109752008A (en) * | 2019-03-05 | 2019-05-14 | 长安大学 | Intelligent vehicle multi-mode co-located system, method and intelligent vehicle |
CN109816726A (en) * | 2019-01-29 | 2019-05-28 | 京东方科技集团股份有限公司 | A kind of visual odometry map updating method and system based on depth filter |
CN110189390A (en) * | 2019-04-09 | 2019-08-30 | 南京航空航天大学 | A kind of monocular vision SLAM method and system |
CN110335316A (en) * | 2019-06-28 | 2019-10-15 | Oppo广东移动通信有限公司 | Method, apparatus, medium and electronic equipment are determined based on the pose of depth information |
CN110363821A (en) * | 2019-07-12 | 2019-10-22 | 顺丰科技有限公司 | Acquisition methods, device, camera and the storage medium at monocular camera installation deviation angle |
CN110473258A (en) * | 2019-07-24 | 2019-11-19 | 西北工业大学 | Monocular SLAM system initialization algorithm based on dotted line Unified frame |
CN110514212A (en) * | 2019-07-26 | 2019-11-29 | 电子科技大学 | A kind of intelligent vehicle map terrestrial reference localization method merging monocular vision and difference GNSS |
CN110807809A (en) * | 2019-10-25 | 2020-02-18 | 中山大学 | Light-weight monocular vision positioning method based on point-line characteristics and depth filter |
CN111145255A (en) * | 2019-12-27 | 2020-05-12 | 浙江省北大信息技术高等研究院 | Pose calculation method and system combining deep learning and geometric optimization |
CN111322993A (en) * | 2018-12-13 | 2020-06-23 | 杭州海康机器人技术有限公司 | Visual positioning method and device |
CN111462179A (en) * | 2020-03-26 | 2020-07-28 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method and device and electronic equipment |
CN111553300A (en) * | 2020-05-08 | 2020-08-18 | 北京工商大学 | Multi-time-domain resolution lip language behavior detection method for three-dimensional point cloud video |
CN111583331A (en) * | 2020-05-12 | 2020-08-25 | 北京轩宇空间科技有限公司 | Method and apparatus for simultaneous localization and mapping |
CN111721318A (en) * | 2020-05-26 | 2020-09-29 | 南京航空航天大学 | Template matching visual odometer based on self-adaptive search area |
CN111829522A (en) * | 2020-07-02 | 2020-10-27 | 浙江大华技术股份有限公司 | Instant positioning and map construction method, computer equipment and device |
CN111862146A (en) * | 2019-04-30 | 2020-10-30 | 北京初速度科技有限公司 | Target object positioning method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103680291A (en) * | 2012-09-09 | 2014-03-26 | 复旦大学 | Method for realizing simultaneous locating and mapping based on ceiling vision |
US20140212001A1 (en) * | 2013-01-27 | 2014-07-31 | Quantum Signal Llc | Visual odometry |
CN105809687A (en) * | 2016-03-08 | 2016-07-27 | 清华大学 | Monocular vision ranging method based on edge point information in image |
CN105869136A (en) * | 2015-01-22 | 2016-08-17 | 北京雷动云合智能技术有限公司 | Collaborative visual SLAM method based on multiple cameras |
CN107341814A (en) * | 2017-06-14 | 2017-11-10 | 宁波大学 | Monocular vision ranging method for quadrotor UAVs based on sparse direct method |
CN107610175A (en) * | 2017-08-04 | 2018-01-19 | 华南理工大学 | Monocular vision SLAM algorithm based on semi-direct method and sliding-window optimization |
CN107687850A (en) * | 2017-07-26 | 2018-02-13 | 哈尔滨工业大学深圳研究生院 | Vision- and inertial-measurement-unit-based pose estimation method for unmanned vehicles |
2018-05-25: application CN201810512342.XA (CN), granted as CN108986037B, status Active
Non-Patent Citations (3)
Title |
---|
CHRISTIAN FORSTER et al.: "SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems", IEEE Transactions on Robotics * |
YUAN Meng et al.: "Monocular Visual Odometry Fusing Point and Line Features", Laser & Optoelectronics Progress * |
CHEN Chang et al.: "Research Progress on Vision-Based Simultaneous Localization and Mapping", Application Research of Computers * |
Cited By (63)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111322993A (en) * | 2018-12-13 | 2020-06-23 | 杭州海康机器人技术有限公司 | Visual positioning method and device |
CN111322993B (en) * | 2018-12-13 | 2022-03-04 | 杭州海康机器人技术有限公司 | Visual positioning method and device |
CN109443320A (en) * | 2019-01-10 | 2019-03-08 | 轻客小觅智能科技(北京)有限公司 | Binocular visual odometer and measurement method based on direct method and line features |
CN109816726A (en) * | 2019-01-29 | 2019-05-28 | 京东方科技集团股份有限公司 | Visual odometry map updating method and system based on depth filter |
CN109816726B (en) * | 2019-01-29 | 2021-10-01 | 京东方科技集团股份有限公司 | Visual odometer map updating method and system based on depth filter |
CN109752008A (en) * | 2019-03-05 | 2019-05-14 | 长安大学 | Intelligent vehicle multi-mode cooperative positioning system and method, and intelligent vehicle |
CN109727269A (en) * | 2019-03-29 | 2019-05-07 | 中国人民解放军国防科技大学 | Monocular vision and road map based matching positioning method |
CN110189390A (en) * | 2019-04-09 | 2019-08-30 | 南京航空航天大学 | Monocular vision SLAM method and system |
CN111862146A (en) * | 2019-04-30 | 2020-10-30 | 北京初速度科技有限公司 | Target object positioning method and device |
CN111862146B (en) * | 2019-04-30 | 2023-08-29 | 北京魔门塔科技有限公司 | Target object positioning method and device |
CN110335316A (en) * | 2019-06-28 | 2019-10-15 | Oppo广东移动通信有限公司 | Method, apparatus, medium and electronic equipment are determined based on the pose of depth information |
CN110363821A (en) * | 2019-07-12 | 2019-10-22 | 顺丰科技有限公司 | Acquisition methods, device, camera and the storage medium at monocular camera installation deviation angle |
CN110473258A (en) * | 2019-07-24 | 2019-11-19 | 西北工业大学 | Monocular SLAM system initialization algorithm based on point-line unified framework |
CN110473258B (en) * | 2019-07-24 | 2022-05-13 | 西北工业大学 | Monocular SLAM system initialization algorithm based on point-line unified framework |
CN110514212A (en) * | 2019-07-26 | 2019-11-29 | 电子科技大学 | Intelligent vehicle map landmark localization method fusing monocular vision and differential GNSS |
CN110807809A (en) * | 2019-10-25 | 2020-02-18 | 中山大学 | Light-weight monocular vision positioning method based on point-line characteristics and depth filter |
CN110807809B (en) * | 2019-10-25 | 2021-04-09 | 中山大学 | Light-weight monocular vision positioning method based on point-line characteristics and depth filter |
CN112967311B (en) * | 2019-12-12 | 2024-06-07 | 浙江商汤科技开发有限公司 | Three-dimensional line graph construction method and device, electronic equipment and storage medium |
CN112967311A (en) * | 2019-12-12 | 2021-06-15 | 浙江商汤科技开发有限公司 | Three-dimensional line graph construction method and device, electronic equipment and storage medium |
CN111145255A (en) * | 2019-12-27 | 2020-05-12 | 浙江省北大信息技术高等研究院 | Pose calculation method and system combining deep learning and geometric optimization |
CN111145255B (en) * | 2019-12-27 | 2022-08-09 | 浙江省北大信息技术高等研究院 | Pose calculation method and system combining deep learning and geometric optimization |
CN113409368B (en) * | 2020-03-16 | 2023-11-03 | 北京京东乾石科技有限公司 | Mapping method and device, computer readable storage medium and electronic equipment |
CN113409368A (en) * | 2020-03-16 | 2021-09-17 | 北京京东乾石科技有限公司 | Drawing method and device, computer readable storage medium and electronic equipment |
CN111462179B (en) * | 2020-03-26 | 2023-06-27 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method and device and electronic equipment |
CN111462179A (en) * | 2020-03-26 | 2020-07-28 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method and device and electronic equipment |
CN111553300A (en) * | 2020-05-08 | 2020-08-18 | 北京工商大学 | Multi-time-domain resolution lip language behavior detection method for three-dimensional point cloud video |
CN111553300B (en) * | 2020-05-08 | 2022-03-11 | 北京工商大学 | Multi-time-domain resolution lip language behavior detection method for three-dimensional point cloud video |
CN111583331A (en) * | 2020-05-12 | 2020-08-25 | 北京轩宇空间科技有限公司 | Method and apparatus for simultaneous localization and mapping |
CN111583331B (en) * | 2020-05-12 | 2023-09-01 | 北京轩宇空间科技有限公司 | Method and device for simultaneous localization and mapping |
CN111721318A (en) * | 2020-05-26 | 2020-09-29 | 南京航空航天大学 | Template matching visual odometer based on self-adaptive search area |
CN111721318B (en) * | 2020-05-26 | 2022-03-25 | 南京航空航天大学 | Template matching visual odometer based on self-adaptive search area |
CN111829522B (en) * | 2020-07-02 | 2022-07-12 | 浙江大华技术股份有限公司 | Instant positioning and map construction method, computer equipment and device |
CN111829522A (en) * | 2020-07-02 | 2020-10-27 | 浙江大华技术股份有限公司 | Instant positioning and map construction method, computer equipment and device |
CN111950599A (en) * | 2020-07-20 | 2020-11-17 | 重庆邮电大学 | Dense visual odometer method for fusing edge information in dynamic environment |
CN111950599B (en) * | 2020-07-20 | 2022-07-01 | 重庆邮电大学 | Dense visual odometer method for fusing edge information in dynamic environment |
CN112115980A (en) * | 2020-08-25 | 2020-12-22 | 西北工业大学 | Binocular vision odometer design method based on optical flow tracking and point line feature matching |
CN112580683A (en) * | 2020-11-17 | 2021-03-30 | 中山大学 | Multi-sensor data time alignment system and method based on cross correlation |
CN112580683B (en) * | 2020-11-17 | 2024-01-12 | 中山大学 | Multi-sensor data time alignment system and method based on cross correlation |
CN112665575B (en) * | 2020-11-27 | 2023-12-29 | 重庆大学 | SLAM loop detection method based on mobile robot |
CN112665575A (en) * | 2020-11-27 | 2021-04-16 | 重庆大学 | SLAM loop detection method based on mobile robot |
CN112633122A (en) * | 2020-12-17 | 2021-04-09 | 厦门大学 | Front-end mileage calculation method and system of monocular VIO system |
CN112633122B (en) * | 2020-12-17 | 2024-01-23 | 厦门大学 | Front-end mileage calculation method and system of monocular VIO system |
JP7453470B2 (en) | 2021-01-11 | 2024-03-19 | 浙江商▲湯▼科技▲開▼▲発▼有限公司 | 3D reconstruction and related interactions, measurement methods and related devices and equipment |
WO2022147976A1 (en) * | 2021-01-11 | 2022-07-14 | 浙江商汤科技开发有限公司 | Three-dimensional reconstruction method, related interaction and measurement method, related apparatuses, and device |
CN112862803B (en) * | 2021-02-26 | 2023-09-26 | 中国人民解放军93114部队 | Infrared imaging SLAM method and device based on edge and feature point fusion |
CN112862803A (en) * | 2021-02-26 | 2021-05-28 | 中国人民解放军93114部队 | Infrared imaging SLAM method and device based on edge and feature point fusion |
CN113108771A (en) * | 2021-03-05 | 2021-07-13 | 华南理工大学 | Movement pose estimation method based on closed-loop direct sparse visual odometer |
CN113112542A (en) * | 2021-03-25 | 2021-07-13 | 北京达佳互联信息技术有限公司 | Visual positioning method and device, electronic equipment and storage medium |
CN113206949A (en) * | 2021-04-01 | 2021-08-03 | 广州大学 | Semi-direct monocular vision SLAM method based on entropy weighted image gradient |
CN113362377A (en) * | 2021-06-29 | 2021-09-07 | 东南大学 | VO weighted optimization method based on monocular camera |
CN113674340A (en) * | 2021-07-05 | 2021-11-19 | 北京物资学院 | Binocular vision navigation method and device based on landmark points |
WO2023280274A1 (en) * | 2021-07-07 | 2023-01-12 | The Hong Kong University Of Science And Technology | Geometric structure aided visual localization method and system |
CN113592947A (en) * | 2021-07-30 | 2021-11-02 | 北京理工大学 | Visual odometer implementation method of semi-direct method |
CN113592947B (en) * | 2021-07-30 | 2024-03-12 | 北京理工大学 | Method for realizing visual odometer by semi-direct method |
CN113689485A (en) * | 2021-08-25 | 2021-11-23 | 北京三快在线科技有限公司 | Method and device for determining depth information of unmanned aerial vehicle, unmanned aerial vehicle and storage medium |
CN113781567A (en) * | 2021-10-08 | 2021-12-10 | 西北工业大学 | Aerial image target geographic positioning method based on three-dimensional map generation |
CN113781567B (en) * | 2021-10-08 | 2024-05-31 | 西北工业大学 | Aerial image target geographic positioning method based on three-dimensional map generation |
CN114708392B (en) * | 2022-03-22 | 2024-05-14 | 重庆大学 | Octree map construction method based on closed-loop track |
CN114708392A (en) * | 2022-03-22 | 2022-07-05 | 重庆大学 | Closed-loop-track-based octree map construction method |
CN116625380B (en) * | 2023-07-26 | 2023-09-29 | 广东工业大学 | Path planning method and system based on machine learning and SLAM |
CN116625380A (en) * | 2023-07-26 | 2023-08-22 | 广东工业大学 | Path planning method and system based on machine learning and SLAM |
CN116681733A (en) * | 2023-08-03 | 2023-09-01 | 南京航空航天大学 | Near-distance real-time pose tracking method for space non-cooperative target |
CN116681733B (en) * | 2023-08-03 | 2023-11-07 | 南京航空航天大学 | Near-distance real-time pose tracking method for space non-cooperative target |
Also Published As
Publication number | Publication date |
---|---|
CN108986037B (en) | 2020-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108986037A (en) | Monocular vision odometer localization method and positioning system based on semi-direct method | |
CN111028277B (en) | SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network | |
CN102693542B (en) | Image characteristic matching method | |
CN102426019B (en) | Unmanned aerial vehicle scene matching auxiliary navigation method and system | |
CN109166149A (en) | Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN114526745B (en) | Drawing construction method and system for tightly coupled laser radar and inertial odometer | |
CN104268935A (en) | Feature-based airborne laser point cloud and image data fusion system and method | |
CN104574347A (en) | On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data | |
CN105352509A (en) | Unmanned aerial vehicle motion target tracking and positioning method under geographic information space-time constraint | |
CN110187375A (en) | Method and device for improving positioning accuracy based on SLAM positioning results | |
CN108917753B (en) | Aircraft position determination method based on structure from motion | |
CN111830953A (en) | Vehicle self-positioning method, device and system | |
CN105913435B (en) | Multiscale morphological image matching method and system for large regions | |
CN106373088A (en) | Quick mosaic method for aviation images with high tilt rate and low overlapping rate | |
CN112419374A (en) | Unmanned aerial vehicle positioning method based on image registration | |
CN107167826A (en) | Vehicle longitudinal alignment system and method based on variable-grid image feature detection in automatic driving | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN112270698A (en) | Non-rigid geometric registration method based on nearest curved surface | |
CN112805766A (en) | Apparatus and method for updating detailed map | |
CN116645392A (en) | Space target relative pose iterative estimation method and system based on key point weight | |
CN104567879A (en) | Method for extracting geocentric direction of combined view field navigation sensor | |
EP4010658B1 (en) | Method and device for generating a photogrammetric corridor map from a set of images | |
CN108921896A (en) | Downward-looking visual compass fusing point and line features | |
CN113744308B (en) | Pose optimization method, pose optimization device, electronic equipment, medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||