CN106909877A - Visual simultaneous localization and mapping method based on combined point-line features - Google Patents

Visual simultaneous localization and mapping method based on combined point-line features

Info

Publication number
CN106909877A
CN106909877A
Authority
CN
China
Prior art keywords
feature, image, straight line, characteristic, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611142482.XA
Other languages
Chinese (zh)
Other versions
CN106909877B (en)
Inventor
刘勇
左星星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201611142482.XA priority Critical patent/CN106909877B/en
Publication of CN106909877A publication Critical patent/CN106909877A/en
Application granted granted Critical
Publication of CN106909877B publication Critical patent/CN106909877B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/255 - Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a visual simultaneous localization and mapping (SLAM) method based on combined point-line features. The method jointly extracts and uses line features and point features from binocular camera images and can be applied to robot localization and pose estimation in indoor and outdoor environments; the combined use of point and line features makes the system more robust and more accurate. For the parameterization of line features, Plücker coordinates are used for line computations, including geometric transformations and 3D reconstruction, while the orthonormal representation of lines is used in the back-end optimization to minimize the number of line parameters. A visual dictionary of combined point-line features is built offline for loop-closure detection, and by adding flag bits, point and line features are distinguished in the visual dictionary, when building the image database, and when computing image similarity. The method can be used to build scene maps of indoor and outdoor environments; the constructed map contains feature points and feature lines and can provide richer information.

Description

Visual simultaneous localization and mapping method based on combined point-line features
Technical field
The present invention relates to the technical field of visual simultaneous localization and mapping, and in particular to feature-based binocular visual SLAM (Simultaneous Localization and Mapping).
Background technology
For visual simultaneous localization and mapping, keyframe-based optimization and graph optimization have become the mainstream framework for visual SLAM. Graph optimization has been shown to outperform traditional filtering frameworks in both computational resource consumption and consistency of results. Point features are the most widely used features in visual SLAM: they are abundant in both indoor and outdoor environments, easy to track across consecutive image frames, and convenient for geometric computation. However, point features depend strongly on imaging conditions, and high-quality point features require robust but time-consuming detection and description. Line features have a higher representational level in the image than point features and provide more robust information in structured environments; combining a small number of line features with point features allows the environment map to be built and the camera to be localized more efficiently and accurately.
Summary of the invention
The technical problem to be solved by the present invention is to provide a visual SLAM method based on combined point-line features that can be used for robot localization and pose estimation in indoor and outdoor environments; the combined use of point and line features makes the system more robust and more accurate. The method can be used to build scene maps of indoor and outdoor environments; the constructed map contains feature points and feature lines and can provide richer scene information. To this end, the present invention provides the following technical solution:
A visual simultaneous localization and mapping method based on combined point-line features, characterized by comprising two parts: building a visual dictionary offline, and building a sparse visual feature map online:
First, a tree-structured visual dictionary, i.e. a KD-tree over the descriptor space, is built offline using a clustering method, and the inverse document frequency of each node in the tree-structured visual dictionary is determined; each node is a cluster center of descriptors:
The features contained in each image frame are converted into visual words, i.e. feature descriptors. The visual words are clustered hierarchically to build a KD-tree over the descriptor space; this KD-tree is called the visual dictionary. The tree-structured visual dictionary is built offline from feature descriptors extracted from a set of training images. The descriptors are ORB (Oriented FAST and Rotated BRIEF, where BRIEF stands for Binary Robust Independent Elementary Features) point feature descriptors and LBD (Line Band Descriptor) line feature descriptors. Both ORB point descriptors and LBD line descriptors are binary descriptors; each is extended with a flag bit: flag bit 0 is appended to ORB point features and flag bit 1 to LBD line features, so that line features and point features can be distinguished. Before computing LBD line descriptors, lines are first detected with LSD (Line Segment Detector), and the detected lines are then described with the LBD descriptor;
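The flag-bit scheme above can be sketched as follows. This is a minimal illustration, not the patent's code: it appends one flag byte to a 256-bit (32-byte) binary descriptor so point and line features can share one dictionary yet remain distinguishable; all names and data are illustrative.

```python
import numpy as np

POINT_FLAG = 0  # appended to ORB point descriptors
LINE_FLAG = 1   # appended to LBD line descriptors

def tag_descriptor(desc256: np.ndarray, is_line: bool) -> np.ndarray:
    """Append a flag to a 256-bit (32-byte) binary descriptor."""
    assert desc256.dtype == np.uint8 and desc256.size == 32
    flag = np.uint8(LINE_FLAG if is_line else POINT_FLAG)
    return np.concatenate([desc256, np.array([flag], dtype=np.uint8)])

def is_line_feature(tagged: np.ndarray) -> bool:
    return tagged[-1] == LINE_FLAG

# stand-in descriptors (random bytes in place of real ORB/LBD output)
orb = np.random.randint(0, 256, 32, dtype=np.uint8)
lbd = np.random.randint(0, 256, 32, dtype=np.uint8)
assert not is_line_feature(tag_descriptor(orb, is_line=False))
assert is_line_feature(tag_descriptor(lbd, is_line=True))
```

In a real system the flag would be a single bit rather than a byte; the point is only that dictionary lookups can filter by feature type.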
The weight of each node in the visual dictionary is determined by the inverse document frequency (IDF) of the feature descriptors the node contains; the idea is that if fewer images contain the visual word t, its IDF is larger, meaning the word t has good discriminative power between image classes;
Then, the sparse visual feature map is built online, with the following steps:
Step 1: obtain rectified images from the binocular camera and perform feature extraction and description on the rectified images:
Extract the point-line features and their descriptors from the rectified images, i.e. compute ORB point descriptors and LBD line descriptors online;
Step 2: perform feature matching and 3D reconstruction on the rectified binocular images:
Match the feature points and feature lines in the rectified images to establish matching pairs, and reconstruct the feature points and feature lines in 3D using the binocular imaging model. In the reconstruction, feature lines are represented with Plücker coordinates and the line endpoints are maintained; the sparse feature map of combined point-line features is built from the feature points and feature lines, with Plücker coordinates used for the representation and computation of lines.
Step 3: frame-to-frame matching, local-map matching, and camera motion estimation:
After the feature points and feature lines have been reconstructed in 3D, these points and lines are tracked and matched. Matching has two parts: frame-to-frame image matching and local-map matching; frame-to-frame matching is used to estimate the pose of the camera at the current time.
The pose is solved as follows. Assume the rotation and translation of the current left camera frame O_c in the world frame O_w are R_wc and t_wc, and that the reconstructed feature point j has coordinates P_jw in the world frame O_w; then the coordinates P_jc of this feature point in the current left camera frame O_c are:
P_jc = R_cw P_jw + t_cw
The reconstructed feature line i has coordinates L_iw = [n^T, v^T]^T in the world frame O_w; its coordinates in the current left camera frame O_c are:
L_ic = [n_c^T, v_c^T]^T, with n_c = R_cw n + [t_cw]_× R_cw v and v_c = R_cw v
where R_cw = R_wc^T and t_cw = -R_wc^T t_wc are the rotation and translation of the world frame with respect to the left camera frame, and [t_cw]_× is the 3 × 3 antisymmetric (cross-product) matrix formed from the vector t_cw. The feature point P_jc is projected through the pinhole camera model into the current left camera, giving its projected image coordinates p̂_j; the feature line L_ic is projected into the current left camera, giving its projected line equation l_i. Errors are then defined for the point and line features: the point error is the reprojection error, i.e. the distance e_pj between the projected coordinates p̂_j and the observed coordinates p_j; the line error e_li is the geometric distance from the two observed segment endpoints ep1_i, ep2_i to the projected line equation. The goal of motion estimation is to solve the following nonlinear least-squares problem:
{R_cw, t_cw} = argmin Σ_j a e_pj² + Σ_i b e_li²
i.e. to solve for the current camera pose that minimizes the reprojection errors of the feature points and feature lines. Here a and b are the weights of the point-feature and line-feature reprojection errors, two constants. To reject the influence of incorrect image feature matches, the RANSAC (Random Sample Consensus) method can be used in the optimization to obtain the motion estimate;
Step 4: perform loop-closure detection using the visual dictionary obtained in step 1:
Extract point and line feature descriptors from visual keyframes and build an image database containing the point and line descriptors of each keyframe. According to the visual dictionary, the features of an image are converted into a bag-of-words vector, which contains the TF-IDF score of each visual word in the image (TF is the frequency with which the word occurs within one frame, IDF is the inverse document frequency described above, and TF-IDF is the product of TF and IDF). The more frequently a visual word occurs within the same frame, the higher its TF-IDF score; the more frequently it occurs across the whole image database, the lower its TF-IDF score.
To evaluate the similarity of two images, each image is converted into a bag-of-words vector from its extracted features, and a similarity score is computed from the two vectors. The image currently captured by the camera is compared with the images in the image database; an image with a high enough score was captured at the same location, i.e. a loop closure, meaning this location was visited before. Geometric consistency (the two frames must have enough matches supporting a Euclidean transformation) and temporal consistency (the image sequences before and after the two frames should also be similar) are then used to further confirm whether it is a true loop closure;
Step 5: put the point-line features, camera motion estimates, and loop-closure detections obtained in steps 2 to 4 into a keyframe-based graph optimization framework; in the back-end optimization, the orthonormal representation of lines is used to minimize the number of line parameters. The graph optimization framework optimizes the camera poses and the positions of the feature points and lines, achieving camera localization and the online construction of the sparse visual feature map.
On the basis of the above technical solution, the present invention can further adopt the following technical solutions:
When extracting point-line features from an image, feature points are detected with the FAST corner detector and described with ORB descriptors; line features are extracted with the LSD algorithm and represented with LBD descriptors.
For the representation of line features, Plücker coordinates are used for line computations, including geometric transformations and 3D reconstruction, and the orthonormal representation of lines is used in the back-end optimization to minimize the number of line parameters.
The offline visual dictionary of combined point-line features is built with the kmeans++ clustering method and is used online to recognize and query similar images for loop-closure detection. By adding flag bits while building the dictionary, point and line features are treated separately in the visual dictionary and when building the image database. When evaluating the similarity of two images, each image is converted into a bag-of-words vector from its extracted features, containing the TF-IDF score of each visual word in the image; the more frequently a word occurs within the same frame, the higher this score, and the more frequently it occurs across the whole dataset, the lower this score;
The bag-of-words vector contains point-feature entries v_i^p and line-feature entries v_i^l. The similarity of two bag-of-words vectors v_1, v_2 is defined as:
s(v_1, v_2) = a · s(v_1^p, v_2^p) + b · s(v_1^l, v_2^l)
where a and b are the weights of the point-feature score and the line-feature score, two constants satisfying a + b = 1.
Thanks to the above technical solution, the beneficial effects of the present invention are: the visual dictionary should be trained with large and diverse datasets to achieve a good clustering, and once built it can be reused; the invention uses as few features as possible to estimate the camera pose at the current time, while local-map matching involves more features and can yield a more accurate solution.
Brief description of the drawings
Fig. 1 is the visual dictionary model of combined point-line features built by the present invention with a clustering method;
Fig. 2 is a schematic diagram of the Plücker-coordinate representation of a feature line in the present invention;
Fig. 3 illustrates the selection of endpoints of an infinite line in space in the present invention;
Fig. 4 is the reprojection error model of a feature line in the present invention;
Fig. 5 is the graph model built by the present invention from the point-line features, camera motion estimates, and loop-closure detections obtained by the front end.
Detailed description of the embodiments
For a better understanding of the technical solution of the present invention, it is further described below with reference to the accompanying drawings.
Building the visual dictionary offline with a clustering method and determining the inverse document frequency (IDF) of each node:
To judge whether the camera has revisited the same area, the features contained in each frame are converted into visual words. These visual words correspond to a discretization of the descriptor space, called the visual dictionary. As shown in Fig. 1, a tree-structured dictionary is built offline from a large number of feature descriptors extracted from a large set of training images; building the tree is an iterative clustering process using the kmeans++ algorithm. The descriptors here are ORB point descriptors and LBD line descriptors. Since both are 256-bit binary descriptors, they can be placed in the same visual dictionary, which simplifies both the construction of the dictionary and the operations carried out during loop-closure detection. Images usually contain many point features and few line features, so point and line features must be treated separately in the visual dictionary. The two kinds of 256-bit binary descriptors are therefore extended: flag bit 0 is appended to ORB point features and flag bit 1 to LBD line features. The flag bit distinguishes line features from point features, and the two are also distinguished when building the image database and comparing image similarity online. Fig. 1 shows the visual dictionary model of combined point-line features built with the clustering method. The visual dictionary should be trained with large and diverse datasets to achieve a good clustering, and once built it can be reused. The weight of each node in the visual dictionary is determined by the inverse document frequency (IDF) of the feature descriptors the node contains:
IDF = log(N / n_i)
where N is the number of all images in the dataset and n_i is the number of images containing the feature represented by the node.
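The IDF weight above can be sketched in a few lines; this is a minimal illustration with made-up counts, showing that a visual word seen in few training images gets a larger weight than a common one.

```python
import math

def idf(num_images_total: int, num_images_with_word: int) -> float:
    """IDF = log(N / n_i) from the formula above."""
    return math.log(num_images_total / num_images_with_word)

# A word present in 10 of 1000 training images is weighted more heavily
# than a word present in 900 of them.
rare = idf(1000, 10)     # log(100)
common = idf(1000, 900)  # log(1000/900), close to zero
assert rare > common
```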
Main steps of the online visual SLAM with combined point-line features:
Step 1: obtain rectified images from the binocular camera and perform feature extraction and description
Extract the point and line features and their descriptors from the binocular camera images. Feature points are detected with the FAST corner detector and described with ORB descriptors; their computation and matching are both fast, and they are invariant to rotation of the viewpoint. Line features are extracted with the LSD (Line Segment Detector) algorithm and represented with LBD (Line Band Descriptor) descriptors. ORB and LBD descriptors are both 256-bit binary descriptors with identical storage layout, which makes it convenient to build the offline dictionary of combined point-line features and to query the image database. This step is identical to the feature extraction and description part of the offline dictionary construction.
Step 2: stereo feature matching and 3D reconstruction
For left-right image matching, the feature points and the midpoints of the feature lines in the right image are projected onto the left image. Since the images are rectified, it suffices to search a rectangular window in the left image for the feature with the minimum Hamming distance to the right-image feature; that feature is the match. The matches are then sorted by Hamming distance and a threshold is selected adaptively to reject matching pairs with large distances, ensuring matching accuracy.
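The rectified-stereo matching rule above can be sketched as follows. This is an illustrative toy, not the patent's implementation: for each right-image feature it searches a rectangular window on (nearly) the same row in the left image and keeps the left feature with the minimum Hamming distance; window sizes, thresholds, and data are assumptions.

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary descriptors stored as uint8 bytes."""
    return int(np.unpackbits(a ^ b).sum())

def match_stereo(left_feats, right_feats, max_dx=64, max_dy=2, max_dist=60):
    """left_feats/right_feats: lists of (u, v, desc) with 32-byte descriptors."""
    matches = []
    for ri, (ur, vr, dr) in enumerate(right_feats):
        best, best_d = None, max_dist + 1
        for li, (ul, vl, dl) in enumerate(left_feats):
            # rectified pair: same row up to max_dy, and the left-image match
            # lies to the right of the right-image pixel (positive disparity)
            if abs(vl - vr) > max_dy or not (0 <= ul - ur <= max_dx):
                continue
            d = hamming(dl, dr)
            if d < best_d:
                best, best_d = li, d
        if best is not None:
            matches.append((best, ri, best_d))
    return matches

rng = np.random.default_rng(0)
desc = rng.integers(0, 256, 32, dtype=np.uint8)
left = [(120, 50, desc)]   # same descriptor, shifted by a disparity of 20 px
right = [(100, 50, desc)]
assert match_stereo(left, right) == [(0, 0, 0)]
```

A real pipeline would additionally apply the adaptive distance threshold described above after sorting all matches.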
3D reconstruction of feature points:
For the rectified images, assume the matched points in the left and right images are m = [u_1, v]^T and m' = [u_2, v]^T respectively, and that the 3D point M determined by m and m' has coordinates [X, Y, Z]^T in the left camera frame; then:
Z = f B / d, X = (u_1 - u_c) Z / f, Y = (v - v_c) Z / f
where B, f, u_c, and v_c are parameters of the rectified binocular stereo vision system: B is the baseline of the binocular camera, f is the camera focal length, [u_c, v_c]^T is the pixel coordinate of the intersection of the optical axis with the image plane, and d = u_1 - u_2 is the disparity of the matched points, which encodes the depth of the 3D point.
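The triangulation above is straightforward to sketch; calibration numbers below are illustrative, not from the patent.

```python
def triangulate(u1, u2, v, f, B, uc, vc):
    """Depth from disparity for a rectified stereo pair: Z = f*B/d."""
    d = u1 - u2                 # disparity; larger d means a closer point
    Z = f * B / d
    X = (u1 - uc) * Z / f       # back-project the left-image pixel
    Y = (v - vc) * Z / f
    return X, Y, Z

f, B, uc, vc = 700.0, 0.12, 320.0, 240.0      # assumed calibration
X, Y, Z = triangulate(u1=390.0, u2=320.0, v=275.0, f=f, B=B, uc=uc, vc=vc)
assert abs(Z - f * B / 70.0) < 1e-12          # disparity d = 70 px
assert abs(X - (390.0 - uc) * Z / f) < 1e-12
```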
3D reconstruction of feature lines:
Representing a line by its two 3D endpoints is clearly inappropriate, because viewpoint changes and occlusions make extracting and tracking line endpoints from images very difficult. It is therefore most suitable to represent a 3D line in space as an infinite line. As shown in Fig. 2, Plücker coordinates are used for line computations, including geometric transformations and 3D reconstruction, while the orthonormal representation of lines is used for the back-end optimization.
When reconstructing a line in 3D, geometric transformations and computations must be efficient, so the line is represented with Plücker coordinates L = [n^T, v^T]^T, as in Fig. 2, where n is the normal vector of the plane p spanned by the line and the camera origin O_c, and v is the direction vector of the line L. Plücker coordinates satisfy the constraint that n is perpendicular to v, i.e. n · v = 0. The projection of the space line L onto the image plane is the line l, and the projections of the line endpoints A, B are the points a, b. In the camera frame O_c, the homogeneous image points satisfy c = KC and d = KD, the normal is n = C × D, and the projected line is l = c × d, where c, d, l are homogeneous coordinates, × is the cross product, and K is the camera intrinsic matrix.
It can be derived that the line l in the image plane satisfies l = det(K) K^-T n. Assume the plane spanned by the left camera center and the space line L is p_l, and the plane spanned by the right camera center and the space line L is p_r; then the intersection of the two planes is the space line. The homogeneous coordinates of plane p_l are expressed as:
p_l = P_l^T l_l
where l_l is the image of the space line in the left camera image plane, and P_l is the projection matrix of the left camera,
P_l = K_l [I | 0]
where K_l is the intrinsic matrix of the left camera, I is the 3 × 3 identity matrix, and 0 is the 3 × 1 zero vector. Similarly, using the camera extrinsics, the homogeneous coordinates p_r of the plane spanned by the right camera center and the space line L can be obtained. The intersection of the two planes is the space line L, whose dual Plücker matrix is expressed as
L* = p_l p_r^T - p_r p_l^T
The relation between the dual Plücker matrix and the Plücker coordinates is (up to scale):
L* = [ [v]_×  n ; -n^T  0 ]
so the Plücker coordinates can be read off from the dual Plücker matrix obtained above.
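The Plücker representation used above can be sketched directly from two 3D points on the line; this is a minimal illustration with made-up points, verifying the constraint n · v = 0 and the fact that the moment n is the same for any point on the line.

```python
import numpy as np

def plucker_from_points(C: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Plücker coordinates L = [n^T, v^T]^T of the line through C and D."""
    n = np.cross(C, D)                 # moment: normal of the plane through the origin
    v = D - C                          # direction vector
    return np.concatenate([n, v])

C = np.array([1.0, 0.0, 2.0])          # illustrative 3D points (camera frame)
D = np.array([1.0, 1.0, 3.0])
L = plucker_from_points(C, D)
n, v = L[:3], L[3:]
assert abs(np.dot(n, v)) < 1e-12       # Plücker constraint: n perpendicular to v
# the moment is the same for any point P on the line: n = P x v
assert np.allclose(np.cross(C, v), n)
assert np.allclose(np.cross(D, v), n)
```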
The above completes the 3D reconstruction of feature lines. In addition, since a scene map is to be built and the space line L is infinite, the line must be clipped for display, i.e. two endpoints C, D of the line are maintained. The endpoints C, D of the space line L can be obtained by geometric transformation of the endpoints of its image l_l in the left camera image plane according to a fixed rule; Fig. 3 is a schematic of this endpoint selection. In the figure, e is a point on the line l_c perpendicular to l in the left camera image plane, and the distance between e and c can be set to an arbitrary value. The plane p' is determined by the line ec and the camera center O_c; clipping the space line L with the plane p' yields the endpoint C, and the endpoint D is obtained similarly. During camera motion, the endpoints c, d of the image of the same space line L in the left image plane are not fixed, so the clipped points C, D also differ over time; only the pair of points C, D with the maximum distance is kept as the maintained endpoints of the line in space.
Step 3: frame-to-frame feature matching and camera motion estimation. After stereo matching and 3D reconstruction, the 3D coordinates P_jw of feature point j and the Plücker coordinates L_iw of feature line i in the world frame have been obtained; frame-to-frame matching yields the projection p̂_j of feature point j in the current left image and the projection l_i of the feature line in the current left image. Assume the rotation and translation of the current left camera frame O_c in the world frame O_w are R_wc and t_wc; then the feature point has coordinates P_jc = R_cw P_jw + t_cw in the current left camera frame O_c, and the feature line i has coordinates in the current left camera frame O_c:
L_ic = [n_c^T, v_c^T]^T, with n_c = R_cw n + [t_cw]_× R_cw v and v_c = R_cw v
where R_cw = R_wc^T and t_cw = -R_wc^T t_wc are the rotation and translation of the world frame with respect to the left camera frame, and [t_cw]_× is the 3 × 3 antisymmetric matrix formed from the vector t_cw. Projecting the feature point P_jc through the pinhole camera model into the current left camera gives its projected image coordinates p̂_j; projecting the feature line L_ic into the current left camera image gives its projected line equation l_i. Errors are defined for the point and line features. The point error is the reprojection error, i.e. the distance between the projected coordinates p̂_j and the observed coordinates p_j:
e_pj = ‖p̂_j - p_j‖
The line error is the geometric distance e_li from the two observed segment endpoints ep1_i, ep2_i to the projected line equation:
e_li = (|l_c^T ep1_i| + |l_c^T ep2_i|) / sqrt(l_c1² + l_c2²)
where ep1_i = [ep1_i1, ep1_i2, 1]^T and ep2_i are the homogeneous coordinates of the endpoints ep1_i and ep2_i, and l_c = [l_c1, l_c2, l_c3]^T is the coefficient vector of the line equation l_c. The goal of motion estimation is to solve the following nonlinear least-squares problem:
{R_cw, t_cw} = argmin Σ_j a e_pj² + Σ_i b e_li²
i.e. to solve for the current camera pose that minimizes the reprojection errors of the feature points and feature lines, where a and b are the weights of the point-feature and line-feature reprojection errors, two constants that can be set empirically. To reject the influence of incorrect image feature matches, the RANSAC method can be used in this step to obtain a better motion estimate.
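The two error terms of the objective above can be sketched as follows; this is a minimal illustration of the residual definitions only (the nonlinear solve and RANSAC loop are omitted), and all values and weights are illustrative.

```python
import numpy as np

def point_error(p_hat, p):
    """Point reprojection error: distance between projection and observation."""
    return float(np.linalg.norm(np.asarray(p_hat, float) - np.asarray(p, float)))

def point_line_dist(pt, l):
    """Distance of image point pt=(x, y) to the line l1*x + l2*y + l3 = 0."""
    l = np.asarray(l, dtype=float)
    return abs(l @ np.array([pt[0], pt[1], 1.0])) / np.hypot(l[0], l[1])

def line_error(ep1, ep2, l):
    """Sum of endpoint distances to the projected infinite line."""
    return point_line_dist(ep1, l) + point_line_dist(ep2, l)

def cost(point_residuals, line_residuals, a=0.6, b=0.4):
    """Weighted least-squares objective: a*sum(e_p^2) + b*sum(e_l^2)."""
    return a * sum(e**2 for e in point_residuals) + b * sum(e**2 for e in line_residuals)

e_p = point_error([101.0, 200.0], [104.0, 196.0])          # 3-4-5 triangle -> 5.0
e_l = line_error([0.0, 1.0], [2.0, 1.0], [0.0, 1.0, 0.0])  # both endpoints 1 px off y=0
assert abs(e_p - 5.0) < 1e-12 and abs(e_l - 2.0) < 1e-12
```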
Step 4: loop-closure detection using the offline-trained visual dictionary
Point-line feature descriptors are extracted from visual keyframes to build the image database. The distance between each keyframe descriptor and the cluster centers at the nodes of the visual dictionary is computed; one level of the dictionary tree is chosen as the comparison level (usually levels 4 to 6), and each extracted descriptor is assigned to its nearest node at that level. According to this assignment, an image is converted into a bag-of-words vector whose dimension is the number of nodes at the comparison level; the vector contains the TF-IDF score of each visual word in the image, with point-feature entries v_i^p and line-feature entries v_i^l. The more frequently a word occurs within the same frame, the higher this score, and the more frequently it occurs across the whole dataset, the lower this score. TF-IDF is:
TF-IDF = IDF × (n_{i,t} / n_t)
where n_{i,t} is the number of occurrences of the visual word in image I_t, n_t is the number of all visual words in image I_t, and IDF is the inverse document frequency of the visual word in the offline visual dictionary.
Next newly-generated word bag vector can be compared with the word bag vector in image data base, carry out similitude and sentence It is disconnected.Two word bag vector vs1, v2Similarity definition be:
Wherein a, b are the weighted value of point feature score and line feature score, are two constants, and meet a+b=1, can root Set according to experience.Only winding detection is carried out according to similitude flase drop occurs, it is necessary to aid in other information.In database Image is close in time to typically result in similar fraction.Using this characteristic, image close in sequential is grouped, and with group Be unit comparison score, the fraction of image sets be exactly fraction in group per two field picture and.Fraction per two field picture necessarily be greater than certain Individual threshold value can be just added on the fraction of image sets.Once search whole image database, that is organized just to be grouped fraction highest It is selected, and that image of wherein single-frame images highest scoring is regarded as closed image undetermined.Finally recycle several What checking (all characteristic points in movement images), time consistency (also can by the image in the surrounding time section of closed image pair Have similitude) etc. strategy obtain the image pair of closed loop.
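The weighted similarity above can be sketched as follows. This is an illustrative assumption, not the patent's exact formula: each part of the bag-of-words vector is scored with the standard L1 score s(x, y) = 1 - 0.5·|x/|x| - y/|y|| commonly used with TF-IDF vocabularies, and the point and line parts are combined with weights a + b = 1; vectors and weights are made up.

```python
import numpy as np

def l1_score(x: np.ndarray, y: np.ndarray) -> float:
    """Standard bag-of-words L1 similarity; 1.0 for identical distributions."""
    x = x / np.abs(x).sum()
    y = y / np.abs(y).sum()
    return 1.0 - 0.5 * np.abs(x - y).sum()

def similarity(v1_pt, v1_ln, v2_pt, v2_ln, a=0.7, b=0.3):
    """Weighted point/line similarity with a + b = 1."""
    assert abs(a + b - 1.0) < 1e-12
    return a * l1_score(v1_pt, v2_pt) + b * l1_score(v1_ln, v2_ln)

pt = np.array([0.2, 0.5, 0.3])   # point-word TF-IDF entries (illustrative)
ln = np.array([0.6, 0.4])        # line-word TF-IDF entries (illustrative)
assert abs(similarity(pt, ln, pt, ln) - 1.0) < 1e-12   # identical images
assert similarity(pt, ln, pt[::-1].copy(), ln) < 1.0   # differing point words
```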
Step 5: the point-line features, camera motion estimates, and loop-closure detections obtained by the front end are put into the keyframe-based graph optimization framework, and the objective to be optimized, i.e. the error models of the point features, line features, and loop-closure constraints, is modeled. This is a nonlinear optimization problem; a graph model can be built and, exploiting the sparsity of the problem, open-source graph optimization tools such as g2o (General Graph Optimization), GTSAM (Georgia Tech Smoothing and Mapping), or Ceres Solver can be called to iterate the optimization, finally yielding the optimized camera poses and the points and lines in space.
Error model of point features:
Assuming that the left camera coordinates system O of current time icIn world coordinate system OwMiddle rotation and translation is respectively RwcAnd twcIf,The characteristic point j of reconstruct is in world coordinate system OwCoordinate be Pwj, then this feature point is in current time left camera Coordinate system OcUnder coordinate be:
P_ij = R_cw · P_wj + t_cw
P_ij = [x_ij, y_ij, z_ij]^T
P_ij is projected into the left camera image by the camera projection model, giving image coordinates p̂_ij = π(P_ij), where π is the projection equation:

p̂_ij = [ f_x · x_ij / z_ij + u_c , f_y · y_ij / z_ij + v_c ]^T

wherein f_x, f_y are the focal lengths along the horizontal and vertical image axes and (u_c, v_c) is the principal point; these are the camera intrinsics.

The re-projection error of a point is defined as the distance e_ij between the projected feature-point coordinates p̂_ij and the observed coordinates p_ij.
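A minimal sketch of the point re-projection error under the pinhole model above (the intrinsic and coordinate values are illustrative):

```python
import numpy as np

# Pinhole projection of a reconstructed point into the current left image and
# the point re-projection error e_ij = || p_hat - p_obs ||.
# fx, fy, uc, vc are the intrinsics named in the text; values are illustrative.

def project(P_c, fx, fy, uc, vc):
    x, y, z = P_c
    return np.array([fx * x / z + uc, fy * y / z + vc])

def reprojection_error(R_cw, t_cw, P_w, p_obs, fx, fy, uc, vc):
    P_c = R_cw @ P_w + t_cw          # world -> current left camera frame
    return np.linalg.norm(project(P_c, fx, fy, uc, vc) - p_obs)

R = np.eye(3)
t = np.zeros(3)
e = reprojection_error(R, t, np.array([1.0, 0.5, 2.0]),
                       np.array([600.0, 400.0]), 500, 500, 320, 240)
```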
The error model of the line features:
The reconstructed feature line k has Plücker coordinates L_wk = [n^T, v^T]^T in the world coordinate system O_w; the coordinates of this feature line in the current left camera coordinate system O_c are then:

L_ik = [ R_cw , [t_cw]_× R_cw ; 0 , R_cw ] · L_wk
As shown in Fig. 4, the feature line L_ik is projected into the current left camera image, giving its projected line equation l̂_ik.

The projection of line L in the left camera image plane is l̂_ik, and the observed line segment is l_ik. The distances d_l1, d_l2 from the endpoints a, b of the observed segment l_ik to the projected line l̂_ik are taken as the error function:

e_l = [ d_l1 , d_l2 ]^T = [ a^T · l̂_ik , b^T · l̂_ik ]^T / sqrt( l̂_1² + l̂_2² )

wherein a = [a_1, a_2, 1]^T are the homogeneous coordinates of endpoint a, b = [b_1, b_2, 1]^T are the homogeneous coordinates of endpoint b, and l̂_ik = [l̂_1, l̂_2, l̂_3]^T is the vector formed by the coefficients of the line equation l̂_ik.
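The endpoint-to-line error above can be sketched as follows, with the projected line given by its coefficient vector (l_1, l_2, l_3) and illustrative endpoint coordinates:

```python
import numpy as np

# Line re-projection error: signed distances of the observed segment endpoints
# a, b (homogeneous image coordinates) to the projected line l = (l1, l2, l3),
# d(x, l) = (x . l) / sqrt(l1^2 + l2^2).

def line_errors(l, a, b):
    norm = np.hypot(l[0], l[1])
    return np.array([a @ l, b @ l]) / norm

l = np.array([0.0, 1.0, -100.0])          # horizontal line v = 100
a = np.array([10.0, 103.0, 1.0])          # endpoint 3 px on one side
b = np.array([50.0, 96.0, 1.0])           # endpoint 4 px on the other side
d1, d2 = line_errors(l, a, b)
```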
In the back-end optimization, in order to parameterize a line with the minimal number of parameters and prevent over-parameterization, the orthonormal representation (U, W) ∈ SO(3) × SO(2) is used to parameterize the line, where SO(3) is the group of three-dimensional orthogonal rotation matrices and SO(2) the group of two-dimensional orthogonal rotation matrices, with 3 and 1 degrees of freedom respectively.
Let the QR decomposition of [n | v] give

[n | v] = U · [ w_1 , 0 ; 0 , w_2 ; 0 , 0 ],   W = (1 / sqrt(w_1² + w_2²)) · [ w_1 , -w_2 ; w_2 , w_1 ] ∈ SO(2).

Four minimal parameters δ = [θ^T, θ_w]^T are used here, where θ is a 3 × 1 vector and θ_w is a scalar. U and W ∈ SO(3) × SO(2) can be updated through U ← U · R(θ), W ← W · R(θ_w), where R(·) denotes the corresponding rotation matrix.
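A sketch of the orthonormal representation and its four-parameter update; the QR-based initialization and the Rodrigues-formula update are the standard construction and stand in for details whose figures are not reproduced in this text:

```python
import numpy as np

# Orthonormal representation (U, W) of a Plucker line L = [n; v]:
# U in SO(3) from the QR decomposition of [n | v], W in SO(2) built from the
# two nonzero entries of the triangular factor. The minimal update uses four
# parameters: theta (3x1) for U and the scalar theta_w for W.

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def rodrigues(theta):
    t = np.linalg.norm(theta)
    if t < 1e-12:
        return np.eye(3)
    k = skew(theta / t)
    return np.eye(3) + np.sin(t) * k + (1 - np.cos(t)) * (k @ k)

def to_orthonormal(n, v):
    U, R = np.linalg.qr(np.column_stack([n, v]), mode='complete')
    s = np.hypot(R[0, 0], R[1, 1])
    W = np.array([[R[0, 0], -R[1, 1]], [R[1, 1], R[0, 0]]]) / s
    return U, W

def update(U, W, theta, theta_w):
    c, s = np.cos(theta_w), np.sin(theta_w)
    return U @ rodrigues(theta), W @ np.array([[c, -s], [s, c]])

n = np.array([0.0, 0.0, 6.0])   # moment vector, n . v = 0
v = np.array([0.0, 2.0, 0.0])   # direction vector
U, W = to_orthonormal(n, v)
U2, W2 = update(U, W, np.array([0.1, -0.2, 0.3]), 0.05)
```

Both factors stay on their manifolds after the update, which is the point of the representation: no constraint needs to be enforced during optimization.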
The error model of the loop-closure constraints:
Assume that, starting from the pose x_i of the camera at a certain moment, the loop-closure detection method finds that position i and a previously visited position i' are the same place, i.e. a loop-closure pair x_i and x_i' is found, generating the loop-closure constraint C_l. The error of the loop-closure constraint is then e_c = x_i - g(x_i', C_l), wherein the function g computes the pose of one member of the loop-closure pair from the pose of the other member and the matched loop-closure constraint.
The feature points and feature lines are used as landmarks l; the camera poses x and the landmarks l are the nodes of the graph model, while the loop-closure detections C_l and the observations Z of the binocular camera are the edges, as shown in Fig. 5. The problem graph optimization solves is to continually optimize the variables l, x with u, z, c known; u, z, c are therefore taken as the observations Z, and the variables l, x as the state X. The problem the graph optimization model solves is to maximize the joint probability and obtain l*, x*.
Since the observation error of observation Z between states X_i, X_j is assumed to be e_0(X_i, X_j), i.e. the four kinds of errors described above, and all errors are assumed to follow a Gaussian distribution with covariance Ω_0^{-1}, then

P(Z | X) ∝ ∏_{i,j} exp( -(1/2) · e_0(X_i, X_j)^T · Ω_0 · e_0(X_i, X_j) ).

Taking the negative logarithm of the above, the objective function of the graph optimization model becomes:

X* = argmin_X Σ_{i,j} e_0(X_i, X_j)^T · Ω_0 · e_0(X_i, X_j).
This is a nonlinear optimization problem; within the graph optimization framework it can be solved by methods such as Gauss-Newton, LM (Levenberg-Marquardt), or Dogleg (the method proposed by Powell).
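To illustrate how such stacked error terms are minimized, a generic Gauss-Newton iteration on a toy one-parameter problem (not the patent's actual graph) might look like:

```python
import numpy as np

# Generic Gauss-Newton iteration for min_x sum_i ||e_i(x)||^2, the scheme
# used (alongside LM and Dogleg) inside graph optimizers such as g2o.
# Toy residual: fit x to noisy scalar observations z_i, e_i = x - z_i.

def gauss_newton(residual_jacobian, x0, iters=10):
    x = x0
    for _ in range(iters):
        e, J = residual_jacobian(x)              # stacked residuals, Jacobian
        dx = np.linalg.solve(J.T @ J, -J.T @ e)  # normal equations
        x = x + dx
    return x

z = np.array([1.0, 2.0, 3.0])

def rj(x):
    return x - z, np.ones((3, 1))

x_star = gauss_newton(rj, np.zeros(1))  # converges to the mean of z
```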

Claims (4)

1. A visual simultaneous mapping and localization method based on integrated point-line features, characterized by comprising two parts: building a visual dictionary offline and building a sparse visual feature map online:
First, the tree-structured visual dictionary, i.e. a KD tree of the descriptor space, is built offline using a clustering method, and the inverse document frequency of each node in the tree-structured visual dictionary is determined, each node being a cluster center of descriptors:

The features contained in each frame are converted into visual words, i.e. feature descriptors; the visual words are hierarchically clustered to build a KD tree of the descriptor space, this KD tree being called the visual dictionary; the tree-structured visual dictionary is built offline from feature descriptors extracted from a training image set; the descriptors are ORB (oriented FAST corner detection and binary robust independent elementary features) point-feature descriptors and LBD line-feature descriptors; both the ORB point descriptor and the LBD line descriptor are binary descriptors, and the two kinds of binary descriptors are each extended: flag bit 0 is added for ORB point features and flag bit 1 for LBD line features, the flag bits 0 and 1 distinguishing line features from point features; before obtaining the LBD line descriptor, lines are first detected with the LSD detector and then described with the LBD descriptor;

the weight of each node in the visual dictionary is determined by the inverse document frequencies of all the feature descriptors contained in that node;
Then, the sparse visual feature map is built online, with the following steps:

Step 1: obtain rectified images from the binocular camera and perform feature extraction and description on the rectified images:

extract the point-line features and their descriptors from the rectified images, computing the ORB point-feature descriptors and LBD line-feature descriptors online;
Step 2: perform feature matching and three-dimensional reconstruction on the rectified binocular images:

match and associate the feature points and feature lines in the rectified images to establish matched pairs; reconstruct the feature points and feature lines in three dimensions using the binocular imaging model, the feature lines being represented with Plücker coordinates in the reconstruction and the line endpoints being maintained; build the sparse feature map of integrated point-line features from the feature points and feature lines, the Plücker coordinates being used for the representation and computation of the lines;
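Constructing the Plücker coordinates of a line from two reconstructed endpoints can be sketched as follows, under one common convention (moment vector n = p × q, direction v = q - p; this specific convention is an assumption, not quoted from the patent):

```python
import numpy as np

# Plucker coordinates L = [n; v] of the 3-D line through two reconstructed
# endpoints p, q: v = q - p is the direction and n = p x q the moment vector;
# n is orthogonal to the direction, so n . v = 0 for a valid line.

def plucker_from_endpoints(p, q):
    v = q - p
    n = np.cross(p, q)
    return n, v

p = np.array([1.0, 0.0, 0.0])
q = np.array([1.0, 1.0, 0.0])
n, v = plucker_from_endpoints(p, q)
```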
Step 3: frame-to-frame image matching, local-map matching, and camera motion estimation:

after the feature points and feature lines have been reconstructed in three-dimensional space, these points and lines are tracked and matched; the matching comprises two parts, frame-to-frame image matching and local-map matching, the frame-to-frame matching being used to estimate the pose of the camera at the current time,
The pose is solved as follows. Assume the current left camera coordinate system O_c has rotation R_wc and translation t_wc with respect to the world coordinate system O_w, and the reconstructed feature point j has coordinates P_jw in the world coordinate system O_w; the coordinates P_jc of this feature point in the current left camera coordinate system O_c are then:

P_jc = R_cw · P_jw + t_cw;

the reconstructed feature line i has coordinates L_iw = [n^T, v^T]^T in the world coordinate system O_w; the coordinates of this feature line in the current left camera coordinate system O_c are then:

L_ic = [ R_cw , [t_cw]_× R_cw ; 0 , R_cw ] · L_iw;

wherein R_cw = R_wc^T and t_cw = -R_wc^T · t_wc are respectively the rotation and translation from the world coordinate system to the left camera coordinate system, and [t_cw]_× is the 3 × 3 antisymmetric matrix formed from the vector t_cw; the feature point P_jc is projected into the current left camera by the pinhole camera model, giving the image coordinates p̂_j of its projection; the feature line L_ic is projected into the current left camera, giving its projected line equation l_i; the errors of the point and line features are defined respectively: the error of a point is the re-projection error, i.e. the distance e_pj between the projected feature-point coordinates p̂_j and the observed coordinates p_j; the error of a line is the geometric distance e_li from the two endpoints ep1_i, ep2_i of the observed line segment to the projected line equation l_i; the target of motion estimation is to solve the following nonlinear least-squares problem:
(R_wc, t_wc) = argmin_{R,t} ( α · Σ_j || e_pj || + β · Σ_i || e_li || )
that is, the pose of the current camera is solved so that the re-projection errors of the feature points and feature lines are minimized; wherein α, β are the weights of the point-feature re-projection error and the line-feature re-projection error, two constants; to reject the influence of erroneous image feature matches, the RANSAC method can be used in the optimization process to obtain the motion estimate;
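The world-to-camera transform of a Plücker line used in the pose estimation above can be sketched as:

```python
import numpy as np

# World-to-camera transform of a Plucker line L_w = [n_w; v_w] using the 6x6
# motion matrix from the text: n_c = R_cw n_w + [t_cw]x R_cw v_w,
# v_c = R_cw v_w.

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def transform_line(R_cw, t_cw, n_w, v_w):
    n_c = R_cw @ n_w + skew(t_cw) @ (R_cw @ v_w)
    v_c = R_cw @ v_w
    return n_c, v_c

R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
n_c, v_c = transform_line(R, t, np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 1.0, 0.0]))
```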
Step 4: use the visual dictionary obtained in step 1 for loop-closure detection:

extract point and line feature descriptors from the visual keyframes and build an image database, the image database containing the point and line feature descriptors of every keyframe; according to the visual dictionary that has been built, convert the features of an image into a bag-of-words vector, the bag-of-words vector containing the TF-IDF score of each visual word in the image (TF is the frequency with which a word appears in one frame, IDF is the inverse document frequency mentioned above, and TF-IDF is the product of TF and IDF); the more frequently a visual word appears in the same frame, the higher its TF-IDF score, but the more frequently it appears in the whole image database, the lower the score;

when evaluating the similarity of two images, each image is converted into a bag-of-words vector according to its extracted features, and a similarity score is then computed from the bag-of-words vectors; the image currently captured by the camera is compared with the images in the image database, a higher score indicating images acquired at the same place, i.e. a loop closure in which this position has been visited before; geometric consistency (the two frames have enough matches supporting a Euclidean transform) and temporal consistency (the several image sequences before and after the two frames should also be very similar) are then used to further determine whether it is a loop closure;
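One common TF-IDF weighting consistent with this description can be sketched as follows; the logarithmic form of the IDF is an assumption, since the claim only specifies TF times the inverse document frequency:

```python
import math

# TF-IDF weight of a visual word: TF is its frequency within one frame,
# IDF = log(N / n_w) with N the number of images in the database and n_w the
# number of images containing the word, so words common to the whole database
# score low even when frequent in a single frame.

def tf_idf(word_count_in_frame, words_in_frame, images_with_word, total_images):
    tf = word_count_in_frame / words_in_frame
    idf = math.log(total_images / images_with_word)
    return tf * idf

rare = tf_idf(5, 100, 2, 1000)      # frequent in frame, rare in database
common = tf_idf(5, 100, 900, 1000)  # frequent in frame, common everywhere
```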
Step 5: put the point-line features, camera motion estimates, and loop-closure detections obtained in steps 2 to 4 into the keyframe-based graph optimization framework; in the back-end optimization, the orthonormal representation of lines is used to minimize the number of line parameters; the camera poses and the poses of the feature points and lines are optimized in the graph optimization framework, realizing the localization of the camera and the online construction of the sparse visual feature map.
2. The visual simultaneous mapping and localization method based on integrated point-line features according to claim 1, characterized in that, when extracting the point-line features of an image, FAST corner detection is chosen for the detection of feature points, which are described with the ORB descriptor, and line features are detected with the LSD algorithm and represented with the LBD descriptor.
3. The visual simultaneous mapping and localization method based on integrated point-line features according to claim 1, characterized in that the line features are represented with Plücker coordinates, which are used for line computations including geometric transformation and three-dimensional reconstruction, and the orthonormal representation of lines is used in the back-end optimization to minimize the number of line parameters.
4. The visual simultaneous mapping and localization method based on integrated point-line features according to claim 1, characterized in that the offline visual dictionary of integrated point-line features is built with the kmeans++ clustering method and is used during online operation to recognize and query similar images for loop-closure detection; by the method of adding flag bits, point and line features are treated separately in the visual dictionary and when building the image database; when evaluating the similarity of two images, each image is converted into a bag-of-words vector according to its extracted features, the vector containing the TF-IDF score of each visual word in the image; the more frequently a word appears in the same frame, the higher this score, but the more frequently it appears in the whole data set, the lower the score;
the bag-of-words vector contains a point part v_i^p and a line part v_i^l; the similarity of two bag-of-words vectors v_1, v_2 is defined as:
s(v_1, v_2) = 1 - (α/2) · | v_1^p / |v_1^p| - v_2^p / |v_2^p| | - (β/2) · | v_1^l / |v_1^l| - v_2^l / |v_2^l| |
wherein α, β are the weights of the point-feature score and the line-feature score, two constants satisfying α + β = 1.
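A sketch of this similarity score, assuming the vector norms in the formula are L1 norms as in standard bag-of-words scoring:

```python
import numpy as np

# Bag-of-words similarity: the point part and the line part of each word
# vector are L1-normalized and compared separately, with weights
# alpha + beta = 1. Identical vectors score 1; disjoint vectors score lower.

def similarity(v1p, v1l, v2p, v2l, alpha=0.5, beta=0.5):
    def l1(v):
        return v / np.abs(v).sum()
    sp = np.abs(l1(v1p) - l1(v2p)).sum()   # point-part L1 difference
    sl = np.abs(l1(v1l) - l1(v2l)).sum()   # line-part L1 difference
    return 1 - alpha / 2 * sp - beta / 2 * sl

s_same = similarity(np.array([1.0, 2.0]), np.array([3.0]),
                    np.array([2.0, 4.0]), np.array([5.0]))
s_diff = similarity(np.array([1.0, 0.0]), np.array([1.0]),
                    np.array([0.0, 1.0]), np.array([1.0]))
```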
CN201611142482.XA 2016-12-13 2016-12-13 Visual simultaneous mapping and positioning method based on dotted line comprehensive characteristics Active CN106909877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611142482.XA CN106909877B (en) 2016-12-13 2016-12-13 Visual simultaneous mapping and positioning method based on dotted line comprehensive characteristics


Publications (2)

Publication Number Publication Date
CN106909877A true CN106909877A (en) 2017-06-30
CN106909877B CN106909877B (en) 2020-04-14

Family

ID=59206482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611142482.XA Active CN106909877B (en) 2016-12-13 2016-12-13 Visual simultaneous mapping and positioning method based on dotted line comprehensive characteristics

Country Status (1)

Country Link
CN (1) CN106909877B (en)

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107329490A (en) * 2017-07-21 2017-11-07 歌尔科技有限公司 Unmanned plane barrier-avoiding method and unmanned plane
CN107392964A (en) * 2017-07-07 2017-11-24 武汉大学 The indoor SLAM methods combined based on indoor characteristic point and structure lines
CN107680133A (en) * 2017-09-15 2018-02-09 重庆邮电大学 A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm
CN107752910A (en) * 2017-09-08 2018-03-06 珠海格力电器股份有限公司 Region cleaning method, device, storage medium, processor and sweeping robot
CN107784671A (en) * 2017-12-01 2018-03-09 驭势科技(北京)有限公司 A kind of method and system positioned immediately for vision with building figure
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN107885224A (en) * 2017-11-06 2018-04-06 北京韦加无人机科技股份有限公司 Unmanned plane barrier-avoiding method based on tri-item stereo vision
CN108090959A (en) * 2017-12-07 2018-05-29 中煤航测遥感集团有限公司 Indoor and outdoor one modeling method and device
CN108107897A (en) * 2018-01-11 2018-06-01 驭势科技(北京)有限公司 Real time sensor control method and device
CN108230337A (en) * 2017-12-31 2018-06-29 厦门大学 A kind of method that semantic SLAM systems based on mobile terminal are realized
CN108363387A (en) * 2018-01-11 2018-08-03 驭势科技(北京)有限公司 Sensor control method and device
CN108682027A (en) * 2018-05-11 2018-10-19 北京华捷艾米科技有限公司 VSLAM realization method and systems based on point, line Fusion Features
CN108921896A (en) * 2018-06-15 2018-11-30 浙江大学 A kind of lower view vision compass merging dotted line feature
CN108961322A (en) * 2018-05-18 2018-12-07 辽宁工程技术大学 A kind of error hiding elimination method suitable for the sequential images that land
CN109034237A (en) * 2018-07-20 2018-12-18 杭州电子科技大学 Winding detection method based on convolutional Neural metanetwork road sign and sequence search
CN109101981A (en) * 2018-07-19 2018-12-28 东南大学 Winding detection method based on global image bar code under a kind of streetscape scene
CN109165680A (en) * 2018-08-01 2019-01-08 东南大学 Single target object dictionary model refinement method under the indoor scene of view-based access control model SLAM
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109409418A (en) * 2018-09-29 2019-03-01 中山大学 A kind of winding detection method based on bag of words
CN109493385A (en) * 2018-10-08 2019-03-19 上海大学 Autonomic positioning method in a kind of mobile robot room of combination scene point line feature
CN109558879A (en) * 2017-09-22 2019-04-02 华为技术有限公司 A kind of vision SLAM method and apparatus based on dotted line feature
CN109752003A (en) * 2018-12-26 2019-05-14 浙江大学 A kind of robot vision inertia dotted line characteristic positioning method and device
CN110033514A (en) * 2019-04-03 2019-07-19 西安交通大学 A kind of method for reconstructing based on dotted line feature rapid fusion
CN110375732A (en) * 2019-07-22 2019-10-25 中国人民解放军国防科技大学 Monocular camera pose measurement method based on inertial measurement unit and point line characteristics
CN110399892A (en) * 2018-04-24 2019-11-01 北京京东尚科信息技术有限公司 Environmental characteristic extracting method and device
CN110455301A (en) * 2019-08-01 2019-11-15 河北工业大学 A kind of dynamic scene SLAM method based on Inertial Measurement Unit
CN110473258A (en) * 2019-07-24 2019-11-19 西北工业大学 Monocular SLAM system initialization algorithm based on dotted line Unified frame
WO2020006686A1 (en) * 2018-07-03 2020-01-09 深圳前海达闼云端智能科技有限公司 Method for creating map, positioning method, terminal, and computer readable storage medium
CN111076733A (en) * 2019-12-10 2020-04-28 亿嘉和科技股份有限公司 Robot indoor map building method and system based on vision and laser slam
CN111310772A (en) * 2020-03-16 2020-06-19 上海交通大学 Point line feature selection method and system for binocular vision SLAM
CN111830517A (en) * 2019-04-17 2020-10-27 北京地平线机器人技术研发有限公司 Method and device for adjusting scanning range of laser radar and electronic equipment
CN111899334A (en) * 2020-07-28 2020-11-06 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics
CN112085790A (en) * 2020-08-14 2020-12-15 香港理工大学深圳研究院 Point-line combined multi-camera visual SLAM method, equipment and storage medium
CN112115980A (en) * 2020-08-25 2020-12-22 西北工业大学 Binocular vision odometer design method based on optical flow tracking and point line feature matching
CN112507778A (en) * 2020-10-16 2021-03-16 天津大学 Loop detection method of improved bag-of-words model based on line characteristics
CN113298014A (en) * 2021-06-09 2021-08-24 安徽工程大学 Closed loop detection method, storage medium and equipment based on reverse index key frame selection strategy
CN113393524A (en) * 2021-06-18 2021-09-14 常州大学 Target pose estimation method combining deep learning and contour point cloud reconstruction
CN113432593A (en) * 2021-06-25 2021-09-24 北京华捷艾米科技有限公司 Centralized synchronous positioning and map construction method, device and system
CN113450412A (en) * 2021-07-15 2021-09-28 北京理工大学 Visual SLAM method based on linear features
CN113514067A (en) * 2021-06-24 2021-10-19 上海大学 Mobile robot positioning method based on point-line characteristics
CN113532431A (en) * 2021-07-15 2021-10-22 贵州电网有限责任公司 Visual inertia SLAM method for power inspection and operation
US11270148B2 (en) 2017-09-22 2022-03-08 Huawei Technologies Co., Ltd. Visual SLAM method and apparatus based on point and line features
CN114789446A (en) * 2022-05-27 2022-07-26 平安普惠企业管理有限公司 Robot pose estimation method, device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012192090A (en) * 2011-03-17 2012-10-11 Kao Corp Information processing method, method for estimating orbitale, method for calculating frankfurt plane, and information processor
CN102855649A (en) * 2012-08-23 2013-01-02 山东电力集团公司电力科学研究院 Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point
CN102967297A (en) * 2012-11-23 2013-03-13 浙江大学 Space-movable visual sensor array system and image information fusion method
CN104639932A (en) * 2014-12-12 2015-05-20 浙江大学 Free stereoscopic display content generating method based on self-adaptive blocking
CN104915949A (en) * 2015-04-08 2015-09-16 华中科技大学 Image matching algorithm of bonding point characteristic and line characteristic
CN106022304A (en) * 2016-06-03 2016-10-12 浙江大学 Binocular camera-based real time human sitting posture condition detection method



Also Published As

Publication number Publication date
CN106909877B (en) 2020-04-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant