CN111899334B - Visual synchronous positioning and map building method and device based on point-line characteristics - Google Patents

Visual synchronous positioning and map building method and device based on point-line characteristics

Info

Publication number
CN111899334B
CN111899334B (granted patent; application CN202010739596.2A)
Authority
CN
China
Prior art keywords
point
feature
image
points
environment image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010739596.2A
Other languages
Chinese (zh)
Other versions
CN111899334A (en)
Inventor
Meng Yu (孟宇)
Wang Ming (王明)
Sun Hao (孙昊)
Liu Li (刘立)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN202010739596.2A
Publication of CN111899334A
Application granted
Publication of CN111899334B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The invention discloses a visual synchronous positioning and map building method and device based on point-line features. The method comprises the following steps: acquiring an environment image of the current environment in which positioning and map construction are to be performed; preprocessing the environment image, and extracting feature points and straight-line features from the preprocessed image to obtain the feature-point information and straight-line feature information it contains; matching the feature information of the environment images to be matched to obtain the matching result of the corresponding environment images; and constructing a visual synchronous positioning and mapping system based on the matching result, through which the corresponding device performs self-positioning and global mapping. The method enables a device to complete synchronous positioning and map building using only a camera, while balancing system accuracy and real-time performance.

Description

Visual synchronous positioning and map building method and device based on point-line characteristics
Technical Field
The invention relates to the technical field of sensor positioning, in particular to a visual synchronous positioning and map building method and device based on point-line characteristics.
Background
Synchronous positioning and mapping (SLAM) is the overall technical process in which a device carrying specific sensors localizes itself in an unknown environment by dynamically estimating its own pose changes while simultaneously building a model of the surrounding environment; in popular terms, it is how a robot answers the questions "Where am I?" and "What is around me?". Visual SLAM (also called V-SLAM), which uses a camera as the main sensor, is one of the current research hotspots in this technical field.
At present there are two main visual SLAM approaches: the feature point method and the direct method. The direct method rests on the constant-brightness assumption, which easily fails in reality; even different camera exposure parameters can make the image brighter or darker and break the algorithm. The feature point method is mature and easy to use, but it depends heavily on features and is prone to tracking failure in texture-poor places. Improvements to the existing algorithms are therefore needed.
Disclosure of Invention
The invention provides a visual synchronous positioning and map building method and device based on point-line features, aiming to solve the problems that the existing method depends heavily on features and is prone to tracking failure in texture-poor places.
In order to solve the technical problems, the invention provides the following technical scheme:
In one aspect, the invention provides a visual synchronous positioning and map building method based on point-line features, which comprises the following steps:
acquiring an environment image corresponding to the current environment to be positioned and the map construction environment;
preprocessing the environment image, and performing feature point extraction and linear feature extraction on the preprocessed environment image to acquire feature point information and linear feature information in the environment image;
matching the characteristic information of the environmental image to be matched to obtain a matching result of the corresponding environmental image;
and constructing a visual synchronous positioning and mapping system based on the matching result of the corresponding environment image so as to realize self positioning and global mapping of the corresponding equipment through the visual synchronous positioning and mapping system.
Further, the preprocessing the environment image includes:
and according to a preset camera model and parameters, realizing the distortion correction of the environment image and the alignment of the left eye image and the right eye image.
Further, the feature point extraction of the preprocessed environment image includes:
dividing the preprocessed environment image into a plurality of image subregions according to the image size;
based on a FAST algorithm, sequentially extracting feature points of each divided image subregion to obtain feature points corresponding to each image subregion;
calculating a binary descriptor corresponding to each feature point based on a BRIEF algorithm;
establishing a root node under the size of the environment image, then uniformly dividing the root node into four sub-nodes, traversing all feature points on the environment image and counting the number of the feature points in each node area;
if the number of the characteristic points in the node area corresponding to the current node is one, marking the current node as not to be divided; if the number of the characteristic points in the node area corresponding to the current node is zero, deleting the current node; and if the number of the feature points in the node area corresponding to the current node is more than one, continuing to partition the current node until the total number of the nodes reaches a set threshold value or the number of the feature points in each node area is one.
Further, when the node stops being segmented, if the node area with the characteristic point number larger than one exists, only the characteristic point with the maximum response value in the node area with the characteristic point number larger than one is reserved, and other characteristic points are deleted.
Further, based on the FAST algorithm, feature point extraction is sequentially performed on each of the divided image sub-regions to obtain feature points corresponding to each image sub-region, including:
selecting candidate points from the preprocessed environment image;
taking the candidate point as a circle center, comparing all pixel points on a set neighborhood radius, and sequentially comparing the gray value of all the pixel points on the set neighborhood radius with the gray value of the candidate point;
and when the absolute value of the gray value difference between the pixel points with the continuous preset number and the candidate points is larger than a preset gray threshold value, determining the candidate points as feature points.
Further, with the candidate point as the center of a circle, comparing all pixel points on the set neighborhood radius, and sequentially comparing the gray values of all pixel points on the set neighborhood radius with the gray values of the candidate point, including:
firstly, comparing gray values of pixel points in four directions of the candidate point, namely the upper direction, the lower direction, the left direction and the right direction with the gray value of the candidate point in sequence to obtain an absolute value of a gray value difference value between the four pixel points and the candidate point; when three of the absolute values of the gray value difference values between the four pixel points and the candidate point are greater than a preset gray threshold value, sequentially comparing the gray values of other pixel points on a set neighborhood radius with the gray values of the candidate point; otherwise, directly determining that the candidate point is not a feature point.
Further, based on the BRIEF algorithm, calculating a binary descriptor corresponding to the feature point, including:
determining a neighborhood corresponding to the current feature point, and acquiring a gray value centroid of the neighborhood;
establishing a plane coordinate system which takes the current characteristic point as an origin point and takes a connecting line between the current characteristic point and the gray value centroid of the neighborhood as an X axis;
selecting point pairs in the neighborhood range based on the established plane coordinate system;
respectively carrying out preset operation on the selected point pairs, and combining operation results of all the point pairs to obtain a descriptor of the current characteristic point; wherein the operation result of each point pair is 0 or 1.
Further, performing linear feature extraction on the preprocessed environment image, including:
adopting an LSD algorithm to extract linear features of the preprocessed environment image, and adopting an LBD algorithm to calculate an LBD descriptor of each linear feature;
selecting a preset number of element pairs from the calculated LBD descriptor;
respectively comparing the sizes of two elements in the selected element pairs, and combining the comparison results of all the element pairs to obtain a descriptor of the current linear characteristic; wherein the comparison result of each element pair is 0 or 1.
Further, the linear feature extraction is performed on the preprocessed environment image, and the method further includes:
establishing a polar coordinate system in the preprocessed environment image;
obtaining polar coordinate representation of each linear feature according to the polar coordinate system;
and clustering the linear features represented in polar coordinates with the J-linkage algorithm, assigning the same label number to linear features of the same class, and then weighted-averaging the polar coordinates of the linear features with the same label number to obtain merged linear features, thereby restoring each split linear feature.
In another aspect, the present invention further provides a visual synchronous positioning and mapping apparatus based on point-line features. The apparatus comprises a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to perform the following steps:
acquiring an environment image corresponding to the current environment to be positioned and the map construction environment;
preprocessing the environment image, and performing feature point extraction and straight line feature extraction on the preprocessed environment image to acquire feature point information and straight line feature information in the environment image;
matching the characteristic information of the adjacent frame environment images to obtain a matching result of the adjacent frame environment images;
and constructing a visual synchronous positioning and mapping system based on the matching result of the adjacent frame environment images so as to realize self positioning and global mapping of the corresponding equipment through the visual synchronous positioning and mapping system.
In yet another aspect, the present invention also provides a computer-readable storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement the above method.
The technical scheme provided by the invention has the beneficial effects that at least:
the visual synchronous positioning and map construction method integrates two-dimensional straight-line features into the visual SLAM system, thereby alleviating the tracking failures and accuracy loss that a purely feature-point-based SLAM system suffers in texture-poor scenes (such as indoor corridors and highways);
moreover, the integration of straight-line features addresses the following problems: 1) the splitting of intersecting straight-line features; 2) the splitting of continuous straight-line features in specific scenes (such as image blur); 3) the screening-out of invalid straight-line features such as overly short segments. On the premise of not affecting pose estimation accuracy, the relationship between the extraction and matching quantities of point and straight-line features and the pose solving time of the system is analysed, ultimately improving the real-time performance of the system.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of the visual synchronous positioning and mapping method based on point-line features according to the first embodiment of the present invention;
fig. 2 is a schematic diagram of FAST feature points provided by the first embodiment of the present invention;
FIG. 3 is a schematic diagram of a feature point coordinate system according to a first embodiment of the present invention;
FIG. 4 is a schematic view of an LBD descriptor for a straight line feature provided in accordance with a first embodiment of the present invention;
FIG. 5 is a schematic representation of a straight line feature in polar coordinate representation provided by a first embodiment of the present invention;
fig. 6 is a schematic block flow diagram of a visual SLAM system according to a first embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
First embodiment
This embodiment provides a visual synchronous positioning and mapping method based on point-line features, which can be implemented by an electronic device such as a terminal or a server. The execution flow of the method is shown in fig. 1 and comprises the following steps:
s101, acquiring an environment image corresponding to a current environment to be positioned and a map constructed environment;
s102, preprocessing an environment image, and performing feature point extraction and straight line feature extraction on the preprocessed environment image to acquire feature point information and straight line feature information in the environment image;
s103, matching the characteristic information of the environment image to be matched to obtain a matching result of the corresponding environment image;
and S104, constructing a visual synchronous positioning and mapping system based on the matching result of the corresponding environment image, so as to realize self positioning and global mapping of the corresponding equipment through the visual synchronous positioning and mapping system.
It should be noted that the process of preprocessing the acquired environment image in this embodiment includes: and preprocessing the acquired environment image according to a preset camera model and parameters (including internal and external parameters and system parameters of the camera), and completing distortion correction of the environment image and alignment of the left and right eye images.
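As an illustration of this preprocessing step, the following is a minimal OpenCV sketch, assuming a calibrated pinhole stereo rig; the parameter names (K1, D1, K2, D2, R, T) and the function itself are illustrative, not taken from the patent:

import cv2
import numpy as np

def preprocess_stereo(img_l, img_r, K1, D1, K2, D2, R, T):
    h, w = img_l.shape[:2]
    # Rectification transforms that row-align the two views.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
    map1l, map2l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
    map1r, map2r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
    # Remapping removes lens distortion and aligns left/right epipolar lines.
    rect_l = cv2.remap(img_l, map1l, map2l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map1r, map2r, cv2.INTER_LINEAR)
    return rect_l, rect_r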
To address the uneven distribution of point features produced by common point-feature extraction algorithms, the existing algorithm is improved and a feature point extraction algorithm with homogenized image-block detection and quadtree management of feature points is proposed:
on the basis of ORB features, the preprocessed environment image is divided into several image sub-regions according to the image size; based on the FAST (Features from Accelerated Segment Test) algorithm, feature points are extracted from each divided image sub-region in turn to obtain the feature points corresponding to each sub-region. During feature point detection, if the detection result is empty, the extraction threshold is adjusted and detection is repeated until the result is non-empty or the threshold reaches its limit value.
Specifically, the idea of FAST feature point detection is as follows: with the candidate point as the circle centre, all pixel points on a set neighborhood radius are compared; whether the candidate is a feature point is judged by comparing each pixel's gray value with that of the candidate, and the candidate is considered a feature point when the absolute gray difference between N consecutive points on the circle and the candidate exceeds a set threshold. The detailed steps are as follows:
1. selecting a candidate point p from the preprocessed environment image, and setting the gray value of the candidate point p as Ip;
2. setting a proper threshold value T;
3. selecting all pixel points on a set neighborhood radius by taking p as a circle center;
4. comparing the gray values of all pixel points on the set neighborhood radius with the gray value of p in sequence;
5. when the absolute difference between the gray values of a consecutive preset number of pixel points and that of p is larger than the threshold T, i.e. their gray values lie outside the interval [Ip − T, Ip + T], p is determined to be a feature point.
Specifically, in this embodiment, step 3 selects 16 discrete pixel points on a circle of radius 3, as shown in fig. 2. Step 4 can also be accelerated: the gray values of the pixels directly above, below, left and right of p (pixel points 1, 5, 9 and 13 in fig. 2) are first compared with that of p to obtain the four absolute gray differences; only when at least three of these differences exceed T can p be a feature point, in which case the remaining circle pixels are compared with p in turn; otherwise p is immediately rejected. This pretest greatly speeds up FAST feature extraction. A sketch of the test follows.
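The following is a minimal sketch of the segment test with the compass pretest, assuming a grayscale numpy image and a candidate at least 3 pixels from the border; the circle offsets and the run length n = 12 are illustrative choices:

import numpy as np

# Offsets of the 16 pixels on the circle of radius 3 (fig. 2), starting at
# the top and going clockwise; points 1, 5, 9, 13 of the text correspond
# to indices 0, 4, 8, 12 here.
CIRCLE = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
          (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_fast_corner(img, x, y, T, n=12):
    Ip = int(img[y, x])
    # Accelerated pretest: at least 3 of the 4 compass points must differ
    # from Ip by more than T, otherwise the candidate is rejected outright.
    compass = [abs(int(img[y + dy, x + dx]) - Ip) > T
               for dx, dy in (CIRCLE[0], CIRCLE[4], CIRCLE[8], CIRCLE[12])]
    if sum(compass) < 3:
        return False
    # Full segment test: n contiguous circle pixels (with wrap-around) must
    # all be brighter than Ip + T or all darker than Ip - T.
    vals = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (+1, -1):
        run = 0
        for v in vals + vals:          # doubled list handles wrap-around
            run = run + 1 if sign * (v - Ip) > T else 0
            if run >= n:
                return True
    return False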
In addition, FAST feature points carry no orientation information. To achieve rotation invariance of the feature points, this embodiment computes a binary descriptor for each feature point based on the BRIEF algorithm and additionally applies the gray centroid method: a plane coordinate system is established with the feature point p as the centre and the line from p to the centroid Q of the circular image block B as the x-axis, as shown in fig. 3, where (a) is the original image and (b) the image rotated by some angle. During image rotation, the relative position of the centre p and the centroid Q remains unchanged, so the established coordinate system is rotation-invariant. The calculation proceeds as follows:
In a small image block B, the moments of the image block are defined as:

m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),  p, q ∈ {0, 1}

so that in particular m00 = Σ I(x,y), m10 = Σ x I(x,y) and m01 = Σ y I(x,y). The gray value centroid of the image block is found from the moments:

Q = ( m10 / m00 , m01 / m00 )

Connecting the geometric centre of the image block and the centroid gives a direction vector, from which the description direction of the feature point is computed:

θ = arctan2( m01 , m10 )
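As an illustration, a minimal numpy sketch of this gray-centroid orientation, assuming the circular patch lies fully inside the image; the function name and patch radius are illustrative:

import numpy as np

def orientation(img, x, y, r=15):
    # Image-block moments m_pq = sum of x^p y^q I(x,y) over the circular
    # patch B centred at the keypoint (x, y).
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    mask = xs**2 + ys**2 <= r * r
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64) * mask
    m10 = (xs * patch).sum()
    m01 = (ys * patch).sum()
    # The keypoint direction is the angle of the vector from the patch
    # centre p to the gray centroid Q = (m10/m00, m01/m00).
    return np.arctan2(m01, m10)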
the steps of calculating the binary descriptor of the feature point using the modified BRIFE algorithm are as follows:
1. determining a neighborhood corresponding to the current feature point, and acquiring a gray value centroid of the neighborhood;
2. establishing a plane coordinate system which takes the current characteristic point as an original point and takes a connecting line between the current characteristic point and the gray value centroid of the neighborhood as an X axis;
3. selecting point pairs in a neighborhood range based on the established plane coordinate system;
4. respectively carrying out preset operation on the selected point pairs, and combining operation results of all the point pairs to obtain a descriptor of the current characteristic point; wherein the operation result of each point pair is 0 or 1.
This improvement gives the FAST feature points a scale- and rotation-aware description and greatly increases the robustness of their expression across different images.
The modified BRIEF computes binary descriptors whose description vectors consist of 0s and 1s encoding the relative magnitudes of two pixels near the keypoint. Random point-pair comparison is fast, and feature matching only requires computing the Hamming distance between binary vectors: descriptors of non-matching feature points have a Hamming distance of about 128, while the distance between true matching points is far smaller than 128.
Further, after completing the extraction of each image subregion, the present embodiment manages the detected feature points using a quadtree: establishing a root node under the size of the environment image, uniformly dividing the root node into four sub-nodes, traversing all feature points on the environment image and counting the number of the feature points in each node area; if the number of the characteristic points in the node area corresponding to the current node is one, marking the current node as not to be segmented; if the number of the characteristic points in the node area corresponding to the current node is zero, deleting the current node; and if the number of the feature points in the node area corresponding to the current node is more than one, continuing to partition the current node until the total number of the nodes reaches a set threshold value or the number of the feature points in each node area is one. When the node stops segmenting, if a node area with the characteristic point number larger than one exists, only the characteristic point with the maximum response value in the node area with the characteristic point number larger than one is reserved, and other characteristic points are deleted.
The quadtree management method can avoid the phenomenon that the characteristic points are stacked in a specific area to the maximum extent, thereby achieving the effect of uniformly extracting the characteristic points.
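A minimal sketch of this quadtree distribution, assuming keypoints with cv2.KeyPoint-style attributes .pt and .response; class and function names, and the details of the termination policy, are illustrative:

class Node:
    def __init__(self, x0, y0, x1, y1, pts):
        self.box = (x0, y0, x1, y1)
        # Keep only the keypoints that fall inside this node's region.
        self.pts = [p for p in pts if x0 <= p.pt[0] < x1 and y0 <= p.pt[1] < y1]

def distribute_quadtree(keypoints, w, h, max_nodes):
    nodes = [Node(0, 0, w, h, keypoints)]
    done = False
    while not done and len(nodes) < max_nodes:
        done = True
        for node in list(nodes):
            if len(node.pts) <= 1:
                continue                      # marked "not to be divided"
            done = False
            nodes.remove(node)
            x0, y0, x1, y1 = node.box
            mx, my = (x0 + x1) // 2, (y0 + y1) // 2
            for box in ((x0,y0,mx,my),(mx,y0,x1,my),(x0,my,mx,y1),(mx,my,x1,y1)):
                child = Node(*box, node.pts)
                if child.pts:                 # empty children are deleted
                    nodes.append(child)
    # When splitting stops, keep only the strongest response per region.
    return [max(n.pts, key=lambda p: p.response) for n in nodes if n.pts]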
In addition, this embodiment improves the existing algorithm with respect to the line-feature splitting phenomenon and adopts a straight-line extraction algorithm based on LBD descriptors. The LBD descriptor first erects a rectangle around the line segment, called the line segment support region, and defines the main direction d_L of the line segment feature and the direction d_⊥ perpendicular to it, as shown in fig. 4.
The number of stripes in the line segment support region is denoted m, and the pixel width of each stripe w. A global Gaussian function f_g is applied across the rows of the support region to reduce the boundary effect between stripes. From the gradients of a stripe and its adjacent stripes, the feature vector BD_j of each stripe can be computed, and concatenating the feature vectors of all stripes forms the LBD descriptor:

LBD = ( BD_1^T , BD_2^T , ... , BD_m^T )^T

The local gradients of each row of a stripe and its adjacent stripes are accumulated separately; for the k-th row:

v1_j^k = λ Σ_{g_d⊥ > 0} g_d⊥    v2_j^k = λ Σ_{g_d⊥ < 0} ( −g_d⊥ )
v3_j^k = λ Σ_{g_dL > 0} g_dL    v4_j^k = λ Σ_{g_dL < 0} ( −g_dL )

where λ = f_g(k) f_l(k) is the Gaussian coefficient. Stacking these row sums forms the feature description matrix of BD_j; finally the mean vector M_j and standard deviation vector S_j of this matrix are computed to obtain the stripe feature vector:

BD_j = ( M_j^T , S_j^T )^T

and the final LBD descriptor:

LBD = ( M_1^T , S_1^T , ... , M_m^T , S_m^T )^T ∈ R^{8m}
With m = 9 and w = 7, the LBD descriptor is a 72-dimensional floating-point feature vector. Closed-loop detection requires a large number of repeated distance computations between feature vectors, so this representation is unsuitable for a SLAM system with strict real-time requirements, and to improve efficiency it is converted into a binary descriptor. Similar to the BRIEF descriptor, the 0s and 1s of the binary LBD vector encode magnitude relations between elements of the 72-dimensional floating-point vector: 256 specific element pairs are taken from the vector and compared, and the Hamming distance then serves as the distance between two feature vectors. Since this only requires XOR-ing and summing two binary strings, matching efficiency is greatly improved (see the sketch after the steps below).
Specifically, in this embodiment, the step of performing linear feature extraction on the preprocessed environment image includes:
1. performing linear feature extraction on the preprocessed environment image by adopting an LSD algorithm, and calculating an LBD descriptor of each linear feature by adopting an LBD algorithm;
2. selecting a preset number of element pairs from the calculated LBD descriptor;
3. respectively comparing the sizes of two elements in the selected element pairs, and combining the comparison results of all the element pairs to obtain a descriptor of the current linear characteristic; wherein the comparison result of each element pair is 0 or 1.
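A minimal sketch of this binarization and the XOR-based Hamming distance, assuming a 72-dimensional float LBD as input; the fixed random pair table stands in for the 256 specific element pairs, whose actual selection the text does not spell out:

import numpy as np

rng = np.random.default_rng(0)
# 256 fixed element-index pairs drawn once from the 72-dim descriptor; the
# same table must be reused for every line so the binary codes are comparable.
PAIRS = rng.integers(0, 72, size=(256, 2))

def binarize_lbd(lbd72):
    # Each bit encodes the magnitude relation of one element pair.
    bits = lbd72[PAIRS[:, 0]] > lbd72[PAIRS[:, 1]]
    return np.packbits(bits.astype(np.uint8))      # 32-byte binary code

def hamming(a, b):
    # XOR the two codes and count the set bits, the distance of the text.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())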
Further, to solve the problem of image straight-line features being split, this embodiment clusters straight lines using the idea of Hough detection and restores each split line. In polar coordinates, a straight line is represented by a point (ρ, θ), where ρ is the distance from the origin to the line and θ is the angle between the polar axis and the perpendicular from the origin to the line, as shown in fig. 5.
With lines represented in polar coordinates, the fragments of one split line share the same polar point. First a polar coordinate system is established in the image: the image midpoint

O = ( W/2 , H/2 )

is chosen as the pole O, and a Cartesian frame is set up about it:

x = u − W/2
y = H/2 − v

where (u, v) is the pixel position in the image coordinate system. The conversion between the polar and Cartesian coordinates of a straight line is:
ρ=x cos(θ)+y sin(θ)
according to the LSD straight line detection result, the end points of the straight line are brought into solvable polar coordinates, then straight line parameters represented by the polar coordinates are classified, straight lines of the same type are assigned with the same label number, and then the polar coordinates corresponding to line segments with the same label number are weighted and averaged to obtain the corresponding straight line. The method comprises the following specific steps:
1. establishing a polar coordinate system in the preprocessed environment image;
2. obtaining polar coordinate representation of each linear feature according to the established polar coordinate system;
3. clustering the polar-represented line features with the J-linkage algorithm, assigning the same label number to line features of the same class, and weighted-averaging the polar coordinates of line features with the same label to obtain the merged line feature, thereby restoring each split line.
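A minimal sketch of the polar conversion and the label-wise weighted merge, in the centred Cartesian frame above; weighting by segment length and averaging angles via unit vectors are implementation choices, not prescribed by the text:

import numpy as np

def to_polar(x1, y1, x2, y2):
    # Line through two endpoints: theta is the direction of the normal,
    # rho the (non-negative) distance from the origin to the line.
    theta = np.arctan2(x2 - x1, -(y2 - y1))
    rho = x1 * np.cos(theta) + y1 * np.sin(theta)
    if rho < 0:
        rho, theta = -rho, theta + np.pi
    return rho, theta % (2 * np.pi)

def merge_by_label(lines, labels, lengths):
    # Weighted average of (rho, theta) per label; angles are averaged as
    # unit vectors to avoid wrap-around errors near 0 / 2*pi.
    acc = {}
    for (rho, theta), lab, w in zip(lines, labels, lengths):
        r, c, s, tw = acc.get(lab, (0.0, 0.0, 0.0, 0.0))
        acc[lab] = (r + w * rho, c + w * np.cos(theta),
                    s + w * np.sin(theta), tw + w)
    return {lab: (r / tw, np.arctan2(s, c) % (2 * np.pi))
            for lab, (r, c, s, tw) in acc.items()}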
The specific implementation of the method of the present embodiment depends on the construction of the whole SLAM system, including the initialization, tracking, optimization, and loop detection processes of the system, and the whole process is shown in fig. 6.
First the image is preprocessed: after an image frame captured by the camera enters the tracking thread at the system entrance, it is preprocessed according to the preset camera model and parameters (including camera intrinsics, extrinsics and system parameters). Distortion correction and left-right image alignment are performed first; feature points are then extracted and homogenized, managed in a quadtree, and their ORB descriptors computed with scale and rotation invariance handled. Feature lines are extracted and their LBD descriptors computed, the detected line features are clustered with the J-linkage algorithm, and the feature descriptions are stored in a common data structure, completing the construction from image to information frame in preparation for subsequent matching and computation.
The system is then initialized in three steps: judging whether the image frame satisfies the initialization conditions, initializing the map points of the keyframe, and finally assigning the relevant data. When creating the initial map, 3D point and structural-line initialization are placed in parallel threads to increase speed. For the 3D point map, matching search is performed on the ORB feature points extracted from the left and right image frames; the positions of the matched points give the disparity d, from which the map point depth is recovered, and the resulting set of 3D points is inserted into the initial map. For structural lines, since line segments are not matched by their two endpoints, triangulation cannot rely on segment endpoints during initialization. In this embodiment the left image frame is taken as reference: a parallel line is drawn through a line endpoint of the left frame to intersect the corresponding matched line in the right frame, these two points yield the disparity and recover the 3D point depth, and finally the endpoint coordinates of the structural line are used to parameterize the spatial line, obtaining its Plücker coordinates. The spatial line is inserted into the initial map, completing system initialization. A depth-recovery sketch follows.
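For the depth recovery step, a minimal sketch of disparity-based triangulation on the rectified pair; fx, fy, cx, cy and the baseline b are calibration values, and the names are illustrative:

import numpy as np

def backproject(u_l, v_l, u_r, fx, fy, cx, cy, b):
    # Disparity between the horizontal coordinates of a matched point in
    # the rectified pair; depth follows as Z = fx * b / d.
    d = u_l - u_r
    if d <= 0:
        return None                     # invalid match or point at infinity
    Z = fx * b / d
    # Back-project the pixel to a 3D map point in the left camera frame.
    return np.array([(u_l - cx) * Z / fx, (v_l - cy) * Z / fy, Z])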
The working principle of the SLAM system is to solve inversely for the camera's spatial pose from the positions at which fixed spatial landmarks appear as feature points on the two-dimensional image plane; accurate matching of feature points and feature lines is therefore the precondition for accurate pose calculation.
In point-line feature matching, depending on the matching object, matching is performed between adjacent frames and between the local map and the current frame. Adjacent frames allow an initial pose estimate from relatively few features, while matching against the local map constrains the map points co-observed by several image frames, so that nonlinear optimization can exploit more feature information. The point-line features of this embodiment adopt the following matching strategies to improve matching speed and accuracy:
for point feature matching, in order to accelerate the search speed of matched feature points, a tracking thread uses a uniform motion model hypothesis, namely, image frame intervals with a higher frame rate are regarded as uniform motion, under the model hypothesis, a rough pose initial value can be given to a camera according to pose estimation of a preamble frame, then a map point of a previous frame is projected to a current frame according to a pose estimation value, then feature matching is carried out within a set search radius, and the matching strategy can omit most feature points by using prior information, so that accurate search is carried out, and the effect of simultaneously improving the matching speed and the matching precision is achieved.
Matching search for spatial lines is similar to that for spatial points, but the observation of a spatial line differs from that of a feature point in that partial observation occurs: as the camera moves, the observed endpoints of one and the same spatial line may lie at different spatial positions. During projection search, lines with negative endpoint depth (extending behind the camera) must be clipped, keeping only line features within the image window for the search. At the same time, a classified search based on the structural-line clustering result from image preprocessing narrows the search range and speeds up matching.
An initial camera pose estimate can be obtained from the matching result of adjacent frames, but because of the limited number and quality of features, so few constraints leave errors in the pose estimate, and without associating more image frames these errors accumulate and cause drift. Therefore the system constrains all keyframes sharing a co-visibility relation with the current frame through the constructed environment map and performs nonlinear optimization, obtaining more accurate pose estimates and less error accumulation. Similar to adjacent-frame matching, local-map tracking gathers all keyframes co-observed with the current frame and constrains the temporary map formed by the map points of these co-visible keyframes against the feature points of the current frame to optimize the camera pose. Keyframes whose number of features co-observed with the current frame exceeds a set threshold are added to the co-visibility graph, projection search is performed on its map points, and the subsequent optimization is solved after matching completes.
Once feature point matching is complete and the association between 3D map points and image-frame feature points established, the 3D-to-2D motion can be solved by PnP (Perspective-n-Point), i.e. the camera pose is solved from landmark points with known coordinates inserted in the map and their projected positions on the two-dimensional image. Since linear solutions such as direct linear transformation or P3P use few feature point pairs, cannot effectively suppress noise, and line features are hard to project and linearize, this embodiment adopts nonlinear optimization: the PnP problem is cast as a nonlinear least-squares problem and an optimization model of the point-line features is built by minimizing the reprojection error:
T* = argmin_T  Σ_i ρ( ‖ u_i − π( T X_i ) ‖² ) + Σ_j ρ( ‖ d( l_j , π( T L_j ) ) ‖² )

where π(·) is the camera projection, u_i the observed pixel of 3D point X_i, l_j the observed 2D line of spatial line L_j, d(·,·) the point-line distance and ρ a robust kernel,
and solved for the optimal estimate. Pose solving in the tracking thread divides into the following stages: 1. under the uniform motion model, the motion estimate of the previous frame serves as the initial value of the current frame's motion, and feature points are projected with this assumed motion to complete the matching search; 2. a least-squares optimization is run on the cost function built from the matching result. Treated as a BA problem with only the camera pose as optimization variable, this yields a unary-edge optimization model: the pose variable is set as the vertex of the graph optimization, the reprojection error terms as edges, and a graph optimization library is called to obtain the optimal solution. A residual sketch follows.
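A minimal sketch of the point and line reprojection residuals that such a least-squares model stacks; the observed line is assumed normalized as l = (a, b, c) with a² + b² = 1, and optimizing over a 6-DoF pose parameterization (e.g. with scipy.optimize.least_squares on a twist vector) is left implicit:

import numpy as np

def point_residual(T, X, uv_obs, K):
    # Reprojection error of one 3D map point against its 2D observation.
    Pc = T[:3, :3] @ X + T[:3, 3]
    uv = (K @ (Pc / Pc[2]))[:2]
    return uv - uv_obs

def line_residual(T, P1, P2, line_obs, K):
    # Signed distances of the two projected endpoints of a 3D segment to
    # the observed normalized 2D line l = (a, b, c).
    res = []
    for P in (P1, P2):
        Pc = T[:3, :3] @ P + T[:3, 3]
        u, v = (K @ (Pc / Pc[2]))[:2]
        res.append(line_obs[0] * u + line_obs[1] * v + line_obs[2])
    return np.array(res)
# Stacking both residual types and minimizing their squared sum over the
# 6-DoF pose is the nonlinear least-squares PnP of the text.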
And further performing key frame processing, wherein the system judges whether to insert a key frame after processing each image frame, and does not insert the key frame when the following conditions occur:
1. while a global loop closure is running, the local map is occupied, and inserting a keyframe would interfere with the global loop optimization.
2. shortly after relocalization, the current frame carries little new information relative to the relocalization candidate frame, so a keyframe may be inserted once relocalization has succeeded, but frames close to the relocalization need not be inserted frequently.
Excluding the above, the image frame should satisfy the following condition for inserting the key frame:
1. the number of tracked inliers must exceed a set threshold, and the overlap with existing keyframes must not be excessive.
2. more than MAX frames have passed since the last keyframe insertion; or at least MIN frames have passed and the local mapping thread is idle; or fewer than 3 keyframes are queued in the local mapping thread.
The system adopts strict keyframe insertion and screening conditions to keep the keyframes few and of high quality, updating the pose and map points as keyframes are inserted, in preparation for subsequent matching and optimization. A decision sketch follows.
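A minimal sketch of the keyframe decision mirroring the conditions above; every field of the hypothetical state object (thresholds, counters, queue length) is illustrative:

def should_insert_keyframe(state):
    # Blocking conditions first: global loop closing busy, or too soon
    # after a successful relocalization.
    if state.loop_closing_active or state.frames_since_reloc < state.min_reloc_gap:
        return False
    # Quality gates: enough tracked inliers but not near-total overlap.
    if state.tracked_inliers < state.min_inliers or state.overlap > state.max_overlap:
        return False
    # Timing gates mirroring the MAX / MIN frame rules of the text.
    return (state.frames_since_kf >= state.max_kf_gap
            or (state.frames_since_kf >= state.min_kf_gap and state.mapper_idle)
            or state.kf_queue_len < 3)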
The map is then updated: the system first builds the co-visibility graph for keyframes co-observed with the current frame, then traverses the image features of the current frame, updates observation information and the map for co-observed (i.e. matched) features, and marks features that are not co-observed for later updating. Screening is also required when creating the map to ensure high feature quality:
1. in all key frames where the map feature is predicted to be visible, the number of frames actually tracked is more than a quarter of the number of frames to ensure the quality of the feature.
2. features that are observed too few times, or that cease to be observed after their last observation, are removed to keep the map compact.
Although line segments in the map are represented as infinitely extended straight lines, segment endpoints bound the search range during matching and improve computational efficiency, so they must be maintained.
After the map point is updated, the system needs to perform local Bundle Adjustment once to optimize the inserted map point information and observation information, and after the optimization is completed, key frames need to be screened to further reduce the redundancy of the system information, and when the ratio of the repeatedly observed feature number on any key frame in the common view exceeds 90%, the key frame is redundant, and can be removed.
Local optimization is then performed: the current keyframe (pKF) and the co-visible keyframes (localKF) sharing a co-visibility relation with it, together with their observations and states, form the optimization variables of the local optimization, comprising camera poses and the spatial positions of features; in addition, fixed keyframes (fixedKF) connected to the co-visibility graph contribute only reprojection constraints to the error equation and do not participate in the optimization as variables.
Loop detection is then performed, consisting of two parts: first, place recognition, i.e. appearance verification, judging similarity through the feature information between images; second, geometric verification through the geometric relation between the loop candidate frame and the current keyframe. After the bag-of-words model over the combined point-line features is built in this embodiment, the incoming current frame is compared one by one with the co-visibility keyframes, using a norm as the similarity score between image frames, of the form:
s( v1 , v2 ) = 1 − (1/2) ‖ v1 / ‖v1‖ − v2 / ‖v2‖ ‖
The similarity scores are sorted and the minimum matching score S_min is taken as a reference value. Once the minimum score is determined, frames co-visible with the current frame are excluded to avoid the problem of scene similarity at close range; keyframes are then retrieved inversely through the BoW vectors, the number of shared words per frame is counted, and candidate frames within the threshold condition are selected. Candidate frames whose bag-of-words matching score against the current frame exceeds S_min are retained as new candidates. Finally, each candidate frame and its co-visible keyframes form a candidate group; whether the keyframes in each group satisfy the matching-score screening condition is checked, and a candidate passing the condition 3 consecutive times, i.e. passing consistency verification, becomes the final candidate frame. After appearance similarity detection, geometric verification is performed: the current frame and the candidate frame are data-associated and the inter-frame pose transform is solved. Matching follows the BoW vectors of the image features; the match count and corresponding map are determined, outliers are screened and the pose computed with RANSAC, and reprojection search matching is performed with the solved pose. Bidirectional optimization is then executed on the search result, projecting the current frame onto the candidate frame and the candidate frame onto the current frame to build the cost function, after which inliers are screened by a threshold to determine the final loop frame. A scoring sketch follows.
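A minimal sketch of the L1-norm bag-of-words similarity score assumed above; inputs are the BoW histograms of two frames, and the function name is illustrative:

import numpy as np

def bow_score(v1, v2):
    # L1 similarity between two normalized bag-of-words vectors:
    # 1 means identical word distributions, 0 means disjoint ones.
    v1 = v1 / np.abs(v1).sum()
    v2 = v2 / np.abs(v2).sum()
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()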
Loop correction is then performed: with the current frame, the loop frame and the camera pose relation between them determined, the current camera pose and the scene at the loop frame are fused, and global optimization eliminates the accumulated error to correct the trajectory. The co-visible keyframes of the current frame are updated, retaining for them both the locally optimized camera poses and the loop-corrected camera poses; all map point positions in the co-visibility graph are then updated with the corrected poses: map points are first projected back to the imaging plane with the old pose, then back-projected into 3D space under the new pose; map points under the current frame's co-visibility graph are replaced by the map features of the loop frame, and where no match exists new features are inserted, completing the matching. Finally, a global BA optimizes all map points and all keyframes; because of the large number of map features in the whole system, the pose graph is optimized only once, distributing the drift caused by accumulated loop error over the whole trajectory, and the loop thread finishes.
In summary, the visual synchronous positioning and map building method of this embodiment integrates two-dimensional straight-line features into the visual SLAM system, alleviating the tracking failures and accuracy loss that a feature-point-based SLAM system suffers in texture-poor scenes (such as indoor corridors and highways). Moreover, the integration of straight-line features addresses: 1) the splitting of intersecting straight-line features; 2) the splitting of continuous straight-line features in specific scenes (such as image blur); 3) the screening-out of invalid straight-line features such as overly short segments. On the premise of not affecting pose estimation accuracy, the relationship between the extraction and matching quantities of point and straight-line features and the pose solving time of the system is analysed, ultimately improving the real-time performance of the system.
Second embodiment
This embodiment provides a visual synchronous positioning and mapping device based on point-line features, comprising a processor and a memory; the memory stores at least one instruction that is loaded and executed by the processor to perform the following steps:
acquiring an environment image corresponding to the current environment to be positioned and the map construction environment;
preprocessing the environment image, and performing feature point extraction and linear feature extraction on the preprocessed environment image to acquire feature point information and linear feature information in the environment image;
matching the characteristic information of the adjacent frame environment images to obtain a matching result of the adjacent frame environment images;
and constructing a visual synchronous positioning and mapping system based on the matching result of the adjacent frame environment images so as to realize self positioning and global mapping of the corresponding equipment through the visual synchronous positioning and mapping system.
The visual synchronous positioning and mapping device based on point-line features of this embodiment corresponds to the visual synchronous positioning and mapping method based on point-line features of the first embodiment; the functions realized by its functional modules correspond one-to-one to the flow steps of that method, and are therefore not repeated here.
Third embodiment
This embodiment provides a computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the above method. The computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. The instructions stored therein may be loaded by a processor in the terminal to perform the following steps:
acquiring an environment image corresponding to the current environment to be positioned and the map construction environment;
preprocessing the environment image, and performing feature point extraction and straight line feature extraction on the preprocessed environment image to acquire feature point information and straight line feature information in the environment image;
matching the characteristic information of the adjacent frame environment images to obtain a matching result of the adjacent frame environment images;
and constructing a visual synchronous positioning and mapping system based on the matching result of the adjacent frame environment images so as to realize self positioning and global mapping of the corresponding equipment through the visual synchronous positioning and mapping system.
Furthermore, it should be noted that the present invention may be provided as a method, apparatus or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or terminal that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or terminal device that comprises the element.
Finally, it should be noted that while the above describes a preferred embodiment of the invention, it will be appreciated by those skilled in the art that, once having the benefit of the teaching of the present invention, numerous modifications and adaptations may be made without departing from the principles of the invention and are intended to be within the scope of the invention. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.

Claims (3)

1. A visual synchronous positioning and map building method based on point-line features, characterized by comprising the following steps:
acquiring an environment image corresponding to a current environment to be positioned and a map constructed environment;
preprocessing the environment image, and performing feature point extraction and linear feature extraction on the preprocessed environment image to acquire feature point information and linear feature information in the environment image;
matching the characteristic information of the environment image to be matched to obtain a matching result of the corresponding environment image;
based on the matching result of the corresponding environment image, a visual synchronous positioning and mapping system is constructed, so that the self-positioning and global mapping of the corresponding equipment are realized through the visual synchronous positioning and mapping system;
wherein extracting feature points from the preprocessed environment image comprises:
equally dividing the preprocessed environment image into a plurality of image sub-regions according to the image size;
extracting feature points from each divided image sub-region in turn based on the FAST algorithm to obtain the feature points corresponding to each image sub-region;
calculating a binary descriptor for each feature point based on the BRIEF algorithm;
establishing a root node covering the full environment image, uniformly dividing the root node into four child nodes, traversing all feature points in the environment image, and counting the number of feature points in each node region;
if the number of feature points in the node region corresponding to the current node is one, marking the current node as not to be further divided; if the number is zero, deleting the current node; if the number is greater than one, continuing to divide the current node until the total number of nodes reaches a set threshold or every node region contains exactly one feature point;
when node division stops, if any node region still contains more than one feature point, retaining only the feature point with the largest response value in that region and deleting the other feature points;
wherein extracting feature points from each divided image sub-region in turn based on the FAST algorithm comprises:
selecting candidate points from the preprocessed environment image;
taking each candidate point as the center of a circle of set neighborhood radius, and comparing in turn the gray value of every pixel on that circle with the gray value of the candidate point;
determining the candidate point as a feature point when the absolute gray-value differences between a preset number of consecutive pixels on the circle and the candidate point all exceed a preset gray threshold;
wherein comparing in turn the gray value of every pixel on the set neighborhood radius with the gray value of the candidate point comprises:
first comparing, in turn, the gray values of the pixels directly above, below, left of and right of the candidate point with the gray value of the candidate point to obtain the absolute gray-value differences between these four pixels and the candidate point; if three or more of the four absolute differences exceed the preset gray threshold, comparing in turn the gray values of the remaining pixels on the set neighborhood radius with that of the candidate point; otherwise, directly determining that the candidate point is not a feature point;
wherein calculating the binary descriptor for a feature point based on the BRIEF algorithm comprises:
determining a neighborhood corresponding to the current feature point and obtaining the gray-value centroid of the neighborhood;
establishing a plane coordinate system with the current feature point as the origin and the line connecting the current feature point to the gray-value centroid of the neighborhood as the X axis;
selecting point pairs within the neighborhood based on the established plane coordinate system;
performing a preset comparison operation on each selected point pair and concatenating the results of all point pairs to obtain the descriptor of the current feature point, wherein the operation result of each point pair is 0 or 1;
wherein performing line feature extraction on the preprocessed environment image comprises:
extracting line features from the preprocessed environment image using the LSD algorithm, and computing an LBD descriptor for each line feature using the LBD algorithm;
selecting a preset number of element pairs from the computed LBD descriptor;
comparing the magnitudes of the two elements in each selected element pair and concatenating the comparison results of all element pairs to obtain the descriptor of the current line feature, wherein the comparison result of each element pair is 0 or 1;
wherein performing line feature extraction on the preprocessed environment image further comprises:
establishing a polar coordinate system in the preprocessed environment image;
obtaining the polar-coordinate representation of each line feature from the polar coordinate system;
and clustering the line features represented in polar coordinates using the J-linkage algorithm, assigning the same label number to line features belonging to the same class, and then taking a weighted average of the polar coordinates of the line features sharing a label number to obtain a merged line feature, thereby restoring a single line feature that was split into segments.
2. The method of claim 1, wherein preprocessing the environment image comprises:
performing distortion correction on the environment image and aligning the left and right camera images according to a preset camera model and parameters.
3. A visual synchronous positioning and map building device based on point-line features, comprising a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to perform the steps of:
acquiring an environment image of the current environment in which positioning and map construction are to be performed;
preprocessing the environment image, and performing feature point extraction and line feature extraction on the preprocessed environment image to obtain feature point information and line feature information in the environment image;
matching the feature information of adjacent-frame environment images to obtain a matching result for the adjacent-frame environment images;
constructing a visual synchronous positioning and mapping system based on the matching result of the adjacent-frame environment images, so that self-positioning and global map building of the corresponding device are realized through the visual synchronous positioning and mapping system;
wherein extracting feature points from the preprocessed environment image comprises:
equally dividing the preprocessed environment image into a plurality of image sub-regions according to the image size;
extracting feature points from each divided image sub-region in turn based on the FAST algorithm to obtain the feature points corresponding to each image sub-region;
calculating a binary descriptor for each feature point based on the BRIEF algorithm;
establishing a root node covering the full environment image, uniformly dividing the root node into four child nodes, traversing all feature points in the environment image, and counting the number of feature points in each node region;
if the number of feature points in the node region corresponding to the current node is one, marking the current node as not to be further divided; if the number is zero, deleting the current node; if the number is greater than one, continuing to divide the current node until the total number of nodes reaches a set threshold or every node region contains exactly one feature point;
when node division stops, if any node region still contains more than one feature point, retaining only the feature point with the largest response value in that region and deleting the other feature points;
wherein extracting feature points from each divided image sub-region in turn based on the FAST algorithm comprises:
selecting candidate points from the preprocessed environment image;
taking each candidate point as the center of a circle of set neighborhood radius, and comparing in turn the gray value of every pixel on that circle with the gray value of the candidate point;
determining the candidate point as a feature point when the absolute gray-value differences between a preset number of consecutive pixels on the circle and the candidate point all exceed a preset gray threshold;
wherein comparing in turn the gray value of every pixel on the set neighborhood radius with the gray value of the candidate point comprises:
first comparing, in turn, the gray values of the pixels directly above, below, left of and right of the candidate point with the gray value of the candidate point to obtain the absolute gray-value differences between these four pixels and the candidate point; if three or more of the four absolute differences exceed the preset gray threshold, comparing in turn the gray values of the remaining pixels on the set neighborhood radius with that of the candidate point; otherwise, directly determining that the candidate point is not a feature point;
wherein calculating the binary descriptor for a feature point based on the BRIEF algorithm comprises:
determining a neighborhood corresponding to the current feature point and obtaining the gray-value centroid of the neighborhood;
establishing a plane coordinate system with the current feature point as the origin and the line connecting the current feature point to the gray-value centroid of the neighborhood as the X axis;
selecting point pairs within the neighborhood based on the established plane coordinate system;
performing a preset comparison operation on each selected point pair and concatenating the results of all point pairs to obtain the descriptor of the current feature point, wherein the operation result of each point pair is 0 or 1;
wherein performing line feature extraction on the preprocessed environment image comprises:
extracting line features from the preprocessed environment image using the LSD algorithm, and computing an LBD descriptor for each line feature using the LBD algorithm;
selecting a preset number of element pairs from the computed LBD descriptor;
comparing the magnitudes of the two elements in each selected element pair and concatenating the comparison results of all element pairs to obtain the descriptor of the current line feature, wherein the comparison result of each element pair is 0 or 1;
wherein performing line feature extraction on the preprocessed environment image further comprises:
establishing a polar coordinate system in the preprocessed environment image;
obtaining the polar-coordinate representation of each line feature from the polar coordinate system;
and clustering the line features represented in polar coordinates using the J-linkage algorithm, assigning the same label number to line features belonging to the same class, and then taking a weighted average of the polar coordinates of the line features sharing a label number to obtain a merged line feature, thereby restoring a single line feature that was split into segments.
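
Editor's note: the sketches below illustrate, in order, the FAST test, the quadtree homogenization, the oriented BRIEF descriptor, the binarized LBD descriptor, and the line-merging step recited in claims 1 and 3, plus the preprocessing of claim 2. They are minimal Python sketches under stated assumptions; none of the identifiers, parameter values, or file names are taken from the patent itself.

First, the FAST-style segment test with the four-direction shortcut: the pixels directly above, below, left of and right of the candidate are checked first, and the full ring is scanned only if at least three of them differ from the candidate by more than the gray threshold. The radius-3 ring of 16 pixels and the run length of 12 are common choices, assumed here rather than specified by the claims.

    import numpy as np

    # Offsets of the 16 pixels on a Bresenham circle of radius 3 around the candidate.
    CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
              (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

    def is_fast_corner(img, x, y, t, n=12):
        c = int(img[y, x])
        # Four-direction shortcut: up, down, left, right of the candidate point.
        quick = [img[y - 3, x], img[y + 3, x], img[y, x - 3], img[y, x + 3]]
        if sum(abs(int(p) - c) > t for p in quick) < 3:
            return False  # fewer than three large differences: reject immediately
        # Full test: n consecutive ring pixels must differ from c by more than t.
        diffs = [abs(int(img[y + dy, x + dx]) - c) > t for dx, dy in CIRCLE]
        diffs += diffs  # duplicate the ring so wrap-around runs are counted
        run = 0
        for d in diffs:
            run = run + 1 if d else 0
            if run >= n:
                return True
        return False

Note that, following the wording of the claims, the test uses absolute differences; the canonical FAST test additionally requires the consecutive pixels to be consistently brighter or consistently darker than the candidate.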
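Next, a sketch of the quadtree homogenization that spreads the retained feature points evenly over the image. Keypoints are assumed to be (x, y, response) tuples; "distribute_quadtree" and "max_nodes" are illustrative names, not terms from the patent.

    def distribute_quadtree(keypoints, width, height, max_nodes):
        # Divide the image region until each node holds one keypoint, or until
        # the node budget is reached; then keep the strongest point per node.
        nodes = [((0.0, 0.0, float(width), float(height)), keypoints)]  # root node
        done = []  # nodes marked as "not to be further divided"
        while nodes and len(nodes) + len(done) < max_nodes:
            (x0, y0, x1, y1), pts = nodes.pop(0)
            if len(pts) == 1:
                done.append(pts)  # exactly one point: stop dividing this node
                continue
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2  # divide into four child nodes
            for cx0, cy0, cx1, cy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                                       (x0, my, mx, y1), (mx, my, x1, y1)):
                child = [p for p in pts if cx0 <= p[0] < cx1 and cy0 <= p[1] < cy1]
                if child:  # empty child nodes are deleted outright
                    nodes.append(((cx0, cy0, cx1, cy1), child))
        # When division stops, keep only the highest-response point per region.
        groups = done + [pts for _, pts in nodes]
        return [max(pts, key=lambda p: p[2]) for pts in groups]

This mirrors the quadtree (octree-style) distribution popularized by ORB-SLAM: splitting stops when every node holds a single point or the node budget is reached, and only the strongest response survives in each region.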
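A sketch of the descriptor orientation step: the X axis of the sampling frame is fixed by the line from the feature point to the gray-value centroid of its neighborhood, and each preset point pair then contributes one bit. The 31x31 patch size and the pair test "first sample darker than second" are assumptions.

    import numpy as np

    def centroid_angle(patch):
        # Angle from the patch centre to the gray-value (intensity) centroid.
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        m00 = patch.sum()
        cx = (xs * patch).sum() / m00 - (w - 1) / 2.0
        cy = (ys * patch).sum() / m00 - (h - 1) / 2.0
        return np.arctan2(cy, cx)

    def oriented_brief(img, x, y, pairs, r=15):
        # Assumes the keypoint lies at least r plus the pair offsets from the border.
        patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
        theta = centroid_angle(patch)
        c, s = np.cos(theta), np.sin(theta)
        bits = []
        for (ax, ay), (bx, by) in pairs:  # pair offsets given in the rotated frame
            pa = img[int(round(y + s * ax + c * ay)), int(round(x + c * ax - s * ay))]
            pb = img[int(round(y + s * bx + c * by)), int(round(x + c * bx - s * by))]
            bits.append(1 if pa < pb else 0)
        return bits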
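The line descriptor step binarizes a real-valued LBD vector by comparing preset element pairs, one bit per pair. The 72-dimensional vector, the 256 pairs, and their random selection below are illustrative assumptions; the LSD and LBD algorithms themselves are available, for example, in OpenCV's contrib modules, but the binarization is written out directly here.

    import numpy as np

    def binarize_lbd(desc, pairs):
        # One bit per preset element pair: 1 if the first element is larger.
        return [1 if desc[i] > desc[j] else 0 for i, j in pairs]

    # Hypothetical usage with a 72-dimensional floating-point LBD vector.
    rng = np.random.default_rng(0)
    pairs = [tuple(rng.choice(72, size=2, replace=False)) for _ in range(256)]
    desc = rng.standard_normal(72)    # stand-in for a real LBD descriptor
    bits = binarize_lbd(desc, pairs)  # 256-bit binary line descriptor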
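Finally for the line pipeline, a sketch of the merging step. The J-linkage clustering itself is taken as given (each segment is assumed to already carry a label number); the sketch shows only the polar-coordinate conversion and the weighted average that fuses the segments of one label back into a single line. Weighting by segment length is an assumption, and the naive angle average ignores the plus-or-minus-pi wrap that a real implementation must handle.

    import numpy as np

    def to_polar(x1, y1, x2, y2):
        # (rho, theta) of the infinite line through the two segment endpoints,
        # with theta the direction of the line's normal.
        theta = np.arctan2(x2 - x1, y1 - y2)
        rho = x1 * np.cos(theta) + y1 * np.sin(theta)
        return rho, theta

    def merge_label_group(segments):
        # Weighted average of the polar parameters of same-label segments.
        total = rho_acc = th_acc = 0.0
        for x1, y1, x2, y2 in segments:
            w = np.hypot(x2 - x1, y2 - y1)  # longer segments weigh more (assumption)
            rho, theta = to_polar(x1, y1, x2, y2)
            rho_acc += w * rho
            th_acc += w * theta
            total += w
        return rho_acc / total, th_acc / total  # merged line in polar form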
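For the preprocessing of claim 2, a sketch using OpenCV's stereo rectification; all calibration values and file names below are placeholders, not parameters from the patent.

    import cv2
    import numpy as np

    # Placeholder calibration: identical pinhole cameras, right camera 12 cm to the side.
    K1 = K2 = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
    D1 = D2 = np.zeros(5)                        # lens distortion coefficients
    R, T = np.eye(3), np.array([-0.12, 0., 0.])  # rotation and translation between cameras
    size = (640, 480)

    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    m1a, m1b = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2a, m2b = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

    left = cv2.remap(cv2.imread("left.png"), m1a, m1b, cv2.INTER_LINEAR)    # placeholder files
    right = cv2.remap(cv2.imread("right.png"), m2a, m2b, cv2.INTER_LINEAR)
    # After remapping, lens distortion is removed and the two images are row-aligned,
    # so corresponding features lie on the same image row (epipolar alignment).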
CN202010739596.2A 2020-07-28 2020-07-28 Visual synchronous positioning and map building method and device based on point-line characteristics Active CN111899334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010739596.2A CN111899334B (en) 2020-07-28 2020-07-28 Visual synchronous positioning and map building method and device based on point-line characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010739596.2A CN111899334B (en) 2020-07-28 2020-07-28 Visual synchronous positioning and map building method and device based on point-line characteristics

Publications (2)

Publication Number Publication Date
CN111899334A CN111899334A (en) 2020-11-06
CN111899334B true CN111899334B (en) 2023-04-18

Family

ID=73182647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010739596.2A Active CN111899334B (en) 2020-07-28 2020-07-28 Visual synchronous positioning and map building method and device based on point-line characteristics

Country Status (1)

Country Link
CN (1) CN111899334B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528087B (en) * 2020-12-01 2023-06-20 南京邮电大学 Graph vertex parallel recoding method based on large synchronization model in network system
CN112988929A (en) * 2021-02-18 2021-06-18 同济大学 Map matching method, system, medium and terminal based on global cooperative voting
CN112967341B (en) * 2021-02-23 2023-04-25 湖北枫丹白露智慧标识科技有限公司 Indoor visual positioning method, system, equipment and storage medium based on live-action image
CN112862881B (en) * 2021-02-24 2023-02-07 清华大学 Road map construction and fusion method based on crowd-sourced multi-vehicle camera data
CN112883984B (en) * 2021-02-26 2022-12-30 山东大学 Mechanical arm grabbing system and method based on feature matching
CN113514067A (en) * 2021-06-24 2021-10-19 上海大学 Mobile robot positioning method based on point-line characteristics
CN115601420A * 2021-07-07 2023-01-13 北京字跳网络技术有限公司 (CN) Synchronous positioning and mapping initialization method, device and storage medium
CN113688816B (en) * 2021-07-21 2023-06-23 上海工程技术大学 Calculation method of visual odometer for improving ORB feature point extraction
CN113624232A (en) * 2021-07-23 2021-11-09 随州市日瀚通讯科技有限公司 Indoor positioning navigation system and method based on RF (radio frequency) communication
CN115700507B (en) * 2021-07-30 2024-02-13 北京小米移动软件有限公司 Map updating method and device
CN114216461A (en) * 2021-09-29 2022-03-22 杭州图灵视频科技有限公司 Panoramic camera-based indoor positioning method and system for mobile robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9037396B2 * 2013-05-23 2015-05-19 iRobot Corporation Simultaneous localization and mapping for a mobile robot

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909877A * 2016-12-13 2017-06-30 浙江大学 Simultaneous visual mapping and localization method based on combined point-line features
CN108682027A * 2018-05-11 2018-10-19 北京华捷艾米科技有限公司 VSLAM implementation method and system based on point and line feature fusion
CN110189390A * 2019-04-09 2019-08-30 南京航空航天大学 Monocular visual SLAM method and system
CN110060277A * 2019-04-30 2019-07-26 哈尔滨理工大学 Visual SLAM method with multi-feature fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A binocular simultaneous localization and mapping method fusing point-line features; Jiang Lin et al.; Science Technology and Engineering; 2020-04-28 (No. 12); full text *
Research on simultaneous localization and mapping method based on RGB-D sensors; Zhou Mengni et al.; Mechanical Engineer; 2020-03-10 (No. 03); Section 3.1 *
Global localization method for mobile robots based on Hough space model matching; Fang Fang et al.; Robot; 2005-01-28 (No. 01); full text *

Also Published As

Publication number Publication date
CN111899334A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN111899334B (en) Visual synchronous positioning and map building method and device based on point-line characteristics
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN108564616B (en) Fast robust RGB-D indoor three-dimensional scene reconstruction method
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN111795704A (en) Method and device for constructing visual point cloud map
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN109961506A Local scene three-dimensional reconstruction method fusing an improved Census transform
Lee et al. Place recognition using straight lines for vision-based SLAM
CN103646391A (en) Real-time camera tracking method for dynamically-changed scene
CN111145228A (en) Heterogeneous image registration method based on local contour point and shape feature fusion
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN111192194B (en) Panoramic image stitching method for curtain wall building facade
CN109711321B (en) Structure-adaptive wide baseline image view angle invariant linear feature matching method
US10460472B2 (en) System and method for model adaptation
CN112163588A (en) Intelligent evolution-based heterogeneous image target detection method, storage medium and equipment
Son et al. A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments
Li et al. Road extraction algorithm based on intrinsic image and vanishing point for unstructured road image
JP2020013560A (en) Information processing device, information processing method, and program
CN111709893A (en) ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
Kraft et al. Efficient RGB-D data processing for feature-based self-localization of mobile robots
CN111402429B (en) Scale reduction and three-dimensional reconstruction method, system, storage medium and equipment
Chen et al. Multi-stage matching approach for mobile platform visual imagery
CN110851978B (en) Camera position optimization method based on visibility
CN112364881A (en) Advanced sampling consistency image matching algorithm
CN110059651B (en) Real-time tracking and registering method for camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant