CN112541423A - Synchronous positioning and map construction method and system - Google Patents
Synchronous positioning and map construction method and system
- Publication number
- CN112541423A (application number CN202011427763.6A)
- Authority
- CN
- China
- Prior art keywords
- frame images
- adjacent frame
- matrix
- points
- pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/464—Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a synchronous positioning and map construction (SLAM) method and system. The method uses the feature point method to obtain a preliminary pose transformation relation between two frames, and then uses that relation as the initial value of the direct-method optimization. ORB features of adjacent frame images are detected and matched; an essential matrix is calculated according to the correspondence of feature points between the adjacent frame images; the pose change relation of the adjacent frame images is calculated from the essential matrix; the three-dimensional coordinates of pixel points on the first of the adjacent frame images are acquired, and the pixel points are re-projected using the pose change relation so that its photometric error can be optimized; and the relative motions between every two adjacent frame images are accumulated according to their pose change relations to obtain the positioning track. The method avoids the tendency of nonlinear optimization to fall into a local optimum, so the system remains stable in application scenes with insufficient feature points or unstable illumination.
Description
Technical Field
The invention relates to the field of unmanned aerial vehicle positioning and map construction, in particular to a simultaneous localization and mapping (SLAM) method and system.
Background
Synchronous positioning and mapping in an unknown environment is a basic capability that an intelligent mobile device needs in order to move autonomously: only once positioning is available can navigation planning be performed and the low-level power system be driven to walk in a human-like way, so that other tasks can then be carried out. Conventional visual SLAM is either based on feature points or directly compares pixel differences between adjacent frames.
The feature point method requires a feature extractor with strong repeatability, spends considerable time on keypoint extraction, descriptor computation and matching, and depends on correct feature matches to calculate the camera motion correctly. In environments lacking texture, feature points are easily lost, causing the algorithm to fail.
The direct method compares pixel differences between adjacent frames directly, without describing the pixels themselves, so the influence of lighting on pixel gray values must be removed. Its theoretical basis is the gray-scale consistency assumption: adjacent frames are captured under the same lighting conditions while the camera moves, so the gray value of a pixel corresponding to the same spatial point remains fixed in every image. This is a strong assumption; the environmental conditions required to satisfy it are strict and difficult to meet during the actual motion of a mobile robot. Moreover, because objects are made of different materials, highlights and shadows appear in the pixel plane, and when the camera automatically adjusts its exposure parameters, the overall brightness of the image changes, at which point the constant-gray-scale assumption no longer holds.
Therefore, the conventional visual SLAM method has the following disadvantages:
1. The utilization rate of feature point information is low, which is not enough for stable image matching and tracking.
2. Feature point based SLAM algorithms require rich point features in the environment.
3. The SLAM algorithm based on the direct method relies on the gray-scale invariance assumption, which is a strong assumption, so it is easily affected by illumination change.
Disclosure of Invention
The invention aims, in view of the existing problems, to provide a synchronous positioning and map building method that avoids serious feature loss under rapid motion, reduces the dependence on the gray-scale invariance assumption, and achieves better stability in scenes with insufficient texture.
The technical scheme adopted by the invention is as follows:
a synchronous positioning and mapping method comprises the following steps:
detecting ORB features of adjacent frame images, and matching the ORB features of the adjacent frame images; calculating an essential matrix according to the correspondence of feature points between the adjacent frame images; calculating the pose change relation of the adjacent frame images according to the essential matrix; acquiring three-dimensional coordinates of pixel points on the first frame image of the adjacent frame images, and re-projecting the pixel points on the image plane corresponding to the first frame image onto the image plane corresponding to the second frame image of the adjacent frame images by using the pose change relation; optimizing the photometric error of the pose change relation; and accumulating the relative quantities between every two adjacent frame images according to the pose change relations between every two adjacent frame images to obtain the positioning track.
The method links feature point detection and matching with the pixel gradient information between adjacent frame images, which makes it convenient to extract and match information from adjacent key frames, keeps the photometric error simple to calculate, and results in low dependence on feature points, little frame loss, and low dependence on gray-scale invariance.
Further, the calculating the essential matrix according to the correspondence of the feature points between the adjacent frame images includes: calculating essential matrices between a plurality of groups of adjacent frame images according to the correspondence of the feature points between the adjacent frame images, and determining the essential matrix to be used from the plurality of groups of essential matrices.
Further, the essential matrix between a plurality of groups of adjacent frame images is calculated according to the corresponding relation of the characteristic points between the adjacent frame images by adopting a five-point algorithm.
Further, the calculating essential matrices between a plurality of groups of adjacent frame images according to the correspondence of the feature points between the adjacent frame images, and determining the essential matrix to be used from the plurality of groups of essential matrices, includes: performing a predetermined number of iterations with the RANSAC algorithm, randomly sampling five points from the set of correspondences in each iteration, calculating the corresponding essential matrix, and checking whether the pixel points that were not sampled are inliers; and selecting the essential matrix with the largest number of checked inliers among the iterations as the essential matrix to be used.
Further, the method further comprises: in the process of matching the ORB characteristics of the adjacent frame images, if the total number or the proportion of the matched characteristic points is lower than a first threshold value, the ORB characteristics of the adjacent frame images are detected again.
Further, the matching the ORB features between the adjacent frame images includes:
calculating the gradient of the image function f(x, y) at the pixel point (x, y), which is the vector
G(x, y) = [Gx, Gy]ᵀ;
the gradient direction is:
Φ(x, y) = tan⁻¹(Gx/Gy)
the gradient magnitude is:
|G(x, y)| = √(Gx² + Gy²)
wherein Gx and Gy represent the gradients in the x-direction and y-direction, respectively;
and when the deviation of the gradient direction and the gradient magnitude is within a second threshold, the corresponding pixel points are judged to be successfully matched. Computing the pixel gradients of pixel blocks makes fuller use of the pixel information of the image, reduces the dependence on the gray-scale invariance hypothesis and the influence of illumination change, reduces the dependence on feature points in the environment, and effectively reduces the frame loss rate of the visual front end in feature-poor environments.
Further, when the pixel point on the image plane corresponding to the first frame image is re-projected onto the image plane corresponding to the second frame image in the adjacent frame image, if the ORB feature matching fails, the pose change relationship is initialized to the identity matrix.
Further, the optimizing the photometric error of the pose change relation includes minimizing
T_{k,k-1} = arg min_T (1/2) Σ_i ‖δI(T_{k,k-1}, u_i)‖²
wherein T_{k,k-1} is the pose change relation, δI(T_{k,k-1}, u_i) is the luminance difference after pixel projection, u is the pixel coordinate, and I is the gray value.
Further, after obtaining the positioning track, the synchronous positioning and mapping method further comprises: performing loop detection with a bag-of-words model; adding each group of newly input frame images into the factor graph model, adding the corresponding pose constraint relations and setting initialization values, so that a pose information matrix is formed as each group of frame images is added; and performing linearization on the current pose information matrix, decomposing the pose information matrix by QR decomposition, and removing the relationships between factor nodes with the sparsification processing of the factor graph, so as to obtain updated pose information.
Furthermore, before detecting the ORB features of the adjacent frame images, the adjacent frame images are subjected to distortion removal processing.
In order to solve all or part of the above problems, the present invention further provides a system for synchronous positioning and mapping, comprising: the system comprises a feature detection and tracking unit, an essential matrix estimation unit, a pose change relation calculation unit, a re-projection unit and a mapping unit, wherein:
the feature detection and evaluation unit is configured to: detecting ORB characteristics of adjacent frame images, and matching the ORB characteristics of the adjacent frame images;
the intrinsic matrix estimation unit is configured to: calculating an essential matrix according to the corresponding relation of the characteristic points between the adjacent frame images;
the pose change relationship calculation unit is configured to: calculating the pose change relation of the adjacent frame images according to the essence matrix;
the reprojection unit is configured to: according to the three-dimensional coordinates of the pixel points on the first frame image in the adjacent frame images, utilizing the pose change relation calculated by the pose change relation calculation unit to re-project the pixel points on the image plane corresponding to the first frame image onto the image plane corresponding to the second frame image in the adjacent frame images, and optimizing the luminosity error of the pose change relation;
the mapping unit is configured to: and accumulating the relative quantity between every two adjacent frame images according to the pose change relation between every two adjacent frame images to obtain the positioning track.
Further, the essential matrix estimation unit calculates essential matrices between a plurality of groups of adjacent frame images according to the correspondence of feature points between the adjacent frame images, and determines the essential matrix to be used from the plurality of groups of essential matrices.
Further, the essential matrix estimation unit is configured to calculate the essential matrices between the sets of adjacent frame images using a five-point algorithm.
Further, the essential matrix estimation unit performs a predetermined number of iterations with the RANSAC algorithm, randomly sampling five points from the set of correspondences in each iteration, calculating the corresponding essential matrix, and checking whether the pixel points that were not sampled are inliers; the essential matrix with the largest number of checked inliers among the iterations is selected as the essential matrix to be used.
Further, the feature detection and tracking unit comprises a feature re-detection module configured to: in the process of matching the ORB characteristics of the adjacent frame images, if the total number or the proportion of the matched characteristic points is lower than a first threshold value, the ORB characteristics of the adjacent frame images are detected again.
Further, the feature detection and tracking unit comprises a feature matching module configured to: matching ORB characteristics of adjacent frame images, comprising:
calculating the gradient of the image function f(x, y) at the pixel point (x, y), which is the vector
G(x, y) = [Gx, Gy]ᵀ;
the gradient direction is:
Φ(x, y) = tan⁻¹(Gx/Gy)
the gradient magnitude is:
|G(x, y)| = √(Gx² + Gy²)
wherein Gx and Gy represent the gradients in the x-direction and y-direction, respectively;
and when the deviation of the gradient direction and the gradient amplitude is within a second threshold value, judging that the corresponding pixel points are successfully matched.
Further, when the reprojection unit reprojects the pixel points on the image plane corresponding to the first frame image onto the image plane corresponding to the second frame image in the adjacent frame image, if the ORB feature matching fails, the pose change relationship is initialized to the identity matrix.
Further, the reprojection unit includes an error optimization module configured to optimize the photometric error of the pose change relation, i.e. to minimize
T_{k,k-1} = arg min_T (1/2) Σ_i ‖δI(T_{k,k-1}, u_i)‖²
wherein T_{k,k-1} is the pose change relation, δI(T_{k,k-1}, u_i) is the luminance difference after pixel projection, u is the pixel coordinate, and I is the gray value.
Further, the synchronized positioning and mapping system further comprises a loop detection and optimization unit configured to: perform loop detection with a bag-of-words model; add each group of newly input frame images into the factor graph model, adding the corresponding pose constraint relations and setting initialization values so that a pose information matrix is formed as each group of frame images is added; and perform linearization on the current pose information matrix, decompose the pose information matrix by QR decomposition, and remove the relationships between factor nodes with the sparsification processing of the factor graph, so as to obtain updated pose information.
Furthermore, the synchronous positioning and mapping system also comprises a preprocessing unit, wherein the output end of the preprocessing unit is connected with the input end of the feature detection and tracking unit; the pre-processing unit is configured to: carry out distortion removal processing on adjacent frame images.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. The invention combines the characteristics of the feature point method and the direct method: the pose transformation relation between two frames is first obtained with the feature point method and then used as the initial value of the direct-method optimization, which solves the problem that nonlinear optimization easily falls into a local optimum. The system therefore has better stability in application scenes with insufficient feature points or unstable illumination.
2. The invention links feature point detection and matching with pixel gradient information between adjacent frame images, so that no serious feature loss exists under the condition of rapid motion.
3. The invention effectively reduces the frame loss rate of the visual front end in the under-point characteristic environment and improves the stability of the positioning algorithm.
4. By computing pixel gradients over pixel blocks, the method and device make fuller use of the pixel information of the image, reduce the dependence on the gray-scale invariance assumption, and reduce the influence of illumination change.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram of the SLAM algorithm framework.
Fig. 2 is a calculation flow chart of a synchronous positioning and mapping method.
Fig. 3 is a feature point reprojection diagram.
FIG. 4 is a block diagram of a synchronized positioning and mapping system.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
Any feature disclosed in this specification (including any accompanying claims, abstract) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
Example one
The overall framework used by the visual sensors, whether monocular or binocular cameras, in SLAM is roughly the same, but the front-end data processing scheme is designed differently due to the different data acquisition schemes. As shown in fig. 1, the SLAM algorithm framework is roughly divided into five modules, namely, sensor data processing, front-end visual odometry, back-end filtering and optimization, loop detection and graph building. The visual odometer is an important module for performing initial attitude estimation after sensor data is input, and is used for estimating the motion pose change between adjacent images and restoring local environment information.
To address the problem that the feature point utilization rate is too low to support stable image matching and tracking in dynamic scenes, and referring to fig. 2, this embodiment discloses a synchronous positioning and mapping (SLAM) method, including:
A. and detecting ORB characteristics of the adjacent frame images, and matching the ORB characteristics of the adjacent frame images.
The adjacent frame images are two consecutive frames in the sequence: the earlier one is the first frame image and the later one is the second frame image, so "first" and "second" here only indicate order, not specific frames. The ORB features of an image are detected with the FAST feature point detection algorithm, and ORB feature detection consists of two steps: FAST corner detection, in which the main direction of each corner is calculated, a Gaussian pyramid is constructed, and corners are detected on every pyramid level; and BRIEF descriptor computation, which describes the area around each extracted corner. BRIEF is a binary descriptor whose description vector consists of 0s and 1s, where each bit encodes the magnitude relation of two randomly chosen pixel values (e.g. p and q) near the keypoint: if p is greater than q the bit is 1, otherwise 0. In some embodiments, 128 pairs of points p and q are randomly selected around the FAST feature point, giving a 128-dimensional vector of 0s and 1s that jointly describes the ORB feature point. The direction of the FAST feature point is also calculated, so that the BRIEF descriptor gains rotation invariance and the robustness of feature detection is improved.
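As an illustrative aid (not part of the claimed method), the ORB detection and matching step described above can be sketched with OpenCV as follows; the feature count and matcher settings are assumptions chosen only for the example.

```python
# A minimal sketch of ORB (FAST corners + rotated BRIEF) detection and matching
# between two adjacent gray-scale frames, using OpenCV.
import cv2

def detect_and_match_orb(img_prev, img_curr, nfeatures=500):
    orb = cv2.ORB_create(nfeatures=nfeatures)          # FAST corners + BRIEF descriptors
    kp1, des1 = orb.detectAndCompute(img_prev, None)   # keypoints + binary descriptors
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    # Hamming distance is the natural metric for binary BRIEF descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```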
For feature tracking, this embodiment adopts the gradient of a pixel block as the tracked feature, which makes fuller use of the pixel information of the image, reduces the dependence on feature points in the environment, reduces the dependence on the gray-scale invariance assumption, and reduces the influence of illumination change.
Defining (x, y) as a point of the image function f(x, y), the gradient of f(x, y) at (x, y) is a vector with both magnitude and direction; this embodiment therefore uses the gradient as the tracked information after keypoint matching. Let Gx and Gy denote the gradients in the x-direction and y-direction, respectively; the gradient vector can be expressed as
G(x, y) = [Gx, Gy]ᵀ
the gradient direction is:
Φ(x, y) = tan⁻¹(Gx/Gy)
the gradient magnitude is:
|G(x, y)| = √(Gx² + Gy²)
On the basis of successfully detected and matched FAST corner points, the gradient direction and gradient magnitude of the pixels are calculated with the formulas above, and pixels whose deviations stay within a certain range (for example 30%) are regarded as successfully tracked.
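The gradient-based tracking check can be sketched as follows, again only as an illustration: Sobel filters supply Gx and Gy, from which the direction Φ and magnitude of each matched pixel are compared between the two frames. The 30% deviation figure follows the example given above; the relative-deviation formulation and helper names are assumptions.

```python
# Sketch of the pixel-gradient consistency check between matched pixels.
import cv2
import numpy as np

def gradient_maps(img):
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # Gx
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # Gy
    return np.arctan2(gy, gx), np.hypot(gx, gy)      # direction Φ and magnitude

def gradient_consistent(maps1, maps2, pt1, pt2, max_dev=0.30):
    (phi1, mag1), (phi2, mag2) = maps1, maps2
    (x1, y1), (x2, y2) = pt1, pt2
    dev_phi = abs(phi1[y1, x1] - phi2[y2, x2]) / (abs(phi1[y1, x1]) + 1e-9)
    dev_mag = abs(mag1[y1, x1] - mag2[y2, x2]) / (mag1[y1, x1] + 1e-9)
    return dev_phi <= max_dev and dev_mag <= max_dev  # both deviations within 30%
```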
Further, during pixel gradient tracking, because the field of view changes at every moment (for example, some environmental information moves out of the lens field of view), some points are eventually lost; therefore, when the total number of feature points detected in an image falls below a certain threshold (for example, 300), re-detection of the image features is triggered.
B. And calculating the essential matrix according to the corresponding relation of the characteristic points between the adjacent frame images.
In this embodiment, essential matrices between a plurality of groups of adjacent frame images are calculated according to the correspondence of feature points between the adjacent frame images, and the essential matrix to be used is determined from these groups. In some embodiments, a five-point algorithm is used to compute the essential matrix between the sets of adjacent frame images. The calculation with the five-point algorithm proceeds as follows: a predetermined number of RANSAC iterations is performed, and in each iteration five points are randomly sampled from the set of correspondences and the corresponding essential matrix is calculated; it is then checked whether the pixel points that were not sampled are inliers, and the essential matrix with the largest number of checked inliers among the iterations is selected as the one to be used. The reason several groups of essential matrices must be calculated first is that not all correspondences are perfect during pixel matching and tracking: feature tracking is imperfect and inevitably produces some wrong correspondences, which are outliers. Computing several essential matrices with random sample consensus and then determining the one finally used allows these outliers to be eliminated.
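A minimal sketch of this step, assuming OpenCV is used: cv2.findEssentialMat runs Nistér's five-point algorithm inside a RANSAC loop, which corresponds to the procedure described here. The probability and threshold values are illustrative assumptions.

```python
# Sketch of RANSAC-based essential matrix estimation from matched feature points.
import cv2

def estimate_essential(pts_prev, pts_curr, K):
    """pts_prev/pts_curr: Nx2 arrays of matched pixel coordinates, K: 3x3 intrinsics."""
    E, inlier_mask = cv2.findEssentialMat(
        pts_prev, pts_curr, K,
        method=cv2.RANSAC, prob=0.999, threshold=1.0)
    return E, inlier_mask   # mask marks the correspondences kept as inliers
```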
C. Calculating the pose change relation of the adjacent frame images according to the essential matrix.
For the essential matrix E, the pose change relation [R|t] of the second image can be calculated through SVD decomposition.
E = UΣVᵀ
R = UW⁻¹Vᵀ
t^ = VWΣVᵀ
wherein t^ is the antisymmetric (skew-symmetric) matrix of the vector t and W is a predefined constant matrix of the decomposition.
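A sketch of recovering [R|t] from E, assuming OpenCV: cv2.recoverPose performs the SVD decomposition and the cheirality check internally, standing in for the manual factorisation written above; the inputs are assumed to be the inlier correspondences from the previous step.

```python
# Sketch of decomposing the essential matrix into rotation and translation.
import cv2

def recover_pose(E, pts_prev, pts_curr, K, inlier_mask=None):
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inlier_mask)
    return R, t   # rotation and (unit-scale) translation of the second frame
```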
D. acquiring three-dimensional coordinates of pixel points on a first frame image in adjacent frame images, and re-projecting pixel points on an image plane corresponding to the first frame image onto an image plane corresponding to a second frame image in the adjacent frame images by using the pose change relationship; and optimizing the photometric error of the pose change relation.
Steps A-C obtain the pose change relation [R|t] between adjacent frames, which is taken as the initial value for the subsequent processing: T_{k,k-1} = [R|t]. If the feature tracking above fails, T_{k,k-1} is initialized to the identity matrix. The pose transformation relation between the two frames is thus first solved with the feature point method and then used as the initial value of the direct-method optimization, which avoids the problem that nonlinear optimization easily falls into a local optimum.
As shown in FIG. 3, for a pixel at position (u, v) with depth d in the first pixel plane I_{k-1}, the pose change relation T_{k,k-1} gives the three-dimensional coordinate p_k of the point in the current frame; projection through the camera parameters onto the second pixel plane I_k at (u', v') completes the re-projection. Assuming the brightness value of a feature point stays unchanged between adjacent frames over a short time, the gradient photometric error of the pixels around the three-dimensional point is obtained, and the pose is continuously optimized to minimize the residual:
T_{k,k-1} = arg min_T (1/2) Σ_i ‖δI(T_{k,k-1}, u_i)‖²
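The photometric refinement can be illustrated with the following sketch, which only evaluates the residual being minimised; an actual implementation would iterate (e.g. with Gauss-Newton) over the pose. The projection model and variable names are assumptions for illustration.

```python
# Sketch of the photometric residual: back-project pixels with known depth from
# frame k-1, transform them with the candidate pose, project into frame k, and
# accumulate the squared gray-value differences.
import numpy as np

def photometric_residual(I_prev, I_curr, pixels, depths, R, t, K):
    """pixels: Nx2 (u, v) in frame k-1, depths: N, R/t: pose of frame k w.r.t. k-1."""
    K_inv = np.linalg.inv(K)
    res = 0.0
    for (u, v), d in zip(pixels, depths):
        p = d * (K_inv @ np.array([u, v, 1.0]))   # back-project to 3D in frame k-1
        q = K @ (R @ p + t)                       # transform and project into frame k
        u2, v2 = q[0] / q[2], q[1] / q[2]
        if 0 <= int(v2) < I_curr.shape[0] and 0 <= int(u2) < I_curr.shape[1]:
            diff = float(I_prev[int(v), int(u)]) - float(I_curr[int(v2), int(u2)])
            res += 0.5 * diff * diff              # ½·Σ‖δI‖² from the formula above
    return res
```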
E. and accumulating the relative quantity between every two adjacent frame images according to the pose change relation between every two adjacent frame images to obtain the positioning track.
The preceding steps yield the relative position relation between the two frames, i.e. the rotation R_{k,k-1} and translation t_{k,k-1}. Because the camera matrix of the first image is taken as the reference (assumed to be at zero pose), we can obtain the positioning track by accumulating the relative quantities calculated each time:
R_k = R_{k-1} R_{k,k-1}
t_k = t_{k-1} + R_{k-1} t_{k,k-1}
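A sketch of the trajectory accumulation, assuming the composition convention T_k = T_{k-1}·T_{k,k-1} used in the formulas above; the helper name is illustrative.

```python
# Sketch of chaining the per-pair relative motions into a global trajectory.
import numpy as np

def accumulate_trajectory(relative_poses):
    """relative_poses: list of (R_{k,k-1}, t_{k,k-1}) pairs between adjacent frames."""
    R_k = np.eye(3)
    t_k = np.zeros(3)
    trajectory = [t_k.copy()]
    for R_rel, t_rel in relative_poses:
        t_k = t_k + R_k @ t_rel      # translation accumulated in the world frame
        R_k = R_k @ R_rel            # rotation accumulated by composition
        trajectory.append(t_k.copy())
    return trajectory
```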
Newly acquired data at a previously visited position are associated with the earlier and later historical data, and a constraint relation between poses is established through this association; optimizing over these pose constraints further improves the positioning accuracy and eliminates the large accumulated error of long-time running. Preferably, the method further comprises: performing loop detection with a bag-of-words model; adding each group of newly input frame images into the factor graph model, adding the corresponding pose constraint relations and setting initialization values, so that a pose information matrix is formed as each group of frame images is added; and performing linearization on the current pose information matrix, decomposing the pose information matrix by QR decomposition, and removing the relationships between factor nodes with the sparsification processing of the factor graph, so as to obtain updated pose information.
The method differs from the traditional feature point method and the traditional direct method: it can be regarded as half feature-point method and half direct method, creatively integrating the advantages of both in the overall design, and is therefore different from any purely single algorithm.
Example two
Referring to fig. 2, the present embodiment discloses a synchronous positioning and map building (SLAM) method whose design idea is as follows: after the real-time video stream enters the system, distortion removal is performed on each pair of adjacent frames, initial corner detection is performed with the FAST feature detection algorithm, features are purified with the RANSAC algorithm to remove noise points, and further registration is then performed according to the pixel gradients computed around the feature points so that the photometric error is minimized; a rough estimate of the current pose is obtained by iterative descent with the Gauss-Newton method. The front end uses the sensor data to perform data association and closed-loop detection and complete construction of the graph model; after the front-end visual odometry organizes the sensor data into the graph model, the back end optimizes the graph model using graph theory.
The synchronous positioning and map building method of the embodiment comprises the following steps:
and preprocessing adjacent frame images in the video stream. The purpose of the preprocessing is to remove the influence of the frame image on feature detection. Generally, the preprocessing step includes a distortion removal process, a noise removal process, and the like.
1) FAST feature points are extracted from the preprocessed frame images. Detection of the ORB features of a frame image consists of two steps: FAST corner detection, in which the main direction of each corner is calculated, a Gaussian pyramid is constructed, and corners are detected on every pyramid level; and BRIEF descriptor computation, which describes the area around each extracted corner. BRIEF is a binary descriptor whose description vector consists of 0s and 1s, where each bit encodes the magnitude relation of two randomly chosen pixel values (e.g. p and q) near the keypoint: if p is greater than q the bit is 1, otherwise 0. 128 pairs of points p and q are randomly selected around the FAST feature point, giving a 128-dimensional vector of 0s and 1s that jointly describes the ORB feature point. The direction of the FAST feature point is also calculated, so that the BRIEF descriptor gains rotation invariance and the robustness of feature detection is improved.
2) Feature tracking is performed on adjacent frame images. The gradient of the image function f(x, y) at a point (x, y) is a vector with both magnitude and direction and is used as the tracked information after keypoint matching, which is convenient for fast per-pixel calculation. Let Gx and Gy denote the gradients in the x-direction and y-direction, respectively; the gradient vector can be expressed as
G(x, y) = [Gx, Gy]ᵀ
the gradient direction is:
Φ(x, y) = tan⁻¹(Gx/Gy)
the gradient magnitude is:
|G(x, y)| = √(Gx² + Gy²)
On the basis of successfully detected and matched FAST corner points, the gradient direction and gradient magnitude of the pixels are calculated with the formulas above, and pixels whose deviations stay within a certain threshold range (e.g. 30%) are regarded as successfully tracked. This makes fuller use of the pixel information of the image, reduces the dependence on the gray-scale invariance hypothesis and the influence of illumination change, reduces the dependence on feature points in the environment, and effectively reduces the frame loss rate of the visual front end in feature-poor environments.
3) And (5) feature re-detection. When feature point tracking is performed, the system will trigger re-detection as long as the total number of features tracked is below a certain threshold, e.g. 300, since the field of view changes differently at each moment, e.g. some environmental information is shifted too far out of the lens field of view, and eventually some points will be lost.
4) Essential matrix estimation. The essential matrix is solved with a five-point algorithm. Given the camera matrix P = K[R|t], where K is the camera intrinsic parameter matrix, let x be a point on the image and X the corresponding world coordinate, so that x = PX. Let x̂ = K⁻¹x, i.e. x̂ is the normalized image coordinate. For corresponding points of the two frame images, the normalized image coordinates x̂ and x̂' and the essential matrix E satisfy x̂'ᵀ E x̂ = 0. Selecting several such points and solving the resulting system of nonlinear equations yields the essential matrix E. Because the essential matrix has only five degrees of freedom, the nonlinear problem is solved with a five-point algorithm, which requires the minimum number of points.
5) Noise points are eliminated. With the essential matrix, motion can be estimated accurately from only five feature correspondences between two consecutive frame images. However, in the matching and tracking of pixel points not all correspondences are perfect; the feature tracking algorithm inevitably produces some wrong correspondences, and such outliers must be eliminated when performing motion estimation. In one embodiment, the RANSAC (random sample consensus) algorithm is used for noise rejection. In each iteration, five points are randomly sampled from the set of correspondences and an essential matrix is estimated, and it is then checked whether the other points are inliers under this essential matrix. After a predetermined number of iterations, the essential matrix with the maximum number of inliers is selected.
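The inlier check performed inside each RANSAC iteration can be sketched as follows: a candidate essential matrix is tested against the remaining correspondences through the epipolar constraint on normalized coordinates. The residual threshold is an illustrative assumption.

```python
# Sketch of counting inliers for a candidate essential matrix E via the
# epipolar constraint x̂'ᵀ E x̂ ≈ 0 on normalized image coordinates.
import numpy as np

def count_inliers(E, pts_prev, pts_curr, K, threshold=1e-3):
    K_inv = np.linalg.inv(K)
    inliers = 0
    for (u1, v1), (u2, v2) in zip(pts_prev, pts_curr):
        x1 = K_inv @ np.array([u1, v1, 1.0])   # normalized coordinates, frame k-1
        x2 = K_inv @ np.array([u2, v2, 1.0])   # normalized coordinates, frame k
        if abs(x2 @ E @ x1) < threshold:       # epipolar constraint residual
            inliers += 1
    return inliers
```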
6) The rotation and translation information is solved from the essential matrix. For the essential matrix E, the camera extrinsic matrix [R|t] of the second frame image can be calculated by SVD decomposition.
E = UΣVᵀ
R = UW⁻¹Vᵀ
t^ = VWΣVᵀ
7) reproject and optimize photometric errors.
Steps 1) to 6) yield the pose relation [R|t] between the adjacent frame images, which is used as the initial value for the direct method: T_{k,k-1} = [R|t]. If the feature-method tracking fails, T_{k,k-1} is initialized to the identity matrix. Knowing that a point (three-dimensional point) has pixel position (u, v) and depth d in the image plane I_{k-1}, the pose T_{k,k-1} gives the three-dimensional coordinate p_k of the point in the current frame, which is projected through the camera intrinsics into the image plane I_k at (u', v'), completing the re-projection as shown in fig. 3. Assuming the brightness value of a feature point stays unchanged between adjacent frame images over a short time, the gradient photometric error of the pixels around the three-dimensional point is obtained, and the pose is continuously optimized to minimize the residual:
T_{k,k-1} = arg min_T (1/2) Σ_i ‖δI(T_{k,k-1}, u_i)‖²
8) The preceding steps give the relative position relation of the two frame images, namely the rotation relation R_{k,k-1} and the translation relation t_{k,k-1}. Since the camera matrix corresponding to the second frame image follows from the camera matrix of the first frame image, the positioning track is obtained by accumulating the relative quantities calculated each time:
R_k = R_{k-1} R_{k,k-1}
t_k = t_{k-1} + R_{k-1} t_{k,k-1}
9) Loop detection is performed with a bag-of-words model. Features are trained into a bag of words, which is then used for comparison, improving the efficiency of loop detection. Newly acquired data at a previously visited position are associated with the earlier and later historical data, and a constraint relation between poses is established through this association; optimizing over these pose constraints further improves the positioning accuracy and eliminates the large accumulated error of long-time running.
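A minimal sketch of bag-of-words loop detection: ORB descriptors are quantised against a visual vocabulary (approximated here by pre-trained cluster centres) and frames are compared through the similarity of their word histograms. The vocabulary size and similarity threshold are illustrative assumptions; dedicated libraries such as DBoW are normally used in practice.

```python
# Sketch of bag-of-words frame description and loop-candidate test.
import numpy as np

def bow_vector(descriptors, vocabulary):
    """descriptors: Nx32 uint8 ORB descriptors, vocabulary: Kx32 word centres."""
    d = descriptors.astype(np.float32)
    dists = np.linalg.norm(d[:, None, :] - vocabulary[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)                      # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)          # normalized word histogram

def is_loop_candidate(bow_curr, bow_past, threshold=0.8):
    return float(bow_curr @ bow_past) >= threshold        # cosine similarity of unit vectors
```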
10) A factor graph model is created and optimization is performed. Each group of obtained observation values is added to the factor graph model, a constraint relation between the new observation value and the historical information in the factor graph is added and an initialization value is set; linearization is then performed on the currently existing pose information matrix, the information matrix is decomposed by QR decomposition, and finally the relationships between factor nodes are removed with the factor graph sparsification processing to obtain updated pose information.
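A sketch of the factor-graph optimisation, assuming the GTSAM Python bindings (gtsam 4.x) are available. Odometry and loop-closure constraints are added as between-factors and optimised in batch; the 2D Pose2 simplification, integer keys and noise values are illustrative assumptions rather than part of the described method, which processes the graph incrementally with QR decomposition.

```python
# Sketch of pose-graph optimisation with odometry and one loop-closure constraint.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))  # x, y, theta

# Anchor the first pose, then chain the relative poses estimated by the front end.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), noise))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), noise))
# A loop-closure constraint found by bag-of-words detection.
graph.add(gtsam.BetweenFactorPose2(2, 0, gtsam.Pose2(-2.0, 0.0, 0.0), noise))

initial = gtsam.Values()
for k, pose in enumerate([gtsam.Pose2(0, 0, 0), gtsam.Pose2(1, 0, 0), gtsam.Pose2(2, 0, 0)]):
    initial.insert(k, pose)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()  # updated poses
```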
EXAMPLE III
The embodiment discloses a synchronous positioning and mapping system, as shown in fig. 4, which includes a preprocessing unit, a feature detection and tracking unit, an essential matrix estimation unit, a pose change relation calculation unit, a reprojection unit, and a mapping unit, wherein:
the pre-processing unit is configured to: and carrying out distortion removal processing on adjacent frame images. The output end of the preprocessing unit is connected with the input end of the characteristic detection and tracking unit, and the original file processed by the characteristic detection and tracking unit is the result of the distortion removal of the preprocessing unit.
The feature detection and tracking unit is configured to: and detecting ORB characteristics of the adjacent frame images, and matching the ORB characteristics of the adjacent frame images.
The feature detection and tracking unit includes a feature detection module and a feature matching module. The feature detection module is configured to detect ORB features of the frame images; because the subsequent relation calculation involves adjacent frames, the feature detection module detects the ORB features of adjacent frame images. ORB feature detection has two steps: FAST corner detection, in which the main direction of each corner is calculated, a Gaussian pyramid is constructed, and corners are detected on every pyramid level; and BRIEF descriptor computation, which describes the area around each extracted corner. BRIEF is a binary descriptor whose description vector consists of 0s and 1s, where each bit encodes the magnitude relation of two randomly chosen pixel values (e.g. p and q) near the keypoint: if p is greater than q the bit is 1, otherwise 0. In some embodiments, 128 pairs of points p and q are randomly selected around the FAST feature point, giving a 128-dimensional vector of 0s and 1s that jointly describes the ORB feature point. The direction of the FAST feature point is also calculated, so that the BRIEF descriptor gains rotation invariance and the robustness of feature detection is improved.
The feature matching module is configured to: matching ORB characteristics of adjacent frame images, comprising:
calculating the gradient of the image function f(x, y) at the pixel point (x, y), which is the vector
G(x, y) = [Gx, Gy]ᵀ;
the gradient direction is:
Φ(x, y) = tan⁻¹(Gx/Gy)
the gradient magnitude is:
|G(x, y)| = √(Gx² + Gy²)
wherein Gx and Gy represent the gradients in the x-direction and y-direction, respectively;
and when the deviation of the gradient direction and the gradient amplitude is within a second threshold value, judging that the corresponding pixel points are successfully matched.
Further, the feature detection and tracking unit further comprises a feature re-detection module configured to: in the process of matching the ORB features of the adjacent frame images, if the total number or proportion of the matched feature points is lower than a first threshold (for example, the proportion is used as the threshold, for example, 30%), the ORB features of the adjacent frame images are detected again.
The essential matrix estimation unit is configured to: and calculating the essential matrix according to the corresponding relation of the characteristic points between the adjacent frame images. Specifically, in order to reduce matching errors caused by imperfect feature matching algorithm, the essential matrix estimation unit calculates essential matrices between a plurality of groups of adjacent frame images according to the corresponding relationship of feature points between the adjacent frame images, and determines the used essential matrix from the plurality of groups of essential matrices.
In some embodiments, because the feature tracking algorithm is imperfect and feature matching errors would otherwise affect the tracking, the essential matrix estimation unit performs a predetermined number of iterations with the RANSAC algorithm, randomly sampling five points from the set of correspondences in each iteration, calculating the corresponding essential matrix, and checking whether the pixel points that were not sampled are inliers; the essential matrix with the largest number of checked inliers among the iterations is selected as the essential matrix to be used.
The pose change relationship calculation unit is configured to: calculate the pose change relation of the adjacent frame images according to the essential matrix.
For the essential matrix E, the camera matrix [ R | t ] for the second image can be computed by SVD decomposition.
E = UΣVᵀ
R = UW⁻¹Vᵀ
t^ = VWΣVᵀ
wherein t^ is the antisymmetric (skew-symmetric) matrix of the vector t and W is a predefined constant matrix of the decomposition.
the reprojection unit is configured to: and according to the three-dimensional coordinates of the pixel points on the first frame image in the adjacent frame images, utilizing the pose change relation calculated by the pose change relation calculation unit to re-project the pixel points on the image plane corresponding to the first frame image onto the image plane corresponding to the second frame image in the adjacent frame images so as to optimize the luminosity error of the pose change relation.
When the reprojection unit reprojects the pixel points on the image plane corresponding to the first frame image to the image plane corresponding to the second frame image in the adjacent frame image, if ORB feature matching fails, the pose change relationship is initialized to be an identity matrix.
The reprojection unit includes an error optimization module configured to optimize the photometric error of the pose change relation, i.e. to minimize
T_{k,k-1} = arg min_T (1/2) Σ_i ‖δI(T_{k,k-1}, u_i)‖²
wherein T_{k,k-1} is the pose change relation, δI(T_{k,k-1}, u_i) is the luminance difference after pixel projection, u is the pixel coordinate, and I is the gray value.
Knowing that a point (three-dimensional point) has pixel position (u, v) and depth d in the image plane I_{k-1}, the pose T_{k,k-1} gives the three-dimensional coordinate p_k of the point in the current frame, which is projected through the camera parameters into the image plane I_k at (u', v'), completing the re-projection as shown in FIG. 3. The brightness value of the feature point is assumed unchanged between adjacent frames over a short time, giving the pixel gradient photometric error around the three-dimensional point, and the pose is continuously optimized to minimize the residual.
The mapping unit is configured to: and accumulating the relative quantity between every two adjacent frame images according to the pose change relation between every two adjacent frame images to obtain the positioning track.
The preceding units obtain the relative position relation calculated between the two frames, namely the rotation R_{k,k-1} and translation t_{k,k-1}. Since the camera matrix of the first image is taken as the reference (assumed to be at zero pose), the relative quantities calculated each time are accumulated:
R_k = R_{k-1} R_{k,k-1}
t_k = t_{k-1} + R_{k-1} t_{k,k-1}
In some embodiments, the synchronized positioning and mapping system further comprises a loop detection and optimization unit configured to: perform loop detection with a bag-of-words model; add each group of newly input frame images into the factor graph model, adding the corresponding pose constraint relations and setting initialization values so that a pose information matrix is formed as each group of frame images is added; and perform linearization on the current pose information matrix, decompose the pose information matrix by QR decomposition, and remove the relationships between factor nodes with the sparsification processing of the factor graph, so as to obtain updated pose information.
The invention is not limited to the foregoing embodiments. The invention extends to any novel feature or combination of features disclosed in this specification, and to any novel method or process step or combination of steps disclosed.
Claims (12)
1. A synchronous positioning and map building method is characterized by comprising the following steps:
detecting ORB characteristics of adjacent frame images, and matching the ORB characteristics of the adjacent frame images;
calculating an essential matrix according to the corresponding relation of the characteristic points between the adjacent frame images;
calculating the pose change relation of the adjacent frame images according to the essential matrix;
acquiring three-dimensional coordinates of pixel points on a first frame image in adjacent frame images, and re-projecting pixel points on an image plane corresponding to the first frame image onto an image plane corresponding to a second frame image in the adjacent frame images by using the pose change relationship; optimizing photometric errors of pose change relations;
and accumulating the relative quantity between every two adjacent frame images according to the pose change relation between every two adjacent frame images to obtain the positioning track.
2. The method of claim 1, wherein the calculating the essential matrix according to the correspondence of the feature points between the adjacent frame images comprises:
calculating essential matrices between a plurality of groups of adjacent frame images according to the correspondence of the feature points between the adjacent frame images, and determining the essential matrix to be used from the plurality of groups of essential matrices.
3. The method according to claim 2, wherein the calculating the essential matrix between the plurality of sets of adjacent frame images is performed by a five-point algorithm according to the corresponding relationship of the feature points between the adjacent frame images.
4. The method according to claim 3, wherein the calculating the essential matrices between a plurality of groups of adjacent frame images according to the correspondence of the feature points between the adjacent frame images, and determining the essential matrix to be used from the plurality of groups of essential matrices, comprises:
performing a predetermined number of iterations with the RANSAC algorithm, randomly sampling five points from the set of correspondences in each iteration, calculating the corresponding essential matrix, and checking whether the pixel points that were not sampled are inliers;
and selecting the essential matrix with the largest number of checked inliers among the iterations as the essential matrix to be used.
5. The synchronized positioning and mapping method of claim 1, further comprising: in the process of matching the ORB characteristics of the adjacent frame images, if the total number or the proportion of the matched characteristic points is lower than a first threshold value, the ORB characteristics of the adjacent frame images are detected again.
6. The synchronized localization and mapping method of claim 1 or 5, wherein the matching of ORB features between adjacent frame images comprises:
calculating the gradient of the image function f(x, y) at the pixel point (x, y), which is the vector
G(x, y) = [Gx, Gy]ᵀ;
the gradient direction is:
Φ(x, y) = tan⁻¹(Gx/Gy)
the gradient magnitude is:
|G(x, y)| = √(Gx² + Gy²)
wherein Gx and Gy represent the gradients in the x-direction and y-direction, respectively;
and when the deviation of the gradient direction and the gradient amplitude is within a second threshold value, judging that the corresponding pixel points are successfully matched.
7. The synchronous localization and mapping method of claim 1, wherein when a pixel point on an image plane corresponding to a first frame image is re-projected onto an image plane corresponding to a second frame image in an adjacent frame image, if ORB feature matching fails, the pose change relationship is initialized to an identity matrix.
8. The synchronous localization and mapping method of claim 1, wherein the optimizing the photometric error of the pose change relation comprises minimizing T_{k,k-1} = arg min_T (1/2) Σ_i ‖δI(T_{k,k-1}, u_i)‖², wherein δI(T_{k,k-1}, u_i) is the luminance difference after pixel projection, u is the pixel coordinate, and I is the gray value.
9. The synchronized positioning and mapping method of claim 1, wherein after obtaining the positioning track, the synchronized positioning and mapping method further comprises:
performing loop detection by using a bag-of-words model;
adding each group of newly input frame images into the factor graph model, adding corresponding pose constraint relations and setting initialization values to form a pose information matrix when each group of frame images is added;
and performing linearization on the current pose information matrix, decomposing the pose information matrix by QR decomposition, and removing the relationships between factor nodes with the sparsification processing of the factor graph, so that updated pose information is obtained.
10. The method of claim 1, wherein the adjacent frame images are de-distorted before the ORB features of the adjacent frame images are detected.
11. A synchronized positioning and mapping system, comprising: the system comprises a feature detection and tracking unit, an essential matrix estimation unit, a pose change relation calculation unit, a re-projection unit and a mapping unit, wherein:
the feature detection and tracking unit is configured to: detect ORB features of adjacent frame images, and match the ORB features of the adjacent frame images;
the essential matrix estimation unit is configured to: calculate an essential matrix according to the correspondence of the feature points between the adjacent frame images;
the pose change relationship calculation unit is configured to: calculate the pose change relation of the adjacent frame images according to the essential matrix;
the reprojection unit is configured to: according to the three-dimensional coordinates of the pixel points on the first frame image in the adjacent frame images, utilizing the pose change relation calculated by the pose change relation calculation unit to re-project the pixel points on the image plane corresponding to the first frame image onto the image plane corresponding to the second frame image in the adjacent frame images, and optimizing the luminosity error of the pose change relation;
the mapping unit is configured to: and accumulating the relative quantity between every two adjacent frame images according to the pose change relation between every two adjacent frame images to obtain the positioning track.
12. The synchronized positioning and mapping system of claim 11, wherein the feature detection and tracking unit includes a feature re-detection module configured to: in the process of matching the ORB characteristics of the adjacent frame images, if the total number or the proportion of the matched characteristic points is lower than a first threshold value, the ORB characteristics of the adjacent frame images are detected again.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011427763.6A CN112541423A (en) | 2020-12-09 | 2020-12-09 | Synchronous positioning and map construction method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011427763.6A CN112541423A (en) | 2020-12-09 | 2020-12-09 | Synchronous positioning and map construction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112541423A true CN112541423A (en) | 2021-03-23 |
Family
ID=75019644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011427763.6A Pending CN112541423A (en) | 2020-12-09 | 2020-12-09 | Synchronous positioning and map construction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112541423A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105856230A (en) * | 2016-05-06 | 2016-08-17 | 简燕梅 | ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot |
CN107481315A (en) * | 2017-06-29 | 2017-12-15 | 重庆邮电大学 | A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms |
CN108537848A (en) * | 2018-04-19 | 2018-09-14 | 北京工业大学 | A kind of two-stage pose optimal estimating method rebuild towards indoor scene |
CN109509211A (en) * | 2018-09-28 | 2019-03-22 | 北京大学 | Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology |
CN110044354A (en) * | 2019-03-28 | 2019-07-23 | 东南大学 | A kind of binocular vision indoor positioning and build drawing method and device |
CN111461998A (en) * | 2020-03-11 | 2020-07-28 | 中国科学院深圳先进技术研究院 | Environment reconstruction method and device |
CN111968129A (en) * | 2020-07-15 | 2020-11-20 | 上海交通大学 | Instant positioning and map construction system and method with semantic perception |
Non-Patent Citations (6)
Title |
---|
任桢: "图优化的移动机器人SLAM算法研究", 《中国优秀硕士学位论文全文数据库》, 15 September 2019 (2019-09-15), pages 26 * |
李同等: "基于ORB词袋模型的SLAM回环检测研究", 《信息通信》, no. 10, 15 October 2017 (2017-10-15) * |
梁明杰等: "基于图优化的同时定位与地图创建综述", 《机器人》, no. 04, 15 July 2013 (2013-07-15) * |
段震灏等: "基于GC-RANSAC算法的单目视觉同时定位与地图构建", 《长春理工大学学报(自然科学版)》, no. 01, 15 February 2020 (2020-02-15) * |
艾青林等: "基于ORB关键帧匹配算法的机器人SLAM实现", 《机电工程》, vol. 3, no. 05, 20 May 2016 (2016-05-20), pages 3-4 * |
蒋林等: "一种点线特征融合的双目同时定位与地图构建方法", 《科学技术与工程》, no. 12, 28 April 2020 (2020-04-28) * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112801077A (en) * | 2021-04-15 | 2021-05-14 | 智道网联科技(北京)有限公司 | Method for SLAM initialization of autonomous vehicles and related device |
CN113284176A (en) * | 2021-06-04 | 2021-08-20 | 深圳积木易搭科技技术有限公司 | Online matching optimization method combining geometry and texture and three-dimensional scanning system |
CN113284176B (en) * | 2021-06-04 | 2022-08-16 | 深圳积木易搭科技技术有限公司 | Online matching optimization method combining geometry and texture and three-dimensional scanning system |
CN113808169A (en) * | 2021-09-18 | 2021-12-17 | 南京航空航天大学 | ORB-SLAM-based large-scale equipment structure surface detection path planning method |
CN113808169B (en) * | 2021-09-18 | 2024-04-26 | 南京航空航天大学 | ORB-SLAM-based large equipment structure surface detection path planning method |
CN114972514A (en) * | 2022-05-30 | 2022-08-30 | 歌尔股份有限公司 | SLAM positioning method, device, electronic equipment and readable storage medium |
CN114972514B (en) * | 2022-05-30 | 2024-07-02 | 歌尔股份有限公司 | SLAM positioning method, SLAM positioning device, electronic equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112634451B (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
CN109993113B (en) | Pose estimation method based on RGB-D and IMU information fusion | |
CN109345588B (en) | Tag-based six-degree-of-freedom attitude estimation method | |
CN112734852B (en) | Robot mapping method and device and computing equipment | |
CN111210463B (en) | Virtual wide-view visual odometer method and system based on feature point auxiliary matching | |
Kang et al. | Detection and tracking of moving objects from a moving platform in presence of strong parallax | |
CN112541423A (en) | Synchronous positioning and map construction method and system | |
CN108682027A (en) | VSLAM realization method and systems based on point, line Fusion Features | |
CN111462207A (en) | RGB-D simultaneous positioning and map creation method integrating direct method and feature method | |
Liu et al. | Direct visual odometry for a fisheye-stereo camera | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN110570474B (en) | Pose estimation method and system of depth camera | |
CN111882602B (en) | Visual odometer implementation method based on ORB feature points and GMS matching filter | |
CN111998862B (en) | BNN-based dense binocular SLAM method | |
CN112419497A (en) | Monocular vision-based SLAM method combining feature method and direct method | |
CN113744315B (en) | Semi-direct vision odometer based on binocular vision | |
CN114234967A (en) | Hexapod robot positioning method based on multi-sensor fusion | |
CN116128966A (en) | Semantic positioning method based on environmental object | |
CN117367427A (en) | Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment | |
CN112179373A (en) | Measuring method of visual odometer and visual odometer | |
CN115147344A (en) | Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance | |
Yuan et al. | A method of vision-based state estimation of an unmanned helicopter | |
Jaekel et al. | Robust multi-stereo visual-inertial odometry | |
Richardson et al. | PAS: visual odometry with perspective alignment search |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210323 |
|
RJ01 | Rejection of invention patent application after publication |