CN109974743B - Visual odometer based on GMS feature matching and sliding window pose graph optimization - Google Patents


Info

Publication number: CN109974743B
Application number: CN201910195323.3A
Authority: CN (China)
Prior art keywords: frame, feature, matching, pose, image
Legal status: Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN109974743A
Inventors: 陈佩, 谢晓明
Current Assignee: National Sun Yat Sen University
Original Assignee: National Sun Yat Sen University
Application filed by National Sun Yat Sen University; granted as CN109974743B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Abstract

The invention belongs to the field of computer vision, and particularly relates to an RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization. The method adopts the GMS (grid motion statistics) algorithm, in place of the distance-threshold plus RANSAC (random sample consensus) scheme commonly used in the prior art, to reject false matches; it can still screen out a sufficient number of correct matching point pairs when the relative motion between images is large and the brightness change is large, improving the robustness of the system. The invention adopts sliding window pose graph optimization to reduce the accumulated error of pose estimation; compared with prior schemes that maintain a local map or design a more complex objective function, it has higher real-time performance while preserving the accuracy of the visual odometer.

Description

Visual odometer based on GMS feature matching and sliding window pose graph optimization
Technical Field
The invention belongs to the field of computer vision, and particularly relates to an RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization.
Background
A visual odometer estimates the pose of a robot in real time by analyzing a sequence of related images with machine vision techniques. It can overcome the defects of the traditional wheel odometer, locate more accurately, and operate in environments that the Global Positioning System (GPS) cannot cover, such as indoor environments and interplanetary exploration. Visual odometers have therefore gained wide attention and application in mobile robot positioning and navigation.
Currently, the two mainstream approaches to visual odometry are the feature point method and the direct method. The feature point method estimates the relative pose between image frames in three steps: feature extraction, feature matching, and minimization of the reprojection error. As the earliest visual odometry solution, the feature point method has long been regarded as the mainstream approach; it runs stably, is insensitive to dynamic objects, and is relatively mature, but it also has certain problems. Its feature extraction and matching steps are time-consuming and prone to false matches, and its accuracy degrades greatly when images suffer from motion blur, poor lighting conditions, abundant repeated textures, or a lack of texture. The direct method is based on the brightness constancy assumption, which holds that corresponding pixels in two frames should have the same brightness value. On this assumption, pixel brightness values are used directly, through the camera model, to construct a photometric error, and the inter-frame pose is estimated by minimizing this photometric error. According to the number of pixels used, the direct method divides into dense and semi-dense variants: the dense direct method computes the photometric error over all pixels in the image, which is computationally huge, while the semi-dense direct method uses only pixels carrying sufficient gradient information, retaining the accuracy of the relative pose estimation while achieving a degree of real-time performance.
The direct method yields robust and accurate pose estimates when camera motion is relatively small, and because it makes full use of image information it maintains good accuracy under motion blur, repeated textures, and missing texture. Its main problem is that brightness constancy is a strong assumption: it can be considered valid when the brightness difference between frames is small, but it is likely to fail when the difference is large, in which case the accuracy of a direct-method visual odometer drops sharply.
When implementing a visual odometer with either the feature point method or the direct method, one generally does not simply estimate relative poses between consecutive frame pairs; technical means are used to reduce the accumulated error. These mainly include maintaining a local map and designing more complex photometric error terms. Maintaining a local map requires inserting new map points into the map and deleting old ones, and both operations increase computation and reduce the odometer's real-time performance. A more complex photometric error calculation can effectively reduce the accumulated error, but minimizing it demands more computation, again reducing the real-time performance of the system.
Disclosure of Invention
In order to overcome at least one defect of the prior art, the invention provides an RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization. The GMS (grid motion statistics) algorithm is adopted to reject false matches; it can still screen out a sufficient number of correct matching point pairs when the relative motion between images is large and the brightness change is large, improving the robustness of the system. Sliding window pose graph optimization is adopted to reduce the accumulated error of pose estimation, giving higher real-time performance.
In order to solve the above technical problems, the invention adopts the following technical scheme: an RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization, comprising the following steps:
Step 1, reading the first frame RGB image as the reference frame and the first frame depth image as the depth information of the reference frame through an RGB-D camera, extracting feature points of the reference frame and computing ORB feature descriptors; the extracted feature points are the pixel positions of FAST (Features from Accelerated Segment Test) corners in the image. The ORB feature compares the brightness value of a corner with the brightness of 128 surrounding pixel points, recording 1 where the corner is brighter and 0 otherwise, finally generating a 128-dimensional binary vector as the feature descriptor of the corner.
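The descriptor described above can be sketched as follows. The 128 sampling offsets are a hypothetical random pattern chosen purely for illustration, since the text does not specify the actual sampling layout; production systems typically use a library implementation such as OpenCV's ORB.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical fixed sampling pattern: 128 offsets within an 8-pixel radius box.
OFFSETS = rng.integers(-8, 9, size=(128, 2))

def describe(gray, corner):
    """128-d binary descriptor: bit is 1 where the corner pixel is brighter
    than the sampled neighbour, 0 otherwise (as described in the text)."""
    y, x = corner
    bits = np.array([gray[y + dy, x + dx] < gray[y, x] for dy, dx in OFFSETS],
                    dtype=np.uint8)
    return bits

# Toy grayscale image; a real system would describe detected FAST corners.
gray = rng.integers(0, 256, size=(64, 64)).astype(np.int32)
d = describe(gray, (32, 32))
```

Each bit records one brightness comparison, so descriptor similarity can later be measured with the Hamming distance, as step 3 does.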
Step 2, reading the next frame RGB image as the current frame and the next frame depth image as the depth information of the current frame, extracting feature points of the current frame and computing ORB feature descriptors.
Step 3, performing preliminary feature matching between the feature points extracted from the reference frame and the current frame; the Hamming distance is used as the measure of similarity between two feature points: for each feature point on the reference frame, the Hamming distance to every feature point on the current frame is computed one by one, and the feature point with the minimum Hamming distance is selected as its match, generating a matching point pair.
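A minimal sketch of this brute-force nearest-neighbour matching under the Hamming distance, on toy 0/1 descriptor arrays (real descriptors would be packed bits and matched with a library matcher):

```python
import numpy as np

def hamming_match(des_ref, des_cur):
    """For each row of des_ref (0/1 arrays), return the index of the nearest
    row of des_cur under Hamming distance (count of differing bits)."""
    dist = (des_ref[:, None, :] != des_cur[None, :, :]).sum(axis=2)
    return dist.argmin(axis=1)

rng = np.random.default_rng(1)
des_cur = rng.integers(0, 2, size=(5, 128))
des_ref = des_cur[[3, 0, 4]]          # reference descriptors copied from current
matches = hamming_match(des_ref, des_cur)
# matches -> [3, 0, 4]: each reference descriptor finds its identical twin.
```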
Step 4, rejecting false matches among the feature matching point pairs obtained in step 3 through the GMS (grid motion statistics) algorithm; the GMS algorithm proposes an assumption based on the smoothness of motion: let a feature point p1 on the first frame image have matching point p2 on the second frame image; if this match is correct, then the matching points of the feature points in the 3 × 3 grid centered on p1 all fall, with high probability, in the 3 × 3 grid centered on p2 on the second frame image. Based on this assumption, the two frames of images are divided into grids and the number of matching points in each pair of corresponding grid areas is counted; if the number of matching points is larger than a threshold T0, the matches are considered correct, and otherwise false, where T0 is calculated as follows:
T0 = α·√n
wherein n is the average number of feature points in each grid; in this scheme, α takes the value 6. Feature matching based on the GMS (grid motion statistics) algorithm, as one of the key techniques of the invention, has the following effects:
1. compared with the commonly used RANSAC (random sample consensus) algorithm, GMS (grid motion statistics) can still screen out a sufficient number of correct matching pairs when inter-frame motion is relatively large and inter-frame brightness change is large, ensuring to a certain extent the accuracy of the subsequent pose calculation;
2. the algorithm rejects false matches based on statistics and has high real-time performance.
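The acceptance rule of step 4 can be sketched as follows; the grid partition and match counting are omitted, and only the threshold test T0 = α·√n with the document's α = 6 is shown:

```python
import math

def gms_threshold(avg_features_per_cell, alpha=6.0):
    """T0 = alpha * sqrt(n), n being the mean number of features per grid cell."""
    return alpha * math.sqrt(avg_features_per_cell)

def accept_cell(match_count, avg_features_per_cell, alpha=6.0):
    """True if the matches landing in this cell pair are kept as correct."""
    return match_count > gms_threshold(avg_features_per_cell, alpha)

# With n = 25 features per cell on average, T0 = 30: a cell pair holding 31
# coherent matches is kept, one holding only 10 is rejected.
```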
Step 5, converting the two-dimensional to two-dimensional matching point pairs of the reference frame and the current frame obtained through step 4 into three-dimensional to two-dimensional matching point pairs: the feature points of the reference frame screened in step 4 are projected into three-dimensional space using the camera projection model and the depth information of the reference frame to obtain their three-dimensional space coordinates. The calculation formula of the camera projection model is as follows:

P = d·K^-1·p

wherein p is the homogeneous pixel coordinate of the feature point, K is the camera intrinsic matrix, d is the depth of the feature point, and P is the three-dimensional space coordinate of the feature point.
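A sketch of this back-projection with NumPy; the intrinsic matrix values here are made-up examples, not taken from the patent:

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def back_project(u, v, depth, K):
    """Lift pixel (u, v) with known depth to a 3-D point: P = d * K^-1 * p."""
    p = np.array([u, v, 1.0])            # homogeneous pixel coordinate
    return depth * np.linalg.inv(K) @ p

P = back_project(319.5, 239.5, 2.0, K)
# The principal point at depth 2 m lands on the optical axis: P ≈ [0, 0, 2].
```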
Step 6, minimizing the reprojection error to obtain a preliminary inter-frame relative pose; the objective function of minimizing the reprojection error is:

ξ* = argmin_ξ Σ ||π(T(P; ξ)) - p||^2

wherein ξ is the relative pose between the reference frame and the current frame to be estimated, T(P; ξ) represents the pose transformation from the reference frame to the current frame applied to the point P, and π represents the projection model of the camera, i.e. the transformation projecting three-dimensional space onto the image.
Step 7, minimizing the brightness error, taking the preliminary inter-frame relative pose obtained in step 6 as the initial value of the iterative optimization, to obtain a refined inter-frame relative pose; the objective function of minimizing the brightness error is:

ξ* = argmin_ξ Σ ||I2(π(T(P; ξ))) - I1(p)||^2

wherein I2(·) represents the brightness value of a pixel point in the current frame and I1(·) that of a pixel point in the reference frame.
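The brightness error term can be sketched as follows, assuming a candidate pose given as a rotation matrix R and translation t; the images and intrinsics are toy values, and nearest-neighbour sampling stands in for the bilinear interpolation a real system would use:

```python
import numpy as np

# Hypothetical intrinsics for a 32x32 toy image.
K = np.array([[100.0,   0.0, 16.0],
              [  0.0, 100.0, 16.0],
              [  0.0,   0.0,  1.0]])

def photometric_residual(P, I1_at_p, R, t, I2, K):
    """r = I2(pi(R P + t)) - I1(p); nearest-neighbour sampling for brevity."""
    Pc = R @ P + t                       # point in current-frame coordinates
    u, v, w = K @ Pc                     # pinhole projection
    u, v = int(round(u / w)), int(round(v / w))
    return float(I2[v, u]) - float(I1_at_p)

I2 = np.zeros((32, 32))
I2[16, 16] = 120.0                       # current-frame brightness at the centre
r = photometric_residual(np.array([0.0, 0.0, 2.0]), 100.0,
                         np.eye(3), np.zeros(3), I2, K)
# A point on the optical axis projects to the principal point: r = 120 - 100 = 20.
```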
Step 8, sliding window pose graph optimization: the poses of the current frame and of several frames before it are selected as the poses to be iteratively optimized inside the window; the pose optimization problem is expressed as a graph, in which each vertex is the pose of a frame and each edge is the relative pose between two frames, and the error calculation formula is as follows:

e_ij = ln(T_ij^-1 · T_i^-1 · T_j)^∨

wherein T_ij represents the relative motion from frame j to frame i, and T_i, T_j respectively represent the poses of the i-th frame and the j-th frame; image frames outside the window are still kept in the graph, but during iteration their poses are marginalized and not updated.
As another key technique of the scheme, the sliding window optimization has the following effects:
1. it provides global information for the pose calculation of the current frame, effectively reducing the accumulated error and improving the accuracy of the visual odometer;
2. the poses of image frames outside the window are kept in the graph but marginalized and not iteratively updated, so the pose graph optimization stays at a fixed size, reducing the number of iterations and improving the real-time performance of the visual odometer.
Step 9, taking the current frame as the new reference frame and its depth information as the depth information of the reference frame, and returning to step 2.
Compared with the prior art, the beneficial effects are:
1. structureless frame-to-frame relative pose estimation is adopted, so no local map needs to be built or maintained, improving the real-time performance of the visual odometer;
2. feature matching based on the GMS (grid motion statistics) algorithm can screen out a sufficient number of matching point pairs for subsequent pose calculation even under large inter-frame motion and large brightness change, improving the robustness of the visual odometer;
3. the estimated poses are nonlinearly optimized through sliding window pose graph optimization, improving the accuracy of the visual odometer, while the window size constrains the scale of the pose graph optimization, reducing the number of iterations of the nonlinear optimization and ensuring the real-time performance of the visual odometer.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a sliding window pose graph optimization in an embodiment of the present invention.
FIG. 3 is a schematic structural view of a Graph employed in the present invention.
Detailed Description
The drawings are for illustration purposes only and are not to be construed as limiting the invention; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
Example 1:
as shown in fig. 1, an RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization includes the following steps:
Step 1, reading the first frame RGB image as the reference frame and the first frame depth image as the depth information of the reference frame through an RGB-D camera, extracting feature points of the reference frame and computing ORB feature descriptors; the extracted feature points are the pixel positions of FAST (Features from Accelerated Segment Test) corners in the image. The ORB feature compares the brightness value of a corner with the brightness of 128 surrounding pixel points, recording 1 where the corner is brighter and 0 otherwise, finally generating a 128-dimensional binary vector as the feature descriptor of the corner.
Step 2, reading the next frame RGB image as the current frame and the next frame depth image as the depth information of the current frame, extracting feature points of the current frame and computing ORB feature descriptors.
Step 3, performing preliminary feature matching between the feature points extracted from the reference frame and the current frame; the Hamming distance is used as the measure of similarity between two feature points: for each feature point on the reference frame, the Hamming distance to every feature point on the current frame is computed one by one, and the feature point with the minimum Hamming distance is selected as its match, generating a matching point pair.
Step 4, rejecting false matches among the feature matching point pairs obtained in step 3 through the GMS (grid motion statistics) algorithm; the GMS algorithm proposes an assumption based on the smoothness of motion: let a feature point P1 on the first frame image have matching point P2 on the second frame image; if this match is correct, then the matching points of the feature points in the 3 × 3 grid centered on P1 all fall, with high probability, in the 3 × 3 grid centered on P2 on the second frame image. Based on this assumption, the two frames of images are divided into grids and the number of matching points in each pair of corresponding grid areas is counted; if the number of matching points is larger than a threshold T0, the matches are considered correct, and otherwise false, where T0 is calculated as follows:
T0 = α·√n
wherein n is the average number of feature points in each grid; in this embodiment, α takes the value 6.
Step 5, converting the two-dimensional to two-dimensional matching point pairs of the reference frame and the current frame obtained through step 4 into three-dimensional to two-dimensional matching point pairs: the feature points of the reference frame screened in step 4 are projected into three-dimensional space using the camera projection model and the depth information of the reference frame to obtain their three-dimensional space coordinates. The calculation formula of the camera projection model is as follows:

P = d·K^-1·p

wherein p is the homogeneous pixel coordinate of the feature point, K is the camera intrinsic matrix, d is the depth of the feature point, and P is the three-dimensional space coordinate of the feature point.
Step 6, minimizing the reprojection error; the objective function of minimizing the reprojection error is:

ξ* = argmin_ξ Σ ||π(T(P; ξ)) - p||^2

wherein ξ is the relative pose between the reference frame and the current frame to be estimated, T(P; ξ) represents the pose transformation from the reference frame to the current frame applied to the point P, and π represents the projection model of the camera, i.e. the transformation projecting three-dimensional space onto the image;
a reprojection error least squares objective function is established according to the above formula, and iterative optimization with the LM (Levenberg-Marquardt) algorithm yields the preliminary inter-frame relative pose.
Step 7, minimizing the brightness error; the objective function of minimizing the brightness error is:

ξ* = argmin_ξ Σ ||I2(π(T(P; ξ))) - I1(p)||^2

wherein I2(·) represents the brightness value of a pixel point in the current frame and I1(·) that of a pixel point in the reference frame; p is the pixel coordinate of the feature point; P is the three-dimensional space coordinate of the feature point;
a brightness error least squares objective function is established according to the above formula; taking the preliminary pose obtained in step 6 as the initial value of the iterative optimization in this step, iterative optimization with the LM (Levenberg-Marquardt) algorithm yields the refined inter-frame relative pose.
Step 8, as shown in fig. 2, sliding window pose graph optimization: the poses of 10 frames in total, the current frame and the 9 frames before it, are selected as the poses to be iteratively optimized inside the window; as shown in fig. 3, the pose optimization problem is expressed as a graph, in which each vertex is the pose of a frame and each edge is the relative pose between two frames, and the error calculation formula is as follows:

e_ij = ln(T_ij^-1 · T_i^-1 · T_j)^∨

wherein T_ij represents the relative motion from frame j to frame i, and T_i, T_j respectively represent the poses of the i-th frame and the j-th frame; image frames outside the window are still kept in the graph, but during iteration their poses are marginalized and not updated.
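The edge error above can be sketched with homogeneous 4 × 4 transforms; stacking the rotation vector and the translation part as the vee-map residual is an assumption, since the patent does not spell out the parameterisation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def make_T(rotvec, t):
    """Build a 4x4 homogeneous transform from a rotation vector and translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rotvec).as_matrix()
    T[:3, 3] = t
    return T

def pose_graph_residual(T_ij, T_i, T_j):
    """e_ij = log(T_ij^-1 * T_i^-1 * T_j); zero when the measured relative
    motion T_ij agrees with the two vertex poses."""
    E = np.linalg.inv(T_ij) @ np.linalg.inv(T_i) @ T_j
    rot = Rotation.from_matrix(E[:3, :3]).as_rotvec()
    return np.concatenate([rot, E[:3, 3]])

T_i = make_T([0.0, 0.0, 0.1], [1.0, 0.0, 0.0])
T_j = make_T([0.0, 0.0, 0.3], [2.0, 0.0, 0.0])
T_ij = np.linalg.inv(T_i) @ T_j          # edge consistent with the vertices
r = pose_graph_residual(T_ij, T_i, T_j)
# r ≈ zeros(6) because the edge agrees with the two vertex poses.
```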
The window-optimized pose graph is built with the g2o library: a block solver, a linear equation solver, the iterative optimization algorithm and the number of iterations are set, and after initialization the solver's optimization interface is called to optimize the pose graph.
Step 9, storing the relative poses of the reference frame and the current frame after pose graph optimization.
Step 10, taking the current frame as the reference frame, and returning to step 2.
It should be understood that the above embodiments are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims of the invention.

Claims (6)

1. An RGB-D vision odometer based on GMS feature matching and sliding window pose graph optimization is characterized by comprising the following steps:
s1, reading a first frame RGB image as a reference frame through an RGB-D camera, reading a first frame depth image as depth information of the reference frame, extracting feature points of the reference frame and calculating an ORB feature descriptor;
s2, reading a next frame of RGB image as a current frame, reading a next frame of depth image as depth information of the current frame, extracting feature points of the current frame and calculating an ORB feature descriptor;
s3, performing primary feature matching on feature points extracted from the reference frame and the current frame;
s4, rejecting error matching through a GMS algorithm for the feature matching point pair obtained in the step S3;
s5, obtaining a two-dimensional and two-dimensional matching point pair of the reference frame and the current frame through the step S4, projecting the feature points screened in the step S4 in the reference frame into a three-dimensional space by using a camera projection model and depth information of the reference frame to obtain three-dimensional space coordinates of the feature points, and converting the two-dimensional and two-dimensional matching point pair into a three-dimensional and two-dimensional matching point pair;
s6, minimizing a reprojection error, and obtaining a primary inter-frame relative pose through iterative optimization;
s7, minimizing the brightness error, and taking the preliminary inter-frame relative pose obtained in the step S6 as an initial value of iterative optimization to obtain a second suboptimal inter-frame relative pose;
s8, optimizing a sliding window pose graph, and selecting poses of the current frame and frames in front of the current frame as poses to be subjected to iterative optimization in a window; and representing a pose optimization problem by adopting a Graph, wherein the peak is the pose of each frame, the edge is the relative pose between two frames, and the error calculation formula is as follows:
Figure FDA0002765521250000011
wherein, TijRepresenting relative motion from frame j to frame i, Ti,TjRespectively representing the poses of the ith frame and the jth frame; the pose of the image frames outside the windows is kept in the Graph, but during iteration, the pose of the image frames outside the windows is marginalized and is not updated;
and S9, taking the current frame as a reference frame and the depth information of the current frame as the depth information of the reference frame, and returning to the step S2.
2. The RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization according to claim 1, wherein the feature points extracted in step S1 are the pixel positions of FAST (Features from Accelerated Segment Test) corners in the image; the brightness value of each corner is compared with the brightness of 128 surrounding pixel points, recording 1 where the corner is brighter and 0 otherwise, finally generating a 128-dimensional binary vector as the feature descriptor of the corner.
3. The RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization according to claim 2, wherein step S3 specifically comprises: using the Hamming distance as the measure of similarity between two feature points, computing for each feature point on the reference frame the Hamming distance to every feature point on the current frame one by one, and selecting the feature point with the minimum Hamming distance as its match to generate a matching point pair.
4. The RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization of claim 3, wherein the GMS algorithm proposes an assumption based on the smoothness of motion: let a feature point P1 on the first frame image have matching point P2 on the second frame image; if this match is correct, then the matching points of the feature points in the 3 × 3 grid centered on P1 all fall, with high probability, in the 3 × 3 grid centered on P2 on the second frame image; based on this assumption, the two frames of images are divided into grids and the number of matching points in the corresponding grid area is counted; if the number of matching points is larger than a threshold T0, the feature matching point pair is a correct matching point pair, otherwise it is an incorrect matching point pair, where T0 is calculated as follows:

T0 = α·√n

wherein n is the average number of feature points in each grid and α is a customizable parameter.
5. The RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization of claim 4, wherein the calculation formula of the camera projection model is as follows:

P = d·K^-1·p

wherein p is the homogeneous pixel coordinate of the feature point, K is the camera intrinsic matrix, d is the depth of the feature point, and P is the three-dimensional space coordinate of the feature point.
6. The RGB-D visual odometer based on GMS feature matching and sliding window pose graph optimization of claim 4, wherein the objective function for minimizing the reprojection error is:

ξ* = argmin_ξ Σ ||π(T(P; ξ)) - p||^2

wherein ξ is the relative pose between the reference frame and the current frame to be estimated, T(P; ξ) represents the pose transformation from the reference frame to the current frame; π represents the projection model of the camera, i.e. the transformation projecting three-dimensional space onto the image; p is the pixel coordinate of the feature point; and P is the three-dimensional space coordinate of the feature point.
CN201910195323.3A 2019-03-14 2019-03-14 Visual odometer based on GMS feature matching and sliding window pose graph optimization Expired - Fee Related CN109974743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910195323.3A CN109974743B (en) 2019-03-14 2019-03-14 Visual odometer based on GMS feature matching and sliding window pose graph optimization


Publications (2)

Publication Number Publication Date
CN109974743A CN109974743A (en) 2019-07-05
CN109974743B (en) 2021-01-01

Family

ID=67078903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910195323.3A Expired - Fee Related CN109974743B (en) 2019-03-14 2019-03-14 Visual odometer based on GMS feature matching and sliding window pose graph optimization

Country Status (1)

Country Link
CN (1) CN109974743B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838145B (en) * 2019-10-09 2020-08-18 西安理工大学 Visual positioning and mapping method for indoor dynamic scene
CN111047620A (en) * 2019-11-15 2020-04-21 广东工业大学 Unmanned aerial vehicle visual odometer method based on depth point-line characteristics
CN111144441B (en) * 2019-12-03 2023-08-08 东南大学 DSO photometric parameter estimation method and device based on feature matching
CN111161318A (en) * 2019-12-30 2020-05-15 广东工业大学 Dynamic scene SLAM method based on YOLO algorithm and GMS feature matching
CN111462190B (en) * 2020-04-20 2023-11-17 海信集团有限公司 Intelligent refrigerator and food material input method
CN112418288B (en) * 2020-11-17 2023-02-03 武汉大学 GMS and motion detection-based dynamic vision SLAM method
US11899469B2 (en) 2021-08-24 2024-02-13 Honeywell International Inc. Method and system of integrity monitoring for visual odometry
CN115115708B (en) * 2022-08-22 2023-01-17 荣耀终端有限公司 Image pose calculation method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105938619A (en) * 2016-04-11 2016-09-14 中国矿业大学 Visual odometer realization method based on fusion of RGB and depth information
CN106556412A (en) * 2016-11-01 2017-04-05 哈尔滨工程大学 The RGB D visual odometry methods of surface constraints are considered under a kind of indoor environment
CN107025668B (en) * 2017-03-30 2020-08-18 华南理工大学 Design method of visual odometer based on depth camera
US10416681B2 (en) * 2017-07-12 2019-09-17 Mitsubishi Electric Research Laboratories, Inc. Barcode: global binary patterns for fast visual inference
CN108537848B (en) * 2018-04-19 2021-10-15 北京工业大学 Two-stage pose optimization estimation method for indoor scene reconstruction

Similar Documents

Publication Publication Date Title
CN109974743B (en) Visual odometer based on GMS feature matching and sliding window pose graph optimization
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN106780576B (en) RGBD data stream-oriented camera pose estimation method
CN110108258B (en) Monocular vision odometer positioning method
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN110807809B (en) Light-weight monocular vision positioning method based on point-line characteristics and depth filter
CN108682027A (en) VSLAM realization method and systems based on point, line Fusion Features
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN103646391A (en) Real-time camera tracking method for dynamically-changed scene
US11367195B2 (en) Image segmentation method, image segmentation apparatus, image segmentation device
CN103854283A (en) Mobile augmented reality tracking registration method based on online study
CN111882602B (en) Visual odometer implementation method based on ORB feature points and GMS matching filter
CN111982103B (en) Point-line comprehensive visual inertial odometer method with optimized weight
CN110136174B (en) Target object tracking method and device
CN112652020B (en) Visual SLAM method based on AdaLAM algorithm
CN114677323A (en) Semantic vision SLAM positioning method based on target detection in indoor dynamic scene
CN115619826A (en) Dynamic SLAM method based on reprojection error and depth estimation
CN112446882A (en) Robust visual SLAM method based on deep learning in dynamic scene
Xu et al. Crosspatch-based rolling label expansion for dense stereo matching
CN113362377B (en) VO weighted optimization method based on monocular camera
CN111950599B (en) Dense visual odometer method for fusing edge information in dynamic environment
CN111160362B (en) FAST feature homogenizing extraction and interframe feature mismatching removal method
CN117315547A (en) Visual SLAM method for solving large duty ratio of dynamic object
KR101766823B1 (en) Robust visual odometry system and method to irregular illumination changes
CN116894876A (en) 6-DOF positioning method based on real-time image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210101
