CN112489083A - Image feature point tracking matching method based on ORB-SLAM algorithm - Google Patents

Image feature point tracking matching method based on ORB-SLAM algorithm

Info

Publication number
CN112489083A
Authority
CN
China
Prior art keywords
frame image
orb
feature point
frame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011418605.4A
Other languages
Chinese (zh)
Other versions
CN112489083B (en)
Inventor
钟心亮
朱世强
顾建军
李特
姜峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202011418605.4A priority Critical patent/CN112489083B/en
Publication of CN112489083A publication Critical patent/CN112489083A/en
Application granted granted Critical
Publication of CN112489083B publication Critical patent/CN112489083B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image feature point tracking and matching method based on the ORB-SLAM algorithm, which comprises the following steps: 1) extracting ORB feature points and descriptors from key frames and distributing the feature points uniformly with a quadtree; 2) predicting the feature point positions in a new frame based on a uniform acceleration motion model; 3) accurately solving the feature point positions with a multi-layer pyramid sparse optical flow method; 4) performing reverse sparse optical flow tracking to eliminate mismatches; 5) applying robust RANSAC to the feature matches obtained in step 4 to remove outliers; 6) solving the 6D pose from the remaining matching points and judging whether the current frame is a key frame. The method only needs to extract ORB feature points and descriptors on key frames; for tracking between key frames, the time-consuming descriptor computation is not needed.

Description

Image feature point tracking matching method based on ORB-SLAM algorithm
Technical Field
The invention relates to the technical field of computer vision, in particular to an image feature point tracking and matching method based on an ORB-SLAM algorithm.
Background
In recent years, with the continuous development of computer vision, SLAM technology has been widely applied in many fields, such as virtual reality, augmented reality, robotics and unmanned aerial vehicles. With the continuous development of computer hardware, real-time processing of visual information has become possible; real-time localization and map construction using visual information greatly increases the amount of acquired information while reducing the cost of intelligent robot products.
Visual SLAM mainly comprises a front-end visual odometry module, a back-end optimization module and a loop detection module. The front-end visual odometry performs matching and tracking by extracting feature points and descriptors from images and computes the relative pose between frames; the back-end optimization refines the poses by combining more historical information; and the loop detection module is mainly used to eliminate accumulated errors and improve the accuracy of localization and mapping. ORB-SLAM, being stable and rich in interfaces, has gradually become a benchmark visual SLAM system.
However, ORB-SLAM places relatively high demands on computing resources and is not suitable for low-end processors.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an image feature point tracking and matching method based on the ORB-SLAM algorithm, which increases the computation speed of camera tracking and pose estimation while improving tracking and matching accuracy.
The purpose of the invention is realized by the following technical scheme: an image feature point tracking and matching method based on the ORB-SLAM algorithm comprises the following steps:
(1) dividing the video into frame images, and for each of the first three frame images, extracting ORB feature points and descriptors and distributing the feature points uniformly with a quadtree;
(2) for the fourth frame image, predicting the positions of the ORB feature points of step (1) based on a uniform acceleration motion model;
(3) from the ORB feature point positions on the third frame image and the positions predicted in step (2), solving the motion increment of the corresponding ORB feature points between the third and fourth frame images with a sparse optical flow method, obtaining the accurate positions of the ORB feature points on the fourth frame image;
(4) performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and computing the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between a computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, the corresponding ORB feature points in the third and fourth frame images are removed, and only matching point pairs whose Euclidean distance is less than 1.5 pixels are retained;
(5) performing robust RANSAC estimation on the matching point pairs retained in step (4) and removing outliers, leaving the correct matching point pairs;
(6) solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if it is a key frame, extracting ORB feature points and descriptors, otherwise inputting the next frame image and repeating steps (2)-(6) until the tracking and matching of all image frames in the video is completed.
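By way of illustration, the following Python sketch (assuming OpenCV and NumPy are available) strings the six steps above together in a minimal loop. It is not the patented implementation: the quadtree distribution and the uniform-acceleration prediction are omitted, the key-frame criterion is reduced to an inlier count, and the intrinsic matrix, the file name "video.mp4" and all variable names are assumptions chosen only for this example.

```python
# Minimal sketch of the key-frame / tracked-frame loop (illustrative only).
import cv2
import numpy as np

K = np.array([[718.0, 0, 320.0],
              [0, 718.0, 240.0],
              [0, 0, 1.0]])                     # assumed camera intrinsics

orb = cv2.ORB_create(nfeatures=1000)            # step (1): ORB on key frames only
lk_params = dict(winSize=(21, 21), maxLevel=3)  # multi-layer pyramid LK (steps 3-4)

cap = cv2.VideoCapture("video.mp4")             # assumed input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
kps = orb.detect(cv2.equalizeHist(prev_gray), None)
prev_pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

while True:
    ok, frame = cap.read()
    if not ok or len(prev_pts) < 8:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # steps (2)-(3): forward sparse optical flow (here without motion prediction)
    cur_pts, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **lk_params)
    # step (4): reverse optical flow and 1.5-pixel consistency check
    back_pts, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, cur_pts, None, **lk_params)
    fb_err = np.linalg.norm(prev_pts - back_pts, axis=2).ravel()
    good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < 1.5)
    if good.sum() < 8:
        break
    p0, p1 = prev_pts[good], cur_pts[good]

    # step (5): RANSAC on the epipolar constraint removes the remaining outliers
    E, inl = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    if E is None or inl is None:
        break
    inl = inl.ravel().astype(bool)
    # step (6): recover the relative 6-DoF pose and decide whether this is a key frame
    _, R, t, _ = cv2.recoverPose(E, p0[inl], p1[inl], K)
    p1 = p1[inl]
    if inl.sum() < 150:                          # simplified key-frame criterion
        kps = orb.detect(cv2.equalizeHist(gray), None)
        p1 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

    prev_gray, prev_pts = gray, p1.reshape(-1, 1, 2)
```

In this sketch the descriptor computation is indeed triggered only when a new key frame is declared, which is the point the abstract emphasizes.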
Further, step (1) comprises the following sub-steps:
(1.1) performing histogram equalization on the frame image, then setting a scale factor s and a number of pyramid layers n for the frame image, and scaling the frame image down by the factor s to obtain an image pyramid I_0, I_1, ..., I_{n-1}, where I_0 denotes the original frame image, i is the pyramid index, i = 0, 1, ..., n-1, and I_i is the frame image reduced by the factor s^i; the union of the feature points extracted from the n images is taken as the FAST feature points of the frame image;
(1.2) computing, from the moments m_pq, the centroid of the image block of radius r around each FAST feature point; the vector from the FAST feature point coordinates to the centroid represents the direction of the FAST feature point, where the moments of the image block are:

m_pq = Σ_{x,y} x^p · y^q · I(x, y)

wherein I(x, y) is the gray value of the image at (x, y), p is a first index taking the value 0 or 1, and q is a second index taking the value 0 or 1;
(1.3) constructing a quadtree on the frame image for the FAST feature points obtained in step (1.1): for each child node, if the number of FAST feature points in the node equals 1, the node is not subdivided further; if it is greater than 1, the quadtree continues to subdivide until every node contains only one feature point or the number of nodes reaches the required number of feature points;
(1.4) extracting a BRIEF binary-string feature descriptor for each remaining FAST feature point: randomly selecting a pairs of pixel points (p_i, q_i) from the S×S neighborhood around the key point, rotating the point pairs by the feature point direction, and comparing the gray values of the two pixels of each pair; if the gray value of the first pixel is greater than that of the second, the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256; wherein S is the side length of the neighborhood, taken as 31 pixels.
Further, the step (2) comprises the following sub-steps:
(2.1) assuming that the camera moves with uniform acceleration in its degrees of freedom, the poses of the first, second, third and fourth frame images are denoted T_1, T_2, T_3 and T_4, and the velocities associated with the fourth, third and second frame images are denoted V_4, V_3 and V_2, which satisfy the uniform-acceleration condition that the velocity increment between adjacent frames is constant:

V_3 ⊖ V_2 = V_4 ⊖ V_3

where ⊖ denotes the increment operation between velocity transforms;
(2.2) from the pose T_1 of the first frame image and the velocity V_4 of the fourth frame image, estimating the pose of the fourth frame image as T_4'; a given ORB feature point position p_4 of the fourth frame image is then predicted by the projection

w · p_4 = K · (R_4' · X + t_4')

wherein X is the 3D point in the third frame image corresponding to the ORB feature point, (R_4', t_4') is the rotation and translation of the estimated pose T_4', K is the camera intrinsic matrix, and w is a scale factor;
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
Further, the key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
The invention has the beneficial effects that: because the image sequence arrives at 30 Hz and the motion between two adjacent frames is small, the invention extracts ORB feature points and descriptors only for key frames; between key frames, the ORB feature point positions are predicted with a uniform acceleration motion model and accurately tracked by forward and backward optical flow.
Drawings
FIG. 1 is a flowchart of an image feature point tracking matching method based on ORB-SLAM algorithm according to the present invention;
FIG. 2 is a flow chart of the present invention for feature extraction based on ORB-SLAM algorithm;
FIG. 3 is a schematic diagram of feature point prediction based on a uniform acceleration motion model according to the present invention;
FIG. 4 is a schematic diagram of feature point tracking for forward optical flow tracking and backward optical flow elimination according to the present invention.
Detailed Description
The principles and aspects of the present invention will be further explained with reference to the drawings and the embodiments, which are described in detail herein for the purpose of illustration only and are not intended to be limiting.
Fig. 1 is a flowchart of an image feature point tracking and matching method based on the ORB-SLAM algorithm, which specifically includes the following steps:
(1) dividing the video into frame images, respectively extracting ORB feature descriptors from the first three frame images and performing uniform distribution processing on the quadtree, wherein the flow is shown in FIG. 2:
(1.1) performing histogram equalization on the frame image so that the image is neither too bright nor too dark and its information is preserved, and then extracting FAST feature points. Because FAST feature points have neither scale invariance nor rotation invariance, scale invariance is handled by setting a scale factor s and a number of pyramid layers n for the frame image and scaling the frame image down by the factor s to obtain an image pyramid I_0, I_1, ..., I_{n-1}, where I_0 denotes the original frame image, i is the pyramid index, i = 0, 1, ..., n-1, and I_i is the frame image reduced by the factor s^i; the union of the feature points extracted from the n images is taken as the FAST feature points of the frame image;
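A small sketch of this sub-step, assuming OpenCV is available: histogram equalization, an n-level pyramid built with scale factor s, FAST detection on every level, and the detections pooled in original-image coordinates. The default values of s, n and the FAST threshold below are assumptions for illustration, not values fixed by the patent.

```python
import cv2
import numpy as np

def pyramid_fast(frame_gray, s=1.2, n=8, fast_thresh=20):
    """Sub-step (1.1) sketch: equalize, build an n-level pyramid with factor s,
    detect FAST on each level and pool the keypoints in original-image coordinates."""
    img0 = cv2.equalizeHist(frame_gray)
    fast = cv2.FastFeatureDetector_create(threshold=fast_thresh)
    keypoints = []
    for i in range(n):
        scale = 1.0 / (s ** i)                          # level i is the frame reduced by s^i
        level = cv2.resize(img0, None, fx=scale, fy=scale)
        for kp in fast.detect(level, None):
            kp.pt = (kp.pt[0] / scale, kp.pt[1] / scale)  # map back to level-0 coordinates
            kp.octave = i
            keypoints.append(kp)
    return keypoints
```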
(1.2) rotation invariance is handled by computing, from the moments m_pq, the centroid C of the image block of radius r around the FAST feature point; the vector from the coordinates of the FAST feature point to the centroid C represents the direction of the FAST feature point, where the moments of the image block are:

m_pq = Σ_{x,y} x^p · y^q · I(x, y)

wherein I(x, y) is the gray value of the image at (x, y), p is a first index taking the value 0 or 1, and q is a second index taking the value 0 or 1. The centroid is

C = (m_10 / m_00, m_01 / m_00)

and the direction of the FAST feature point is:

θ = arctan(m_01 / m_10)
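The intensity-centroid computation above can be sketched in a few lines of NumPy. The sketch below uses a square patch of side 2r+1 centred on the corner and assumes the patch lies fully inside the image; the circular masking used by ORB proper is omitted for brevity.

```python
import numpy as np

def orientation_by_centroid(img, x, y, r=15):
    """Intensity-centroid orientation for a FAST corner at (x, y), patch radius r.
    m_pq = sum_{u,v} u^p v^q I(u, v); centroid C = (m10/m00, m01/m00);
    orientation theta = atan2(m01, m10)."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    us, vs = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    m00 = patch.sum()
    m10 = (us * patch).sum()
    m01 = (vs * patch).sum()
    cx, cy = m10 / m00, m01 / m00          # centroid relative to the corner
    theta = np.arctan2(m01, m10)           # direction of the corner-to-centroid vector
    return theta, (cx, cy)
```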
(1.3) a quadtree algorithm is used to distribute the FAST feature points uniformly: a quadtree is constructed on the frame image for the FAST feature points obtained in step (1.1); for each child node, if the number of FAST feature points in the node equals 1, the node is not subdivided further, and if it is greater than 1, the quadtree continues to subdivide until every node contains only one feature point or the number of nodes reaches the required number of feature points;
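A compact sketch of this quadtree distribution is given below. Keeping the highest-response keypoint of each final node is an assumption (it mirrors common ORB-SLAM practice) rather than something stated in the text; the function name and signature are illustrative.

```python
def quadtree_distribute(keypoints, width, height, target):
    """Sub-step (1.3) sketch: split the image into quadtree nodes until every node
    holds one keypoint or the node count reaches `target`, then keep the
    highest-response keypoint of each node."""
    nodes = [(0.0, 0.0, float(width), float(height), list(keypoints))]
    while len(nodes) < target and any(len(n[4]) > 1 for n in nodes):
        nodes.sort(key=lambda n: -len(n[4]))              # split the most crowded node first
        x0, y0, x1, y1, pts = nodes.pop(0)
        if len({p.pt for p in pts}) == 1:                 # coincident points cannot be separated
            nodes.append((x0, y0, x1, y1, [pts[0]]))
            continue
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        for cx0, cy0, cx1, cy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                                   (x0, my, mx, y1), (mx, my, x1, y1)):
            child = [p for p in pts if cx0 <= p.pt[0] < cx1 and cy0 <= p.pt[1] < cy1]
            if child:                                     # empty children are discarded
                nodes.append((cx0, cy0, cx1, cy1, child))
    return [max(n[4], key=lambda p: p.response) for n in nodes]
```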
(1.4) a BRIEF binary-string feature descriptor is extracted for each remaining FAST feature point: a pairs of pixel points (p_i, q_i) are randomly selected within the S×S neighborhood around the key point, the point pairs are rotated by the feature point direction θ, and the gray values of the two pixels of each pair are compared; if the gray value of the first pixel is greater than that of the second, the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256; wherein S is the side length of the neighborhood, taken as 31 pixels.
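A sketch of this steered-BRIEF test with a = 256 random pixel pairs in a 31×31 window follows. The random sampling pattern matches the wording of the text; production ORB implementations use a fixed learned pattern instead, so this is only an illustration of the bit test.

```python
import numpy as np

rng = np.random.default_rng(0)
PAIRS = rng.integers(-15, 16, size=(256, 4))      # (x1, y1, x2, y2) offsets in a 31x31 window

def steered_brief(img, x, y, theta):
    """Sub-step (1.4) sketch: rotate the point pairs by the keypoint orientation theta
    and set bit i to 1 when the first pixel of pair i is brighter than the second."""
    c, s = np.cos(theta), np.sin(theta)
    desc = np.zeros(256, dtype=np.uint8)
    h, w = img.shape
    for i, (x1, y1, x2, y2) in enumerate(PAIRS):
        # rotate each offset by theta, then clamp to the image bounds
        u1 = int(round(x + c * x1 - s * y1)); v1 = int(round(y + s * x1 + c * y1))
        u2 = int(round(x + c * x2 - s * y2)); v2 = int(round(y + s * x2 + c * y2))
        u1, v1 = np.clip(u1, 0, w - 1), np.clip(v1, 0, h - 1)
        u2, v2 = np.clip(u2, 0, w - 1), np.clip(v2, 0, h - 1)
        desc[i] = 1 if img[v1, u1] > img[v2, u2] else 0
    return np.packbits(desc)                       # 256 bits -> 32-byte ORB-style descriptor
```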
(2) In order to calculate the motion between frame images, the camera is modeled as undergoing uniform acceleration motion, as shown in Fig. 3, and the positions of the ORB feature points of step (1) are predicted for the fourth frame image based on this motion model. This step comprises the following sub-steps:
(2.1) assuming that the camera moves with uniform acceleration in its degrees of freedom, the poses of the first, second, third and fourth frame images are denoted T_1, T_2, T_3 and T_4, and the velocities associated with the fourth, third and second frame images are denoted V_4, V_3 and V_2. Because the camera is modeled as undergoing uniform acceleration, the velocity V_4 of the fourth frame image can be related to the poses of the first three frame images, and the velocity increment between adjacent frames should be constant, that is:

V_3 ⊖ V_2 = V_4 ⊖ V_3

where ⊖ denotes the increment operation between velocity transforms; expanding this condition on both sides yields an expression for V_4 in terms of V_3 and V_2:

V_4 = V_3 ⊕ (V_3 ⊖ V_2)

where ⊕ denotes composition of velocity transforms;
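The exact formulas of this derivation appear only as images in the original publication. The sketch below therefore realizes one common reading of the uniform-acceleration assumption, namely that the frame-to-frame velocity transform changes by a constant increment, using homogeneous 4×4 matrices; the symbol names and pose convention are assumptions.

```python
import numpy as np

def predict_pose_uniform_acceleration(T1, T2, T3):
    """Sketch of step (2.1): given poses T1, T2, T3 (assumed 4x4 homogeneous matrices),
    define velocities V2 = inv(T1) @ T2 and V3 = inv(T2) @ T3, assume the velocity
    increment A = inv(V2) @ V3 stays constant, so V4 = V3 @ A and the predicted
    fourth-frame pose is T4 = T3 @ V4."""
    V2 = np.linalg.inv(T1) @ T2
    V3 = np.linalg.inv(T2) @ T3
    A = np.linalg.inv(V2) @ V3          # constant velocity increment (uniform acceleration)
    V4 = V3 @ A
    return T3 @ V4
```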
(2.2) from the pose T_1 of the first frame image and the velocity V_4 of the fourth frame image, the pose of the fourth frame image is estimated as T_4'; a given ORB feature point position p_4 of the fourth frame image is then predicted by the projection

w · p_4 = K · (R_4' · X + t_4')

wherein X is the 3D point in the third frame image corresponding to the ORB feature point, (R_4', t_4') is the rotation and translation of the estimated pose T_4', K is the camera intrinsic matrix, and w is a scale factor;
(2.3) step (2.2) is repeated to predict the positions of all ORB feature points on the fourth frame image.
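Sub-steps (2.2)-(2.3) amount to re-projecting the 3D points seen in the third frame with the predicted pose. A minimal sketch, assuming the pose maps world points into the camera frame and that K is the 3×3 intrinsic matrix:

```python
import numpy as np

def predict_feature_positions(T4_pred, K, points_3d):
    """Sketch of sub-steps (2.2)-(2.3): project the 3D points of the third frame's
    ORB features with the predicted pose T4_pred (assumed camera-from-world, 4x4).
    w * [u, v, 1]^T = K (R X + t), so (u, v) = (x/w, y/w) after projection."""
    R, t = T4_pred[:3, :3], T4_pred[:3, 3:]
    cam = R @ points_3d.T + t                 # 3 x N points in the fourth camera frame
    uvw = K @ cam                             # homogeneous pixel coordinates
    return (uvw[:2] / uvw[2]).T               # N x 2 predicted pixel positions
```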
(3) According to the ORB feature point position on the third frame image and the ORB feature point position predicted in the step (2), solving the motion increment of the corresponding ORB feature point on the third frame image and the fourth frame image according to a sparse optical flow method, and obtaining the accurate position of the ORB feature point on the fourth frame image;
(4) carrying out reverse sparse optical flow tracking from the accurate positions of the ORB feature points on the fourth frame image obtained in step (3) and computing the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between a computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, the corresponding ORB feature points in the third and fourth frame images are removed, and only matching point pairs whose Euclidean distance on the third and fourth frame images is less than 1.5 pixels are retained. Fig. 4 illustrates how steps 3 and 4 remove outliers through forward optical flow tracking and backward optical flow verification.
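Steps (3) and (4) map closely onto OpenCV's pyramidal Lucas-Kanade tracker. The sketch below feeds the motion-model predictions as initial estimates (OPTFLOW_USE_INITIAL_FLOW) and applies the 1.5-pixel forward-backward check; window size, pyramid depth and termination criteria are assumed values.

```python
import cv2
import numpy as np

def track_forward_backward(img3, img4, pts3, pts4_pred, fb_thresh=1.5):
    """Steps (3)-(4) sketch: pyramidal LK refines the predicted positions, then a
    reverse LK pass back to frame 3 rejects matches whose round-trip error
    exceeds fb_thresh pixels."""
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    p3 = pts3.reshape(-1, 1, 2).astype(np.float32)
    p4 = pts4_pred.reshape(-1, 1, 2).astype(np.float32).copy()
    p4, st_f, _ = cv2.calcOpticalFlowPyrLK(img3, img4, p3, p4,
                                           flags=cv2.OPTFLOW_USE_INITIAL_FLOW, **lk)
    p3_back, st_b, _ = cv2.calcOpticalFlowPyrLK(img4, img3, p4, None, **lk)
    err = np.linalg.norm(p3 - p3_back, axis=2).ravel()
    keep = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (err < fb_thresh)
    return p3[keep].reshape(-1, 2), p4[keep].reshape(-1, 2)
```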
(5) Carrying out robust RANSAC estimation on the matching point pairs reserved in the step (4), and removing outliers to leave correct matching point pairs;
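The patent does not name the RANSAC model; a typical choice, shown here purely as an assumption, is the essential matrix with known intrinsics.

```python
import cv2
import numpy as np

def ransac_filter(pts3, pts4, K, thresh=1.0):
    """Step (5) sketch: robust RANSAC on the epipolar constraint (essential matrix
    with known intrinsics K, an assumed model choice) removes remaining outliers."""
    E, mask = cv2.findEssentialMat(pts3, pts4, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=thresh)
    inliers = mask.ravel().astype(bool)
    return E, pts3[inliers], pts4[inliers]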
(6) solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if it is a key frame, ORB feature points and descriptors are extracted, otherwise the next frame image is input and steps (2)-(6) are repeated until the tracking and matching of all image frames in the video is completed. The key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
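A sketch of step (6), assuming the epipolar geometry from the previous step: note that the translation recovered from an essential matrix is only defined up to scale, so the 1 m threshold presumes a metric translation obtained elsewhere (for example from triangulated map points). Function and parameter names are illustrative.

```python
import cv2
import numpy as np

def solve_pose_and_keyframe(E, pts3, pts4, K, t_metric=None, min_inliers=150,
                            trans_thresh=1.0, rot_thresh_deg=5.0):
    """Step (6) sketch: recover the relative 6-DoF pose and flag a new key frame when
    the translation exceeds 1 m, the rotation exceeds 5 degrees, or fewer than 150
    correct matches remain. t_metric (metric translation) must come from elsewhere."""
    n_inliers, R, t_unit, _ = cv2.recoverPose(E, pts3, pts4, K)
    angle_deg = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
    translation = np.linalg.norm(t_metric) if t_metric is not None else 0.0
    is_keyframe = (translation > trans_thresh or angle_deg > rot_thresh_deg
                   or n_inliers < min_inliers)
    return R, t_unit, is_keyframe
```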
Both the proposed method and the traditional method were used for image feature point tracking and matching on 8 public sequences of the TUM dataset, and the algorithms were run on a desktop computer with an Intel Core i7-8700@3.20GHz. The real-time localization and mapping accuracy on each sequence was recorded, and the accuracy and running speed are compared in Table 1. The comparison shows that the proposed tracking and matching method greatly reduces running time while maintaining the accuracy of the algorithm, with a speed-up of about 2 times.
Table 1: comparison between the tracking and matching method of the invention and the traditional method (the table is provided as an image in the original publication)
The above are merely preferred embodiments of the present invention, and the scope of the invention is not limited thereto. Any equivalent substitutions or modifications made by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (4)

1. An image feature point tracking matching method based on an ORB-SLAM algorithm is characterized in that: the method comprises the following steps:
(1) dividing the video into frame images, and for each of the first three frame images, extracting ORB feature points and descriptors and distributing the feature points uniformly with a quadtree;
(2) for the fourth frame image, predicting the positions of the ORB feature points of step (1) based on a uniform acceleration motion model;
(3) from the ORB feature point positions on the third frame image and the positions predicted in step (2), solving the motion increment of the corresponding ORB feature points between the third and fourth frame images with a sparse optical flow method, obtaining the accurate positions of the ORB feature points on the fourth frame image;
(4) performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and computing the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between a computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, the corresponding ORB feature points in the third and fourth frame images are removed, and only matching point pairs whose Euclidean distance is less than 1.5 pixels are retained;
(5) performing robust RANSAC estimation on the matching point pairs retained in step (4) and removing outliers, leaving the correct matching point pairs;
(6) solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if it is a key frame, extracting ORB feature points and descriptors, otherwise inputting the next frame image and repeating steps (2)-(6) until the tracking and matching of all image frames in the video is completed.
2. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (1) comprises the sub-steps of:
(1.1) performing histogram equalization on the frame image, then setting a scale factor s and a number of pyramid layers n for the frame image, and scaling the frame image down by the factor s to obtain an image pyramid I_0, I_1, ..., I_{n-1}, where I_0 denotes the original frame image, i is the pyramid index, i = 0, 1, ..., n-1, and I_i is the frame image reduced by the factor s^i; the union of the feature points extracted from the n images is taken as the FAST feature points of the frame image;
(1.2) computing, from the moments m_pq, the centroid of the image block of radius r around each FAST feature point; the vector from the FAST feature point coordinates to the centroid represents the direction of the FAST feature point, where the moments of the image block are:

m_pq = Σ_{x,y} x^p · y^q · I(x, y)

wherein I(x, y) is the gray value of the image at (x, y), p is a first index taking the value 0 or 1, and q is a second index taking the value 0 or 1;
(1.3) constructing a quadtree on the frame image for the FAST feature points obtained in step (1.1): for each child node, if the number of FAST feature points in the node equals 1, the node is not subdivided further; if it is greater than 1, the quadtree continues to subdivide until every node contains only one feature point or the number of nodes reaches the required number of feature points;
(1.4) extracting a BRIEF binary-string feature descriptor for each remaining FAST feature point: randomly selecting a pairs of pixel points (p_i, q_i) from the S×S neighborhood around the key point, rotating the point pairs by the feature point direction, and comparing the gray values of the two pixels of each pair; if the gray value of the first pixel is greater than that of the second, the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256; wherein S is the side length of the neighborhood, taken as 31 pixels.
3. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (2) comprises the sub-steps of:
(2.1) assuming that the camera moves with uniform acceleration in its degrees of freedom, the poses of the first, second, third and fourth frame images are denoted T_1, T_2, T_3 and T_4, and the velocities associated with the fourth, third and second frame images are denoted V_4, V_3 and V_2, which satisfy the uniform-acceleration condition that the velocity increment between adjacent frames is constant:

V_3 ⊖ V_2 = V_4 ⊖ V_3

where ⊖ denotes the increment operation between velocity transforms;
(2.2) from the pose T_1 of the first frame image and the velocity V_4 of the fourth frame image, estimating the pose of the fourth frame image as T_4'; a given ORB feature point position p_4 of the fourth frame image is then predicted by the projection

w · p_4 = K · (R_4' · X + t_4')

wherein X is the 3D point in the third frame image corresponding to the ORB feature point, (R_4', t_4') is the rotation and translation of the estimated pose T_4', K is the camera intrinsic matrix, and w is a scale factor;
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
4. The ORB-SLAM algorithm-based image feature point tracking and matching method of claim 1, wherein the key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
CN202011418605.4A 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm Active CN112489083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011418605.4A CN112489083B (en) 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011418605.4A CN112489083B (en) 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm

Publications (2)

Publication Number Publication Date
CN112489083A true CN112489083A (en) 2021-03-12
CN112489083B CN112489083B (en) 2022-10-04

Family

ID=74940371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011418605.4A Active CN112489083B (en) 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm

Country Status (1)

Country Link
CN (1) CN112489083B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052750A (en) * 2021-03-31 2021-06-29 广东工业大学 Accelerator and accelerator for task tracking in VSLAM system
CN113103232A (en) * 2021-04-12 2021-07-13 电子科技大学 Intelligent equipment self-adaptive motion control method based on feature distribution matching
CN113284232A (en) * 2021-06-10 2021-08-20 西北工业大学 Optical flow tracking method based on quadtree
CN114372510A (en) * 2021-12-15 2022-04-19 北京工业大学 Interframe matching slam method based on image region segmentation
CN115919461A (en) * 2022-12-12 2023-04-07 之江实验室 SLAM-based surgical navigation method
CN117274620A (en) * 2023-11-23 2023-12-22 东华理工大学南昌校区 Visual SLAM method based on self-adaptive uniform division feature point extraction
CN117893693A (en) * 2024-03-15 2024-04-16 南昌航空大学 Dense SLAM three-dimensional scene reconstruction method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751465A (en) * 2015-03-31 2015-07-01 中国科学技术大学 ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
CN106780557A (en) * 2016-12-23 2017-05-31 南京邮电大学 A kind of motion target tracking method based on optical flow method and crucial point feature
CN109509211A (en) * 2018-09-28 2019-03-22 北京大学 Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology
CN109631855A (en) * 2019-01-25 2019-04-16 西安电子科技大学 High-precision vehicle positioning method based on ORB-SLAM
WO2019205853A1 (en) * 2018-04-27 2019-10-31 腾讯科技(深圳)有限公司 Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751465A (en) * 2015-03-31 2015-07-01 中国科学技术大学 ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
CN106780557A (en) * 2016-12-23 2017-05-31 南京邮电大学 A kind of motion target tracking method based on optical flow method and crucial point feature
WO2019205853A1 (en) * 2018-04-27 2019-10-31 腾讯科技(深圳)有限公司 Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
CN109509211A (en) * 2018-09-28 2019-03-22 北京大学 Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology
CN109631855A (en) * 2019-01-25 2019-04-16 西安电子科技大学 High-precision vehicle positioning method based on ORB-SLAM

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUN Yankui et al.: "Real-time image tracking algorithm with hierarchical and regional management", Journal of Computer-Aided Design & Computer Graphics *
SUN Xincheng et al.: "Image feature extraction and matching based on combined visual and inertial information", Machine Design and Manufacturing Engineering *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052750A (en) * 2021-03-31 2021-06-29 广东工业大学 Accelerator and accelerator for task tracking in VSLAM system
CN113103232A (en) * 2021-04-12 2021-07-13 电子科技大学 Intelligent equipment self-adaptive motion control method based on feature distribution matching
CN113103232B (en) * 2021-04-12 2022-05-20 电子科技大学 Intelligent equipment self-adaptive motion control method based on feature distribution matching
CN113284232A (en) * 2021-06-10 2021-08-20 西北工业大学 Optical flow tracking method based on quadtree
CN114372510A (en) * 2021-12-15 2022-04-19 北京工业大学 Interframe matching slam method based on image region segmentation
CN115919461A (en) * 2022-12-12 2023-04-07 之江实验室 SLAM-based surgical navigation method
CN115919461B (en) * 2022-12-12 2023-08-08 之江实验室 SLAM-based surgical navigation method
CN117274620A (en) * 2023-11-23 2023-12-22 东华理工大学南昌校区 Visual SLAM method based on self-adaptive uniform division feature point extraction
CN117274620B (en) * 2023-11-23 2024-02-06 东华理工大学南昌校区 Visual SLAM method based on self-adaptive uniform division feature point extraction
CN117893693A (en) * 2024-03-15 2024-04-16 南昌航空大学 Dense SLAM three-dimensional scene reconstruction method and device
CN117893693B (en) * 2024-03-15 2024-05-28 南昌航空大学 Dense SLAM three-dimensional scene reconstruction method and device

Also Published As

Publication number Publication date
CN112489083B (en) 2022-10-04

Similar Documents

Publication Publication Date Title
CN112489083B (en) Image feature point tracking matching method based on ORB-SLAM algorithm
CN109387204B (en) Mobile robot synchronous positioning and composition method facing indoor dynamic environment
Ban et al. Monocular visual odometry based on depth and optical flow using deep learning
CN111310631B (en) Target tracking method and system for rotor operation flying robot
CN111797688A (en) Visual SLAM method based on optical flow and semantic segmentation
Chen et al. Using FTOC to track shuttlecock for the badminton robot
CN110210426B (en) Method for estimating hand posture from single color image based on attention mechanism
Xu et al. GraspCNN: Real-time grasp detection using a new oriented diameter circle representation
CN111797692B (en) Depth image gesture estimation method based on semi-supervised learning
CN111709301B (en) Curling ball motion state estimation method
CN108961385B (en) SLAM composition method and device
CN110889901A (en) Large-scene sparse point cloud BA optimization method based on distributed system
Fernandez-Labrador et al. Panoroom: From the sphere to the 3d layout
CN116097307A (en) Image processing method and related equipment
He et al. Stereo RGB and deeper LiDAR-based network for 3D object detection in autonomous driving
CN115018999A (en) Multi-robot-cooperation dense point cloud map construction method and device
CN112861808B (en) Dynamic gesture recognition method, device, computer equipment and readable storage medium
Yu et al. SKGNet: Robotic grasp detection with selective kernel convolution
CN116862984A (en) Space pose estimation method of camera
CN113487713B (en) Point cloud feature extraction method and device and electronic equipment
Li et al. Few-shot meta-learning on point cloud for semantic segmentation
CN115063715A (en) ORB-SLAM3 loop detection acceleration method based on gray level histogram
Li et al. A novel two-pathway encoder-decoder network for 3D face reconstruction
CN114187360A (en) Head pose estimation method based on deep learning and quaternion
CN115115698A (en) Pose estimation method of equipment and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant