CN112489083B - Image feature point tracking matching method based on ORB-SLAM algorithm - Google Patents

Image feature point tracking matching method based on ORB-SLAM algorithm

Info

Publication number
CN112489083B
Authority
CN
China
Prior art keywords
frame image
orb
image
frame
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011418605.4A
Other languages
Chinese (zh)
Other versions
CN112489083A (en)
Inventor
钟心亮
朱世强
顾建军
李特
姜峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202011418605.4A
Publication of CN112489083A
Application granted
Publication of CN112489083B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image feature point tracking and matching method based on the ORB-SLAM algorithm, which comprises the following steps: 1) extracting ORB feature points and descriptors from key frames and distributing the feature points uniformly with a quadtree; 2) predicting the feature point positions in a new frame based on a uniform-acceleration motion model; 3) accurately solving the feature point positions with a multi-layer pyramid sparse optical flow method; 4) performing reverse sparse optical flow tracking and eliminating mismatches; 5) applying robust RANSAC to the feature matches obtained in step 4 to remove outliers; 6) solving the 6D pose from the remaining matching points and judging whether the current frame is a key frame. The method extracts ORB feature points and descriptors only for key frames, so the time-consuming descriptor computation is not needed for tracking between key frames.

Description

Image feature point tracking matching method based on ORB-SLAM algorithm
Technical Field
The invention relates to the technical field of computer vision, in particular to an image feature point tracking and matching method based on an ORB-SLAM algorithm.
Background
In recent years, with the continuous development of computer vision, SLAM technology has been widely used in fields such as virtual reality, augmented reality, robotics, and unmanned aerial vehicles. With the continuous development of computer hardware, real-time processing of visual information has become possible; real-time positioning and map construction using visual information greatly reduces the price of intelligent robot products while increasing the amount of information acquired.
Visual SLAM mainly comprises a front-end visual odometer, back-end optimization and a loop detection module. The front-end visual odometer performs matching and tracking by extracting feature points and descriptors of images and computes the relative pose between frames; back-end optimization refines the poses by combining more historical information; and the loop detection module is mainly used to eliminate accumulated errors and improve the accuracy of positioning and mapping. ORB-SLAM, being stable and rich in interfaces, has gradually been taken as a benchmark visual SLAM system.
However, ORB-SLAM has a relatively high demand on computing resources and is not suitable for low-end processors.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an image feature point tracking and matching method based on the ORB-SLAM algorithm, which increases the computation speed of camera tracking and pose estimation while improving the tracking and matching accuracy of the method.
The purpose of the invention is realized by the following technical scheme: an image feature point tracking matching method based on an ORB-SLAM algorithm comprises the following steps:
(1) Dividing the video into frame images, extracting ORB feature points and descriptors from each of the first three frame images, and distributing the feature points uniformly with a quadtree;
(2) Predicting, for the fourth frame image, the positions of the ORB feature points from step (1) based on uniform acceleration motion;
(3) According to the ORB feature point position on the third frame image and the ORB feature point position predicted in the step (2), solving the motion increment of the corresponding ORB feature point on the third frame image and the fourth frame image according to a sparse optical flow method, and obtaining the accurate position of the ORB feature point on the fourth frame image;
(4) Performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and calculating the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between a calculated position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, removing the corresponding ORB feature points in the third and fourth frame images, and retaining only the matching point pairs whose corresponding ORB feature points on the third and fourth frame images have a Euclidean distance of less than 1.5 pixels;
(5) Carrying out robust RANSAC estimation on the matching point pairs reserved in the step (4), and removing outliers to leave correct matching point pairs;
(6) Solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if the fourth frame is a key frame, extracting ORB feature points and descriptors for it, otherwise inputting the next frame image and repeating steps (2)-(6) until the tracking and matching of all image frames in the video is completed. A minimal sketch of this overall loop is given after this list.
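As a rough illustration only, and not the patented implementation, the following Python/OpenCV sketch arranges the six steps above into a tracking loop; the motion-model seeding of the optical flow and the key-frame handling are simplified, and parameter values such as the number of features and the RANSAC threshold are assumptions.

```python
import cv2
import numpy as np

def track_video(frames, K):
    """Illustrative tracking loop: ORB only on the key frame, optical flow in between."""
    R, t = np.eye(3), np.zeros((3, 1))
    # Step (1): ORB features on the first frame, treated as the initial key frame.
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(prev_gray, None)
    prev_pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Steps (2)-(3): forward sparse optical flow; a motion-model prediction
        # could be passed in via OPTFLOW_USE_INITIAL_FLOW (omitted here).
        next_pts, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)

        # Step (4): reverse flow check, keep pairs that return within 1.5 px.
        back_pts, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, next_pts, None)
        err = np.linalg.norm(prev_pts - back_pts, axis=2).ravel()
        good = (st.ravel() == 1) & (st_b.ravel() == 1) & (err < 1.5)

        # Step (5): RANSAC on the surviving matches.
        p0, p1 = prev_pts[good], next_pts[good]
        E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)

        # Step (6): recover the relative pose; the key-frame decision is omitted.
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)

        prev_gray, prev_pts = gray, p1
    return R, t
```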
Further, step (1) comprises the following sub-steps:
(1.1) Carrying out histogram equalization processing on the frame image; then setting a scale factor s of the frame image and the number n of pyramid layers, and reducing the frame image into n images according to the scale factor s, the scaled images being denoted I_k, k = 1, 2, …, n, wherein I represents the frame image and I_k is the frame image reduced by the factor s^k; the union of the feature points extracted from the n images is taken as the FAST feature points of the frame image;
(1.2) Calculating, through the moments m_pq, the centroid of the image block of radius r centered at the FAST feature point, a vector formed from the coordinates of the FAST feature point to the centroid representing the direction of the FAST feature point, wherein the moments of the image block are expressed as:
m_pq = Σ_{x,y∈r} x^p y^q I(x,y), p, q ∈ {0, 1},
wherein I(x, y) represents the gray value of the image at (x, y), p represents a first index taking the value 0 or 1, and q represents a second index taking the value 0 or 1;
(1.3) Constructing a quadtree for the frame image over the FAST feature points obtained in step (1.1): for each child node, when the number of FAST feature points in the node equals 1, the node is not divided further; if the number is greater than 1, the quadtree continues to be divided downwards until all nodes contain only one feature point or the number of divided nodes meets the required number of feature points;
(1.4) Extracting a BRIEF binary-string feature descriptor for each remaining FAST feature point: randomly selecting a pairs of pixel points (u_i, v_i), i = 1, 2, …, a, from the S × S neighborhood around the key point, rotating the pixel point pairs, and comparing the gray values of each pixel point pair; if I(u_i) > I(v_i), the corresponding bit of the binary string is 1, otherwise it is 0, finally generating an ORB feature descriptor of length 256; wherein S is the side length of the neighborhood and takes the value of 31 pixels.
Further, the step (2) comprises the following sub-steps:
(2.1) Setting the motion of the camera's degrees of freedom to be uniform acceleration motion, obtaining the poses T_{c-3}, T_{c-2}, T_{c-1}, T_c of the first frame image, the second frame image, the third frame image and the fourth frame image, and expressing the velocities corresponding to the poses of the fourth, third and second frame images as v_c, v_{c-1}, v_{c-2}, which satisfy:
v_c = T_c · T_{c-1}^{-1}, v_{c-1} = T_{c-1} · T_{c-2}^{-1}, v_{c-2} = T_{c-2} · T_{c-3}^{-1};
(2.2) According to the pose of the first frame image and the velocity v_c of the fourth frame image, estimating the pose of the fourth frame image as T̂_c; then predicting a given ORB feature point position p̂ of the fourth frame image as
p̂ = (1/w) · K · (T̂_c · P)_{1:3},
wherein P = [X Y Z 1]^T is the 3D point in the third frame image corresponding to the ORB feature point, (T̂_c · P)_{1:3} denotes the first three components of the transformed point, K is the camera internal reference (intrinsic) matrix, and w is a scale factor;
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
Further, a key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
The invention has the beneficial effects that: because the image sequence runs at 30 Hz and the motion between two adjacent frames is small, the invention extracts ORB feature points and descriptors only for key frames, and between key frames it accurately tracks the ORB feature point positions through forward and reverse optical flow based on a uniform-acceleration motion model prediction.
Drawings
FIG. 1 is a flow chart of the image feature point tracking and matching method based on ORB-SLAM algorithm of the present invention;
FIG. 2 is a flow chart of the present invention for feature extraction based on ORB-SLAM algorithm;
FIG. 3 is a schematic diagram of feature point prediction based on a uniform acceleration motion model according to the present invention;
FIG. 4 is a schematic diagram of feature point tracking for forward optical flow tracking and backward optical flow elimination according to the present invention.
Detailed Description
The principles and aspects of the present invention are further explained below with reference to the drawings and embodiments; the details are described herein for the purpose of illustration only and are not intended to limit the invention.
Fig. 1 is a flowchart of an image feature point tracking and matching method based on the ORB-SLAM algorithm, which specifically includes the following steps:
(1) Dividing the video into frame images, extracting ORB feature points and descriptors from each of the first three frame images, and distributing the feature points uniformly with a quadtree; the flow is shown in FIG. 2:
(1.1) Carrying out histogram equalization processing on the frame image, the purpose being to ensure that the frame image is neither too bright nor too dark and that its information is complete, and then extracting FAST feature points. Because FAST feature points have neither scale invariance nor rotation invariance, scale invariance is handled by setting a scale factor s of the frame image and the number n of pyramid layers and reducing the frame image into n images according to the scale factor s, the scaled images being denoted I_k, k = 1, 2, …, n, wherein I represents the frame image and I_k is the frame image reduced by the factor s^k; the union of the feature points extracted from the n images is taken as the FAST feature points of the frame image. A minimal extraction sketch follows this sub-step.
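The following sketch (OpenCV/NumPy) shows one way to build the scale pyramid and collect FAST corners from every level, mapping them back to the coordinates of the original frame; the detection threshold and the default values of s and n are illustrative assumptions, not values taken from the patent.

```python
import cv2

def extract_fast_pyramid(frame_bgr, s=1.2, n=8, fast_threshold=20):
    """Collect FAST corners over an n-level pyramid scaled by the factor s."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                      # histogram equalization
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold)

    keypoints = []
    for k in range(1, n + 1):
        scale = 1.0 / (s ** k)                         # level k is reduced by s^k
        level = cv2.resize(gray, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_LINEAR)
        for kp in fast.detect(level, None):
            # Map the corner back to original-image coordinates.
            keypoints.append(cv2.KeyPoint(kp.pt[0] / scale, kp.pt[1] / scale,
                                          kp.size / scale, kp.angle,
                                          kp.response, k))
    return keypoints
```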
(1.2) In dealing with rotation invariance, the centroid C of the image block of radius r centered at the FAST feature point is calculated through the moments m_pq; the vector formed from the coordinates of the FAST feature point to the centroid C represents the direction of the FAST feature point, wherein the moments of the image block are expressed as:
m_pq = Σ_{x,y∈r} x^p y^q I(x,y), p, q ∈ {0, 1},
wherein I(x, y) represents the gray value of the image at (x, y), p represents the first index taking the value 0 or 1, and q represents the second index taking the value 0 or 1. The centroid is
C = (m_10 / m_00, m_01 / m_00),
and the direction of the FAST feature point is:
θ = arctan2(m_01, m_10).
A minimal sketch of this computation follows.
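A minimal NumPy sketch of this intensity-centroid orientation is given below; the patch radius r = 15 and the assumption that the patch lies fully inside the image are illustrative simplifications introduced here.

```python
import numpy as np

def orientation_by_intensity_centroid(gray, x, y, r=15):
    """Orientation of a FAST corner from the moments of its circular patch.

    Assumes the (2r+1)x(2r+1) patch lies fully inside the image.
    """
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    mask = (xs ** 2 + ys ** 2) <= r * r                  # disc of radius r
    patch = gray[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    m00 = np.sum(patch[mask])
    m10 = np.sum(xs[mask] * patch[mask])                 # p = 1, q = 0 moment
    m01 = np.sum(ys[mask] * patch[mask])                 # p = 0, q = 1 moment
    centroid = (m10 / m00, m01 / m00)                    # relative to the corner
    theta = np.arctan2(m01, m10)                         # corner direction
    return centroid, theta
```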
(1.3) A quadtree algorithm is used to distribute the FAST feature points uniformly: a quadtree is constructed for the frame image over the FAST feature points obtained in step (1.1); for each child node, when the number of FAST feature points in the node equals 1, the node is not divided further, and if the number is greater than 1, the quadtree continues to be divided downwards until all nodes contain only one feature point or the number of divided nodes meets the required number of feature points; a minimal sketch of this splitting follows;
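The following pure-Python sketch illustrates the splitting policy described above; the node representation and the choice of keeping the first point of each leaf (ORB-SLAM keeps the strongest response) are assumptions made for this illustration.

```python
def quadtree_distribute(points, bbox, max_nodes):
    """Split the region until each leaf holds one point or max_nodes is reached.

    points: list of (x, y), assumed distinct; bbox: (x0, y0, x1, y1).
    Returns one representative point per leaf node.
    """
    leaves = [(bbox, points)]
    while True:
        # Stop when every leaf holds a single point or enough leaves exist.
        splittable = [i for i, (_, pts) in enumerate(leaves) if len(pts) > 1]
        if not splittable or len(leaves) >= max_nodes:
            break
        i = splittable[0]
        (x0, y0, x1, y1), pts = leaves.pop(i)
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        children = {
            (x0, y0, cx, cy): [], (cx, y0, x1, cy): [],
            (x0, cy, cx, y1): [], (cx, cy, x1, y1): [],
        }
        for (px, py) in pts:
            key = (x0 if px < cx else cx, y0 if py < cy else cy,
                   cx if px < cx else x1, cy if py < cy else y1)
            children[key].append((px, py))
        leaves.extend((b, p) for b, p in children.items() if p)
    # Keep one feature point per leaf.
    return [pts[0] for _, pts in leaves]
```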
(1.4) A BRIEF binary-string feature descriptor is extracted for each remaining FAST feature point: a pairs of pixel points (u_i, v_i), i = 1, 2, …, a, are randomly selected within the S × S neighborhood around the key point, the point pairs are rotated by the direction angle θ of the feature point, and the gray values of each pixel point pair are compared; if I(u_i) > I(v_i), the corresponding bit of the binary string is 1, otherwise it is 0, finally generating an ORB feature descriptor of length 256; wherein S is the side length of the neighborhood and takes the value of 31 pixels. A minimal sketch of this descriptor test follows.
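The sketch below shows such a rotated (steered) BRIEF test; the fixed random sampling pattern, a = 256 test pairs, and the function name rotated_brief are assumptions consistent with the 256-bit descriptor described above, and the key point is assumed to lie far enough from the image border.

```python
import numpy as np

RNG = np.random.default_rng(0)
S = 31                                     # neighborhood side length in pixels
A = 256                                    # number of point pairs = descriptor length
PATTERN = RNG.integers(-(S // 2), S // 2 + 1, size=(A, 2, 2))  # (u_i, v_i) offsets

def rotated_brief(gray, x, y, theta):
    """256-bit steered BRIEF descriptor around the key point (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    bits = np.zeros(A, dtype=np.uint8)
    for i, (u, v) in enumerate(PATTERN):
        ur = rot @ u                        # rotate each test pair by theta
        vr = rot @ v
        iu = gray[int(round(y + ur[1])), int(round(x + ur[0]))]
        iv = gray[int(round(y + vr[1])), int(round(x + vr[0]))]
        bits[i] = 1 if iu > iv else 0       # bit is 1 when I(u_i) > I(v_i)
    return bits
```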
(2) In order to calculate the motion between the frame images, the camera is modeled as undergoing uniform acceleration motion, as shown in FIG. 3, and the positions of the ORB feature points from step (1) are predicted for the fourth frame image based on the uniform acceleration motion; this comprises the following sub-steps:
(2.1) Setting the motion of the camera's degrees of freedom to be uniform acceleration motion, the poses of the first, second, third and fourth frame images are denoted T_{c-3}, T_{c-2}, T_{c-1}, T_c, and the velocities corresponding to the poses of the fourth, third and second frame images are expressed as v_c, v_{c-1}, v_{c-2}; then:
v_c = T_c · T_{c-1}^{-1}, v_{c-1} = T_{c-1} · T_{c-2}^{-1}, v_{c-2} = T_{c-2} · T_{c-3}^{-1}.
Since the camera is modeled as uniformly accelerated motion, the velocity v_c of the fourth frame image can be related to the poses of the first three frame images, and the velocity increment between two consecutive frame images should be consistent, so that:
v_c Δ v_{c-1} = v_{c-1} Δ v_{c-2},
wherein Δ expresses the incremental operation of the velocity vector; expanding the left and right sides gives v_c · v_{c-1}^{-1} and v_{c-1} · v_{c-2}^{-1}, and thus:
v_c = v_{c-1} · v_{c-2}^{-1} · v_{c-1}.
A sketch of this velocity and pose prediction follows.
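A NumPy sketch of this prediction under the relations above is given below; it assumes 4x4 homogeneous pose matrices and the multiplicative form of the velocity increment, and the function name predict_fourth_pose is a placeholder introduced here.

```python
import numpy as np

def predict_fourth_pose(T1, T2, T3):
    """Predict the fourth-frame velocity and pose from poses T1..T3
    under a uniform-acceleration model (4x4 homogeneous matrices)."""
    v_prev2 = T2 @ np.linalg.inv(T1)        # velocity between frames 1 and 2
    v_prev1 = T3 @ np.linalg.inv(T2)        # velocity between frames 2 and 3
    dv = v_prev1 @ np.linalg.inv(v_prev2)   # constant velocity increment
    v_c = dv @ v_prev1                      # predicted velocity of frame 4
    T4_pred = v_c @ T3                      # predicted pose of frame 4
    return v_c, T4_pred
```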
(2.2) According to the pose of the first frame image and the velocity v_c of the fourth frame image, the pose of the fourth frame image is estimated as T̂_c; a given ORB feature point position p̂ of the fourth frame image is then predicted as
p̂ = (1/w) · K · (T̂_c · P)_{1:3},
wherein P = [X Y Z 1]^T is the 3D point in the third frame image corresponding to the ORB feature point, (T̂_c · P)_{1:3} denotes the first three components of the transformed point, K is the camera internal reference (intrinsic) matrix, and w is a scale factor. A sketch of this per-point prediction is given after sub-step (2.3);
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
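A short sketch of the per-point prediction in sub-steps (2.2)-(2.3) follows; it assumes K is a 3x3 intrinsic matrix and that the 3D points are supplied as homogeneous rows, conventions chosen here only for illustration.

```python
import numpy as np

def predict_feature_positions(T4_pred, K, points_3d_h):
    """Project 3D points (Nx4 homogeneous) into the predicted fourth frame."""
    cam = (T4_pred @ points_3d_h.T)[:3]      # transform into camera coordinates
    uvw = K @ cam                            # apply the intrinsic matrix
    w = uvw[2]                               # scale factor of each projection
    return (uvw[:2] / w).T                   # Nx2 predicted pixel positions
```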
(3) According to the ORB feature point position on the third frame image and the ORB feature point position predicted in the step (2), solving the motion increment of the corresponding ORB feature point on the third frame image and the fourth frame image according to a sparse optical flow method, and obtaining the accurate position of the ORB feature point on the fourth frame image;
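Step (3) can be realized with the pyramidal Lucas-Kanade tracker in OpenCV, seeded with the positions predicted in step (2); the window size and pyramid depth below are illustrative assumptions.

```python
import cv2
import numpy as np

def forward_sparse_flow(prev_gray, cur_gray, prev_pts, predicted_pts):
    """Refine predicted positions with multi-level sparse optical flow."""
    p0 = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    guess = predicted_pts.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, p0, guess,
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)   # start from the predicted positions
    return next_pts, status
```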
(4) Performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and calculating the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between a calculated position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, the corresponding ORB feature points in the third and fourth frame images are removed, and only the matching point pairs whose corresponding ORB feature points on the third and fourth frame images have a Euclidean distance of less than 1.5 pixels are retained. A schematic diagram of the forward optical flow tracking of step (3) and the reverse optical flow verification and outlier elimination of step (4) is shown in FIG. 4.
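A sketch of this forward-backward consistency check follows; the 1.5-pixel threshold comes from the text above, while the optical-flow parameters are assumptions.

```python
import cv2
import numpy as np

def backward_check(prev_gray, cur_gray, prev_pts, next_pts, max_err=1.5):
    """Keep matches whose reverse flow returns within max_err pixels."""
    back_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        cur_gray, prev_gray, next_pts, None, winSize=(21, 21), maxLevel=3)
    dist = np.linalg.norm(prev_pts.reshape(-1, 2) - back_pts.reshape(-1, 2), axis=1)
    keep = (status.ravel() == 1) & (dist < max_err)
    return prev_pts[keep], next_pts[keep], keep
```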
(5) Carrying out robust RANSAC estimation on the matching point pairs reserved in the step (4), and removing outliers to leave correct matching point pairs;
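One way to realize the robust RANSAC estimation of step (5) is essential-matrix RANSAC as offered by OpenCV; the choice of geometric model and the threshold here are assumptions for illustration, not a statement of the patent's exact formulation.

```python
import cv2
import numpy as np

def ransac_filter(p_prev, p_cur, K, thresh_px=1.0):
    """Remove outlier matches with essential-matrix RANSAC."""
    E, inlier_mask = cv2.findEssentialMat(
        p_prev, p_cur, K, method=cv2.RANSAC, prob=0.999, threshold=thresh_px)
    inliers = inlier_mask.ravel().astype(bool)
    return p_prev[inliers], p_cur[inliers], E
```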
(6) Solving the 6D pose according to the remaining matching point pairs and judging whether the fourth frame is a key frame; if the fourth frame is a key frame, ORB feature points and descriptors are extracted for it, otherwise the next frame image is input and steps (2)-(6) are repeated until the tracking and matching of all image frames in the video is completed. The key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
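A sketch of step (6) using PnP with RANSAC for the 6D pose, followed by the key-frame test described above; the map-point bookkeeping is omitted and the reprojection threshold is an assumption.

```python
import cv2
import numpy as np

def solve_pose_and_keyframe(points_3d, points_2d, K, n_inliers):
    """6D pose from 3D-2D matches plus the key-frame decision."""
    ok, rvec, tvec, inl = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K, None, reprojectionError=2.0)
    translation = float(np.linalg.norm(tvec))            # motion w.r.t. the 3D-point frame
    rotation_deg = float(np.degrees(np.linalg.norm(rvec)))
    # Key-frame test mirroring the conditions stated in the text above.
    is_keyframe = translation > 1.0 and rotation_deg > 5.0 and n_inliers < 150
    return rvec, tvec, is_keyframe
```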
Both the method of the invention and the traditional method were used to track and match image feature points on 8 public sequences of the TUM data set. The algorithms were run on a desktop computer with an Intel Core i7-8700@3.20GHz, and the real-time positioning and map construction accuracy of each sequence was recorded. The accuracy and running speed of the algorithms are compared in Table 1: the tracking and matching method of the invention greatly shortens the running time while preserving the accuracy of the algorithm, with roughly a 2x speed-up.
Table 1: comparison of the effect of the tracking and matching method of the invention with the traditional method
The above are merely preferred embodiments of the present invention, and the scope of the invention is not limited thereto. Any equivalent substitutions or modifications made by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (4)

1. An image feature point tracking matching method based on an ORB-SLAM algorithm is characterized in that: the method comprises the following steps:
(1) Dividing the video into frame images, respectively extracting ORB feature descriptors from the first three frame images, performing uniform distribution processing on the quadtree,
(2) Predicting the position of ORB characteristic points in the step (1) based on uniform accelerated motion for the fourth frame image,
(3) According to the ORB feature point position on the third frame image and the ORB feature point position predicted in the step (2), solving the motion increment of the corresponding ORB feature point on the third frame image and the fourth frame image according to a sparse optical flow method, and obtaining the accurate position of the ORB feature point on the fourth frame image;
(4) Performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and calculating the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between a calculated position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, removing the corresponding ORB feature points in the third frame image and the fourth frame image, and retaining the matching point pairs whose corresponding ORB feature points on the third frame image and the fourth frame image have a Euclidean distance of less than 1.5 pixels;
(5) Carrying out robust RANSAC estimation on the matching point pairs reserved in the step (4), and removing outliers to leave correct matching point pairs;
(6) Solving the 6D pose according to the remaining matching point pairs and judging whether the fourth frame is a key frame; if the fourth frame is the key frame, extracting ORB feature points and descriptors, otherwise inputting the next frame image and repeating steps (2)-(6) until the tracking and matching of all image frames in the video is completed.
2. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (1) comprises the sub-steps of:
(1.1) carrying out histogram equalization processing on the frame image, then setting a scale factor s of the frame image and the number n of pyramid layers, and reducing the frame image into n images according to the scale factor s, the scaled images being denoted I_k, wherein I represents a frame image, k is an index over n taking the values 1, 2, …, n, and the sum of the feature points extracted from the n images is used as the FAST feature points of the frame image;
(1.2) calculating, through the moments m_pq, the centroid of the image block of radius r centered at the FAST feature point, a vector formed from the coordinates of the FAST feature point to the centroid representing the direction of the FAST feature point, wherein the moments of the image block are expressed as:
m_pq = Σ_{x,y∈r} x^p y^q I(x,y), p, q ∈ {0, 1},
wherein I(x, y) represents the gray value of the image at (x, y), p represents a first index taking the value 0 or 1, and q represents a second index taking the value 0 or 1;
(1.3) constructing a quadtree for the frame image over the FAST feature points obtained in step (1.1): for each child node, when the number of FAST feature points in the node equals 1, the node is not divided further; if the number is greater than 1, the quadtree continues to be divided downwards until all nodes contain only one feature point or the number of divided nodes meets the required number of feature points;
(1.4) extracting a BRIEF binary-string feature descriptor for each remaining FAST feature point: randomly selecting a pairs of pixel points (u_i, v_i), i = 1, 2, …, a, from the S × S neighborhood around the remaining FAST feature point, rotating the pixel point pairs (u_i, v_i), and comparing the gray values of each pixel point pair; if I(u_i) > I(v_i), the bit for u_i in the binary string is 1, otherwise it is 0, finally generating an ORB descriptor of length 256; wherein S is the side length of the neighborhood and takes the value of 31 pixels.
3. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (2) comprises the sub-steps of:
(2.1) setting the motion of the camera's degrees of freedom to be uniform acceleration motion, obtaining the poses T_{c-3}, T_{c-2}, T_{c-1}, T_c of the first frame image, the second frame image, the third frame image and the fourth frame image, and expressing the velocities of the corresponding poses of the fourth frame image, the third frame image and the second frame image as v_c, v_{c-1}, v_{c-2}, which satisfy:
v_c = T_c · T_{c-1}^{-1}, v_{c-1} = T_{c-1} · T_{c-2}^{-1}, v_{c-2} = T_{c-2} · T_{c-3}^{-1};
(2.2) based on the pose of the first frame image and the velocity v_c of the fourth frame image, estimating the pose of the fourth frame image as T̂_c, and then predicting an ORB feature point position p̂ of the fourth frame image as
p̂ = (1/w) · K · (T̂_c · P)_{1:3},
wherein P = [X Y Z 1]^T is the 3D point in the third frame image corresponding to the ORB feature point, (T̂_c · P)_{1:3} denotes the first three components of the transformed point, K is the camera internal reference (intrinsic) matrix, and w is a scale factor;
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
4. The ORB-SLAM algorithm-based image feature point tracking and matching method of claim 1, wherein the key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
CN202011418605.4A 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm Active CN112489083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011418605.4A CN112489083B (en) 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011418605.4A CN112489083B (en) 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm

Publications (2)

Publication Number Publication Date
CN112489083A CN112489083A (en) 2021-03-12
CN112489083B (en) 2022-10-04

Family

ID=74940371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011418605.4A Active CN112489083B (en) 2020-12-07 2020-12-07 Image feature point tracking matching method based on ORB-SLAM algorithm

Country Status (1)

Country Link
CN (1) CN112489083B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052750A (en) * 2021-03-31 2021-06-29 广东工业大学 Accelerator and accelerator for task tracking in VSLAM system
CN113103232B (en) * 2021-04-12 2022-05-20 电子科技大学 Intelligent equipment self-adaptive motion control method based on feature distribution matching
CN113284232B (en) * 2021-06-10 2023-05-26 西北工业大学 Optical flow tracking method based on quadtree
CN114372510A (en) * 2021-12-15 2022-04-19 北京工业大学 Interframe matching slam method based on image region segmentation
CN114280323A (en) * 2021-12-24 2022-04-05 凌云光技术股份有限公司 Measuring equipment, system and method for vector velocity of railway vehicle
CN115919461B (en) * 2022-12-12 2023-08-08 之江实验室 SLAM-based surgical navigation method
CN117274620B (en) * 2023-11-23 2024-02-06 东华理工大学南昌校区 Visual SLAM method based on self-adaptive uniform division feature point extraction
CN117893693B (en) * 2024-03-15 2024-05-28 南昌航空大学 Dense SLAM three-dimensional scene reconstruction method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751465A (en) * 2015-03-31 2015-07-01 中国科学技术大学 ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
CN106780557B (en) * 2016-12-23 2020-06-09 南京邮电大学 Moving object tracking method based on optical flow method and key point features
CN108615247B (en) * 2018-04-27 2021-09-14 深圳市腾讯计算机系统有限公司 Method, device and equipment for relocating camera attitude tracking process and storage medium
CN109509211B (en) * 2018-09-28 2021-11-16 北京大学 Feature point extraction and matching method and system in simultaneous positioning and mapping technology
CN109631855B (en) * 2019-01-25 2020-12-08 西安电子科技大学 ORB-SLAM-based high-precision vehicle positioning method

Also Published As

Publication number Publication date
CN112489083A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
CN112489083B (en) Image feature point tracking matching method based on ORB-SLAM algorithm
CN110222580B (en) Human hand three-dimensional attitude estimation method and device based on three-dimensional point cloud
CN111797688A (en) Visual SLAM method based on optical flow and semantic segmentation
CN111310631B (en) Target tracking method and system for rotor operation flying robot
Tu et al. Consistent 3d hand reconstruction in video via self-supervised learning
CN108597009A (en) A method of objective detection is carried out based on direction angle information
Chen et al. Using FTOC to track shuttlecock for the badminton robot
CN108961385B (en) SLAM composition method and device
CN111797692B (en) Depth image gesture estimation method based on semi-supervised learning
CN111709301A (en) Method for estimating motion state of curling ball
CN112258557B (en) Visual tracking method based on space attention feature aggregation
Fernandez-Labrador et al. Panoroom: From the sphere to the 3d layout
CN115115698A (en) Pose estimation method of equipment and related equipment
CN115018999A (en) Multi-robot-cooperation dense point cloud map construction method and device
CN116097307A (en) Image processing method and related equipment
Kang et al. Yolo-6d+: single shot 6d pose estimation using privileged silhouette information
CN113793472B (en) Image type fire detector pose estimation method based on feature depth aggregation network
CN106023256B (en) State observation method towards augmented reality auxiliary maintaining System planes intended particle filter tracking
Li et al. Few-shot meta-learning on point cloud for semantic segmentation
CN114187360B (en) Head pose estimation method based on deep learning and quaternion
CN113487713B (en) Point cloud feature extraction method and device and electronic equipment
CN115063715A (en) ORB-SLAM3 loop detection acceleration method based on gray level histogram
Song et al. Self-supervised learning of visual odometry
CN112344936A (en) Semantic SLAM-based mobile robot automatic navigation and target recognition algorithm
Pei et al. Loop closure in 2d lidar and rgb-d slam

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant