CN112489083A - Image feature point tracking matching method based on ORB-SLAM algorithm
- Publication number: CN112489083A (application CN202011418605.4A)
- Authority: CN (China)
- Prior art keywords: frame image, orb, feature point, frame, image
- Prior art date: 2020-12-07
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity

(All under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis.)
Abstract
The invention discloses an image feature point tracking and matching method based on the ORB-SLAM algorithm, comprising the following steps: 1) extract ORB feature points and descriptors from key frames and distribute the feature points uniformly with a quadtree; 2) for a new frame, predict the feature point positions from a uniform acceleration motion model; 3) refine the feature point positions with a multi-layer pyramid sparse optical flow method; 4) perform reverse sparse optical flow tracking and eliminate erroneous matches; 5) apply robust RANSAC to the feature matches obtained in step 4 to remove outliers; 6) solve the 6D pose from the remaining matching points and judge whether the current frame is a key frame. The method only needs to extract ORB feature points and descriptors for key frames; tracking between key frames requires no computation of the time-consuming descriptors.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to an image feature point tracking and matching method based on an ORB-SLAM algorithm.
Background
In recent years, with the continuous development of computer vision, SLAM technology has been widely used in various fields such as virtual reality, augmented reality, robotics, and unmanned aerial vehicles. With the continuous development of computer hardware, real-time processing of visual information has become possible; real-time localization and map construction from visual information greatly reduces the price of intelligent robot products while increasing the amount of information acquired.
Visual SLAM mainly comprises a front-end visual odometry module, a back-end optimization module, and a loop detection module. The front-end visual odometry performs matching and tracking by extracting feature points and descriptors from images and computes the relative pose between frames; the back-end optimization refines the poses by combining more historical information; and the loop detection module mainly eliminates accumulated error and improves the accuracy of localization and mapping. ORB-SLAM, being stable and rich in interfaces, has gradually been adopted as the benchmark visual SLAM system.
However, ORB-SLAM has a relatively high demand on computing resources and is not suitable for low-end processors.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an image feature point tracking and matching method based on the ORB-SLAM algorithm, which increases the computation speed of camera tracking and pose estimation while improving tracking and matching accuracy.
The purpose of the invention is realized by the following technical scheme: an image feature point tracking matching method based on an ORB-SLAM algorithm comprises the following steps:
(1) dividing the video into frame images, and for each of the first three frame images extracting ORB feature descriptors and distributing the feature points uniformly with a quadtree;
(2) for the fourth frame image, predicting the positions of the ORB feature points from step (1) based on uniform acceleration motion;
(3) according to the ORB feature point position on the third frame image and the ORB feature point position predicted in the step (2), solving the motion increment of the corresponding ORB feature point on the third frame image and the fourth frame image according to a sparse optical flow method, and obtaining the accurate position of the ORB feature point on the fourth frame image;
(4) performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and computing the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between the computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, removing the corresponding ORB feature points in the third and fourth frame images, and keeping the matching point pairs whose Euclidean distance is less than 1.5 pixels;
(5) carrying out robust RANSAC estimation on the matching point pairs reserved in the step (4), and removing outliers to leave correct matching point pairs;
(6) solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if it is a key frame, extracting ORB feature points and descriptors, otherwise inputting the next frame image and repeating steps (2)-(6) until the tracking and matching of all image frames in the video is completed.
Further, step (1) comprises the following sub-steps:
(1.1) carrying out histogram equalization processing on the frame image, then setting a scale factor s of the frame image and the number of pyramid layers n, and reducing the frame image according to the scale factor s into n images, obtaining the scaled images $I_i$ (the layer-$i$ image, downscaled by a factor $s^i$), where $I_0$ represents the original frame image and $i = 0, 1, \ldots, n-1$ is the layer index; the union of the feature points extracted from the n scaled images is taken as the FAST feature points of the frame image;
(1.2) calculating, by means of the moments $m_{pq}$, the centroid of the image patch of radius r centered on the FAST feature point; the vector from the FAST feature point to the centroid represents the direction of the FAST feature point, where the moments of the image patch are expressed as $m_{pq} = \sum_{x,y} x^p y^q I(x,y)$, summed over the patch, in which $I(x,y)$ represents the gray value of the image at $(x,y)$, p represents the first index and takes the value 0 or 1, and q represents the second index and takes the value 0 or 1; the centroid is $C = (m_{10}/m_{00},\, m_{01}/m_{00})$ (a numpy sketch of this computation follows sub-step (1.4));
(1.3) constructing a quadtree over the frame image for the FAST feature points obtained in step (1.1): for each child node, when the number of FAST feature points in the node equals 1, the node is not split further; if it is greater than 1, the quadtree continues to be split downwards until every node contains only one feature point or the number of nodes meets the required number of feature points;
(1.4) extracting BRIEF binary-string feature descriptors for the remaining FAST feature points: randomly selecting 256 pixel point pairs $(p, q)$ from the neighborhood around the keypoint, rotating the point pairs by the feature point direction, and comparing the gray values of each pixel point pair; if $I(p) > I(q)$, the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256; where S is the side length of the neighborhood, taken as 31 pixels.
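As a minimal illustrative sketch of the intensity-centroid direction in sub-step (1.2) (not the patented implementation), assuming a square gray-level patch of odd side length 2r+1 centered on the feature point:

```python
import numpy as np

def fast_orientation(patch):
    """Direction of a FAST feature point from the intensity centroid of its
    patch, per sub-step (1.2): m_pq = sum over the patch of x^p y^q I(x,y).
    `patch` is a square gray-level array of odd side length 2r+1 whose
    centre pixel is the feature point."""
    h, w = patch.shape
    ys, xs = np.mgrid[-(h // 2):h // 2 + 1, -(w // 2):w // 2 + 1]
    m00 = float(patch.sum())
    m10 = float((xs * patch).sum())
    m01 = float((ys * patch).sum())
    cx, cy = m10 / m00, m01 / m00   # centroid C = (m10/m00, m01/m00)
    return np.arctan2(cy, cx)       # angle of the vector from keypoint to C
```

Using atan2 rather than a plain arctangent keeps the direction unambiguous over the full circle, which matters when the rotated BRIEF point pairs of sub-step (1.4) are aligned to this angle.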
Further, the step (2) comprises the following sub-steps:
(2.1) setting the motion of the camera's degrees of freedom as uniform acceleration motion, the poses of the first, second, third and fourth frame images are denoted $T_1, T_2, T_3, T_4$, and the velocities associated with the poses of the fourth, third and second frame images are denoted $v_4, v_3, v_2$, satisfying $v_k = T_k T_{k-1}^{-1}$;
(2.2) from the pose $T_3$ of the third frame image and the velocity $v_4$ of the fourth frame image, estimating the pose of the fourth frame image as $T_4 = v_4 T_3$, then predicting the position $p_4$ of an ORB feature point in the fourth frame image as $w\,p_4 = K T_4 P$, where $P$ is the 3D point corresponding to the ORB feature point in the third frame image, K is the camera intrinsic matrix, and w is a scale factor;
(2.3) repeating step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
Further, the key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and fewer than 150 correct matching point pairs remain from step (5).
The invention has the beneficial effects that: because the image sequence arrives at 30 Hz and the motion between two adjacent frames is small, the invention extracts ORB feature points and descriptors only for key frames; between key frames, feature point positions are predicted with a uniform acceleration motion model and accurately tracked through forward and backward optical flow.
Drawings
FIG. 1 is a flowchart of an image feature point tracking matching method based on ORB-SLAM algorithm according to the present invention;
FIG. 2 is a flow chart of the present invention for feature extraction based on ORB-SLAM algorithm;
FIG. 3 is a schematic diagram of feature point prediction based on a uniform acceleration motion model according to the present invention;
FIG. 4 is a schematic diagram of feature point tracking for forward optical flow tracking and backward optical flow elimination according to the present invention.
Detailed Description
The principles and aspects of the present invention will be further explained with reference to the drawings and the embodiments; the detailed description herein is for the purpose of illustration only and is not intended to limit the invention.
Fig. 1 is a flowchart of an image feature point tracking and matching method based on the ORB-SLAM algorithm, which specifically includes the following steps:
(1) dividing the video into frame images, and for each of the first three frame images extracting ORB feature descriptors and distributing the feature points uniformly with a quadtree; the flow is shown in FIG. 2, and a code sketch follows sub-step (1.4):
(1.1) carrying out histogram equalization processing on the frame image so that it is neither too bright nor too dark and its information remains complete, then extracting FAST feature points. Because FAST feature points have neither scale invariance nor rotation invariance, scale invariance is handled by setting a scale factor s of the frame image and the number of pyramid layers n and reducing the frame image according to the scale factor s into n images, obtaining the scaled images $I_i$ (the layer-$i$ image, downscaled by a factor $s^i$), where $I_0$ represents the original frame image and $i = 0, 1, \ldots, n-1$ is the layer index; the union of the feature points extracted from the n scaled images is taken as the FAST feature points of the frame image;
(1.2) rotation invariance is handled by calculating, by means of the moments $m_{pq}$, the centroid $C$ of the image patch of radius r centered on the FAST feature point $O$; the vector $\vec{OC}$ from the FAST feature point coordinates to the centroid represents the direction of the FAST feature point, where the moments of the image patch are expressed as $m_{pq} = \sum_{x,y} x^p y^q I(x,y)$, summed over the patch, in which $I(x,y)$ represents the gray value of the image at $(x,y)$, p represents the first index and takes the value 0 or 1, and q represents the second index and takes the value 0 or 1. The centroid is $C = (m_{10}/m_{00},\, m_{01}/m_{00})$, and the direction of the FAST feature point is $\theta = \arctan(m_{01}/m_{10})$;
(1.3) a quadtree algorithm is used to distribute the FAST feature points uniformly: a quadtree is constructed over the frame image for the FAST feature points obtained in step (1.1); when the number of FAST feature points in a node equals 1, the child node is not split further, and if it is greater than 1, the quadtree continues to be split downwards until every node contains only one feature point or the number of nodes meets the required number of feature points;
(1.4) BRIEF binary-string feature descriptors are extracted for the remaining FAST feature points: 256 pixel point pairs $(p, q)$ are randomly selected within the neighborhood around the keypoint and rotated by the feature point direction $\theta$; the gray values of each pixel point pair are compared, and if $I(p) > I(q)$, the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256; where S is the side length of the neighborhood, taken as 31 pixels.
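As a minimal sketch of sub-steps (1.1)-(1.4) (an approximation, not the patented implementation), OpenCV's ORB detector provides the same scale pyramid, intensity-centroid orientation, and rotated BRIEF; the quadtree redistribution of sub-step (1.3) is specific to ORB-SLAM and is not performed by stock OpenCV, and the file name below is illustrative:

```python
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
frame = cv2.equalizeHist(frame)        # histogram equalization, as in (1.1)

orb = cv2.ORB_create(
    nfeatures=1000,    # target number of FAST feature points
    scaleFactor=1.2,   # scale factor s between pyramid layers, as in (1.1)
    nlevels=8,         # number of pyramid layers n
    patchSize=31,      # side length S of the BRIEF neighborhood, as in (1.4)
)
keypoints, descriptors = orb.detectAndCompute(frame, None)
# Each descriptor row is a 256-bit (32-byte) binary string, as in (1.4);
# keypoint.angle holds the intensity-centroid orientation of (1.2).
```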
(2) In order to calculate the motion between frame images, the camera is modeled as undergoing uniform acceleration motion, as shown in FIG. 3, and the positions of the ORB feature points from step (1) are predicted for the fourth frame image based on uniform acceleration motion through the following sub-steps (a code sketch follows sub-step (2.3)):
(2.1) setting the motion of the camera's degrees of freedom as uniform acceleration motion, the poses of the first, second, third and fourth frame images are denoted $T_1, T_2, T_3, T_4$, and the velocities associated with the poses of the fourth, third and second frame images are denoted $v_4, v_3, v_2$; then $v_k = T_k T_{k-1}^{-1}$.
Because the camera is modeled as undergoing uniform acceleration motion, the velocity $v_4$ of the fourth frame image can be related to the poses of the first three frame images: the velocity increment between consecutive frames should be consistent, i.e. $\Delta v = v_4 v_3^{-1} = v_3 v_2^{-1}$, where $\Delta v$ represents the incremental operation on the velocity. Expanding on the left and right gives $v_4 = v_3 v_2^{-1} v_3$, and since $v_3 = T_3 T_2^{-1}$ and $v_2 = T_2 T_1^{-1}$, there is $v_4 = T_3 T_2^{-1} T_1 T_2^{-1} T_3 T_2^{-1}$.
(2.2) from the pose $T_3$ of the third frame image and the velocity $v_4$ of the fourth frame image, the pose of the fourth frame image is estimated as $T_4 = v_4 T_3$; then the position $p_4$ of an ORB feature point in the fourth frame image is predicted as $w\,p_4 = K T_4 P$, where $P$ is the 3D point corresponding to the ORB feature point in the third frame image, K is the camera intrinsic matrix, and w is a scale factor;
(2.3) repeating step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
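A minimal numpy sketch of sub-steps (2.1)-(2.3) under the reconstruction above, assuming poses are 4x4 homogeneous world-to-camera matrices (a convention the patent does not state) and velocities are the relative motions $v_k = T_k T_{k-1}^{-1}$:

```python
import numpy as np

def predict_pose(T1, T2, T3):
    """Uniform acceleration model: the velocity increment is constant,
    so v4 = (v3 v2^{-1}) v3 and T4 = v4 T3."""
    v2 = T2 @ np.linalg.inv(T1)
    v3 = T3 @ np.linalg.inv(T2)
    dv = v3 @ np.linalg.inv(v2)   # constant increment between velocities
    v4 = dv @ v3
    return v4 @ T3                # predicted pose T4 of the fourth frame

def predict_pixel(T4, K, P):
    """Predict the pixel of a 3D point P (world coordinates): w * p4 = K T4 P."""
    Pc = (T4 @ np.append(P, 1.0))[:3]   # point in fourth-frame camera coordinates
    p = K @ Pc
    return p[:2] / p[2]                 # divide out the scale factor w
```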
(3) According to the ORB feature point position on the third frame image and the ORB feature point position predicted in the step (2), solving the motion increment of the corresponding ORB feature point on the third frame image and the fourth frame image according to a sparse optical flow method, and obtaining the accurate position of the ORB feature point on the fourth frame image;
(4) performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and computing the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between the computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, the corresponding ORB feature points in the third and fourth frame images are removed, and the matching point pairs whose Euclidean distance is less than 1.5 pixels are kept. FIG. 4 is a schematic diagram of steps 3 and 4, in which outliers are removed through forward optical flow tracking and backward optical flow verification.
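Steps (3) and (4) amount to pyramidal Lucas-Kanade tracking run forward and then backward; a hedged OpenCV sketch follows, in which `img3` and `img4` are the grayscale third and fourth frame images, `pts3` holds the ORB feature point positions in the third frame and `pts4_pred` the positions predicted in step (2) (float32 arrays of shape (N, 1, 2)); the window size and pyramid depth are illustrative choices, not values from the patent:

```python
import cv2
import numpy as np

def forward_backward_track(img3, img4, pts3, pts4_pred):
    """Forward LK flow seeded with the predicted positions (step (3)), then
    reverse LK flow and a 1.5-pixel round-trip check (step (4))."""
    lk = dict(winSize=(21, 21), maxLevel=3)          # multi-layer pyramid
    pts4, st_f, _ = cv2.calcOpticalFlowPyrLK(
        img3, img4, pts3, pts4_pred.copy(),
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW, **lk)    # forward flow
    pts3_back, st_b, _ = cv2.calcOpticalFlowPyrLK(
        img4, img3, pts4, None, **lk)                # reverse flow
    err = np.linalg.norm(pts3 - pts3_back, axis=2).ravel()  # round-trip distance
    keep = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (err < 1.5)
    return pts3[keep], pts4[keep]                    # surviving matched pairs
```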
(5) Carrying out robust RANSAC estimation on the matching point pairs reserved in the step (4), and removing outliers to leave correct matching point pairs;
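The patent does not name the geometric model behind the RANSAC estimation; one plausible choice for a calibrated camera is an essential-matrix fit, sketched below with OpenCV, where `K` is the camera intrinsic matrix and the probability and threshold values are illustrative:

```python
import cv2

def ransac_filter(matches3, matches4, K):
    """Robust RANSAC over the surviving matches via an essential-matrix
    model (an assumed choice; the patent only specifies robust RANSAC)."""
    E, mask = cv2.findEssentialMat(matches3, matches4, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    inliers = mask.ravel() == 1
    return E, matches3[inliers], matches4[inliers]
```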
(6) solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if it is a key frame, ORB feature points and descriptors are extracted, otherwise the next frame image is input and steps (2)-(6) are repeated until the tracking and matching of all image frames in the video is completed. The key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and fewer than 150 correct matching point pairs remain from step (5).
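A sketch of the pose solve and key-frame test of step (6). Note that `cv2.recoverPose` returns a unit-length translation for a monocular camera, so the 1 m translation criterion presupposes metrically scaled poses (for example from stereo or RGB-D depth); the conjunction of the three conditions follows the patent's wording:

```python
import cv2
import numpy as np

def solve_pose(E, inliers3, inliers4, K):
    """6D pose (rotation R, translation t) from the inlier matches; for a
    monocular camera t is defined only up to scale."""
    _, R, t, _ = cv2.recoverPose(E, inliers3, inliers4, K)
    return R, t

def is_keyframe(T_cur, T_prev, n_correct):
    """Key-frame test on metrically scaled 4x4 poses."""
    rel = T_cur @ np.linalg.inv(T_prev)
    trans = np.linalg.norm(rel[:3, 3])                          # metres
    cos_a = np.clip((np.trace(rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_a))                        # degrees
    return trans > 1.0 and angle > 5.0 and n_correct < 150
```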
Both the method of the invention and the traditional method were used for image feature point tracking and matching on 8 public sequences of the TUM dataset, with the algorithms run on a desktop computer with an Intel Core i7-8700 @ 3.20 GHz. The real-time localization and map construction accuracy on each sequence was recorded, and accuracy and running speed are compared in Table 1. The comparison shows that the tracking and matching method of the invention greatly reduces running time while maintaining the accuracy of the algorithm, with a speed-up of about 2 times.
Table 1: Comparison of the tracking and matching effect between the method of the invention and the traditional method
The above are merely preferred embodiments of the present invention; the scope of the invention is not limited thereto. Any equivalent substitutions or modifications made by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
Claims (4)
1. An image feature point tracking matching method based on an ORB-SLAM algorithm is characterized in that: the method comprises the following steps:
(1) dividing the video into frame images, and for each of the first three frame images extracting ORB feature descriptors and distributing the feature points uniformly with a quadtree;
(2) for the fourth frame image, predicting the positions of the ORB feature points from step (1) based on uniform acceleration motion;
(3) according to the ORB feature point position on the third frame image and the ORB feature point position predicted in the step (2), solving the motion increment of the corresponding ORB feature point on the third frame image and the fourth frame image according to a sparse optical flow method, and obtaining the accurate position of the ORB feature point on the fourth frame image;
(4) performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and computing the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between the computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, removing the corresponding ORB feature points in the third and fourth frame images, and keeping the matching point pairs whose Euclidean distance is less than 1.5 pixels;
(5) carrying out robust RANSAC estimation on the matching point pairs reserved in the step (4), and removing outliers to leave correct matching point pairs;
(6) solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if it is a key frame, extracting ORB feature points and descriptors, otherwise inputting the next frame image and repeating steps (2)-(6) until the tracking and matching of all image frames in the video is completed.
2. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (1) comprises the sub-steps of:
(1.1) carrying out histogram equalization processing on the frame image, then setting a scale factor s of the frame image and the number of pyramid layers n, and reducing the frame image according to the scale factor s into n images, obtaining the scaled images $I_i$ (the layer-$i$ image, downscaled by a factor $s^i$), where $I_0$ represents the original frame image and $i = 0, 1, \ldots, n-1$ is the layer index; the union of the feature points extracted from the n scaled images is taken as the FAST feature points of the frame image;
(1.2) calculating, by means of the moments $m_{pq}$, the centroid of the image patch of radius r centered on the FAST feature point; the vector from the FAST feature point to the centroid represents the direction of the FAST feature point, where the moments of the image patch are expressed as $m_{pq} = \sum_{x,y} x^p y^q I(x,y)$, summed over the patch, in which $I(x,y)$ represents the gray value of the image at $(x,y)$, p represents the first index and takes the value 0 or 1, and q represents the second index and takes the value 0 or 1; the centroid is $C = (m_{10}/m_{00},\, m_{01}/m_{00})$;
(1.3) constructing a quadtree over the frame image for the FAST feature points obtained in step (1.1): for each child node, when the number of FAST feature points in the node equals 1, the node is not split further; if it is greater than 1, the quadtree continues to be split downwards until every node contains only one feature point or the number of nodes meets the required number of feature points;
(1.4) extracting BRIEF binary-string feature descriptors for the remaining FAST feature points: randomly selecting 256 pixel point pairs $(p, q)$ from the neighborhood around the keypoint, rotating the pixel point pairs by the feature point direction, and comparing the gray values of each pixel point pair; if $I(p) > I(q)$, the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256; where S is the side length of the neighborhood, taken as 31 pixels.
3. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (2) comprises the sub-steps of:
(2.1) setting the motion of the camera's degrees of freedom as uniform acceleration motion, the poses of the first, second, third and fourth frame images are denoted $T_1, T_2, T_3, T_4$, and the velocities associated with the poses of the fourth, third and second frame images are denoted $v_4, v_3, v_2$, satisfying $v_k = T_k T_{k-1}^{-1}$;
(2.2) from the pose $T_3$ of the third frame image and the velocity $v_4$ of the fourth frame image, estimating the pose of the fourth frame image as $T_4 = v_4 T_3$, then predicting the position $p_4$ of an ORB feature point in the fourth frame image as $w\,p_4 = K T_4 P$, where $P$ is the 3D point corresponding to the ORB feature point in the third frame image, K is the camera intrinsic matrix, and w is a scale factor;
(2.3) repeating step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
4. The ORB-SLAM algorithm-based image feature point tracking and matching method of claim 1, wherein the key frame satisfies: the motion translation distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and fewer than 150 correct matching point pairs remain in step (5).
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011418605.4A | 2020-12-07 | 2020-12-07 | Image feature point tracking matching method based on ORB-SLAM algorithm (granted as CN112489083B, status Active)

Publications (2)

Publication Number | Publication Date
---|---
CN112489083A | 2021-03-12
CN112489083B | 2022-10-04

Family ID: 74940371
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751465A (en) * | 2015-03-31 | 2015-07-01 | 中国科学技术大学 | ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint |
CN106780557A (en) * | 2016-12-23 | 2017-05-31 | 南京邮电大学 | A kind of motion target tracking method based on optical flow method and crucial point feature |
CN109509211A (en) * | 2018-09-28 | 2019-03-22 | 北京大学 | Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology |
CN109631855A (en) * | 2019-01-25 | 2019-04-16 | 西安电子科技大学 | High-precision vehicle positioning method based on ORB-SLAM |
WO2019205853A1 (en) * | 2018-04-27 | 2019-10-31 | 腾讯科技(深圳)有限公司 | Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium |
Non-Patent Citations (2)

Title |
---|
孙延奎 et al., "分层分区域管理的实时图像跟踪算法" [A real-time image tracking algorithm with hierarchical, region-based management], 《计算机辅助设计与图形学学报》 [Journal of Computer-Aided Design & Computer Graphics] |
孙新成 et al., "基于视觉与惯性组合信息的图像特征提取与匹配" [Image feature extraction and matching based on combined visual and inertial information], 《机械设计与制造工程》 [Machine Design and Manufacturing Engineering] |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113052750A (en) * | 2021-03-31 | 2021-06-29 | 广东工业大学 | Accelerator and accelerator for task tracking in VSLAM system |
CN113103232A (en) * | 2021-04-12 | 2021-07-13 | 电子科技大学 | Intelligent equipment self-adaptive motion control method based on feature distribution matching |
CN113103232B (en) * | 2021-04-12 | 2022-05-20 | 电子科技大学 | Intelligent equipment self-adaptive motion control method based on feature distribution matching |
CN113284232A (en) * | 2021-06-10 | 2021-08-20 | 西北工业大学 | Optical flow tracking method based on quadtree |
CN114372510A (en) * | 2021-12-15 | 2022-04-19 | 北京工业大学 | Interframe matching slam method based on image region segmentation |
CN115919461A (en) * | 2022-12-12 | 2023-04-07 | 之江实验室 | SLAM-based surgical navigation method |
CN115919461B (en) * | 2022-12-12 | 2023-08-08 | 之江实验室 | SLAM-based surgical navigation method |
CN117274620A (en) * | 2023-11-23 | 2023-12-22 | 东华理工大学南昌校区 | Visual SLAM method based on self-adaptive uniform division feature point extraction |
CN117274620B (en) * | 2023-11-23 | 2024-02-06 | 东华理工大学南昌校区 | Visual SLAM method based on self-adaptive uniform division feature point extraction |
CN117893693A (en) * | 2024-03-15 | 2024-04-16 | 南昌航空大学 | Dense SLAM three-dimensional scene reconstruction method and device |
CN117893693B (en) * | 2024-03-15 | 2024-05-28 | 南昌航空大学 | Dense SLAM three-dimensional scene reconstruction method and device |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant