CN112489083B - Image feature point tracking matching method based on ORB-SLAM algorithm - Google Patents
- Publication number: CN112489083B
- Application number: CN202011418605.4A
- Authority: CN (China)
- Prior art keywords: frame image, ORB, image, frame, point
- Prior art date: 2020-12-07
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
- G06T7/246 — Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/66 — Image analysis; analysis of geometric attributes of image moments or centre of gravity
Abstract
The invention discloses an image feature point tracking and matching method based on the ORB-SLAM algorithm, comprising the following steps: 1) extract ORB feature points and descriptors from key frames and distribute the feature points uniformly with a quadtree; 2) for a new frame, predict the feature point positions based on a uniform acceleration motion model; 3) refine the feature point positions with a multi-layer pyramid sparse optical flow method; 4) perform reverse sparse optical flow tracking and eliminate erroneous matches; 5) apply robust RANSAC to the feature matches from step 4 to remove outliers; 6) solve the 6D pose from the remaining matching points and judge whether the current frame is a key frame. The method only needs to extract ORB feature points and descriptors from the key frames; the time-consuming descriptors need not be computed for tracking between key frames.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to an image feature point tracking and matching method based on the ORB-SLAM algorithm.
Background
In recent years, with the continuous development of computer vision, SLAM technology has come into wide use across fields such as virtual reality, augmented reality, robotics, and unmanned aerial vehicles. Advances in computer hardware have made real-time processing of visual information possible; real-time localization and mapping from visual information greatly increases the amount of information acquired while lowering the price of intelligent robot products.
Visual SLAM mainly comprises a front-end visual odometer, back-end optimization, and a loop closure detection module. The front-end visual odometer performs matching and tracking by extracting feature points and descriptors from images and computes the relative pose between frames; back-end optimization refines the poses by incorporating more historical information; and the loop closure detection module mainly eliminates accumulated error to improve the accuracy of localization and mapping. ORB-SLAM is stable and rich in interfaces, and has gradually come to be regarded as the benchmark visual SLAM system.
However, ORB-SLAM places a relatively high demand on computing resources and is not well suited to low-end processors.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an image feature point tracking and matching method based on the ORB-SLAM algorithm, which increases the computation speed of camera tracking and attitude estimation while improving tracking and matching precision.
The purpose of the invention is realized by the following technical scheme. An image feature point tracking and matching method based on the ORB-SLAM algorithm comprises the following steps:
(1) Divide the video into frame images; for each of the first three frame images, extract ORB feature descriptors and distribute the feature points uniformly with a quadtree;
(2) For the fourth frame image, predict the positions of the ORB feature points from step (1) based on uniform acceleration motion;
(3) From the ORB feature point positions on the third frame image and the positions predicted in step (2), solve for the motion increment of corresponding ORB feature points between the third and fourth frame images with a sparse optical flow method, obtaining the accurate positions of the ORB feature points on the fourth frame image;
(4) Perform reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3) and compute the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between the computed position and the actual position of the corresponding feature point in the third frame image is greater than 1.5 pixels, remove the corresponding ORB feature points from the third and fourth frame images, retaining the matching point pairs whose Euclidean distance is less than 1.5 pixels;
(5) Perform robust RANSAC estimation on the matching point pairs retained in step (4) and remove outliers, leaving correct matching point pairs;
(6) Solve the 6D pose from the remaining matching point pairs and judge whether the fourth frame is a key frame; if it is, extract ORB feature points and descriptors for it, otherwise input the next frame image and repeat steps (2)-(6) until tracking and matching of all image frames in the video is complete.
Further, step (1) comprises the following sub-steps:
(1.1) Perform histogram equalization on the frame image; then set a scale factor s and a number of pyramid layers n for the frame image, and scale the frame image down into n images by the factor s, obtaining scaled images I_k, k = 1, 2, …, n, where I denotes the frame image and k indexes the pyramid layers; take the feature points extracted from the n scaled images together as the FAST feature points of the frame image;
(1.2) Use the moments m_pq to compute the centroid of the image block of radius r around the FAST feature point; the vector formed from the coordinates of the FAST feature point to the centroid represents the direction of the FAST feature point, where the moments of the image block are expressed as:

m_pq = Σ_{x,y∈r} x^p y^q I(x,y), p, q ∈ {0, 1},

wherein I(x, y) represents the gray value of the image at (x, y), p represents the first index taking the value 0 or 1, and q represents the second index taking the value 0 or 1;
(1.3) Construct a quadtree over the frame image for the FAST feature points obtained in step (1.1): for each child node, when the number of FAST feature points in the node equals 1, the node is not divided further; if the number of feature points in the node is greater than 1, the quadtree continues to divide downwards until every node contains only one feature point or the number of nodes produced by division meets the required number of feature points;
(1.4) Extract BRIEF binary-string feature descriptors for the remaining FAST feature points: randomly select a pairs of pixel points (u_i, v_i), i = 1, 2, …, a, in the S×S neighborhood around the key point, rotate the pixel point pairs by the feature point direction, and compare the gray values of each pixel point pair; if I(u_i) > I(v_i), the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256, where S is the side length of the neighborhood, taken as 31 pixels.
Further, the step (2) comprises the following sub-steps:
(2.1) Set the motion of the camera's degrees of freedom as uniformly accelerated motion, obtaining the poses T_1, T_2, T_3, T_4 of the first, second, third and fourth frame images; the velocities corresponding to the poses of the fourth, third and second frame images are denoted v_c, v_{c-1}, v_{c-2} and satisfy v_c·v_{c-1}^{-1} = v_{c-1}·v_{c-2}^{-1} (the velocity increment between consecutive frames is constant);
(2.2) According to the pose T_1 of the first frame image and the velocity v_c of the fourth frame image, estimate the pose of the fourth frame image as T_4, and then predict the position p of an ORB feature point on the fourth frame image from

w·p = K·T_4·P,

wherein P = [X Y Z 1]^T is the 3D point corresponding to the ORB feature point in the third frame image, K is the camera intrinsic matrix, and w is a scale factor;
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
Further, a key frame satisfies: the translational motion distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
The invention has the following beneficial effects: because the image sequence arrives at 30 Hz and the motion between two adjacent frames is small, the invention extracts ORB feature points and descriptors only for key frames; between key frames, feature point positions are predicted with a uniform acceleration motion model and tracked accurately with forward and reverse optical flow.
Drawings
FIG. 1 is a flow chart of the image feature point tracking and matching method based on ORB-SLAM algorithm of the present invention;
FIG. 2 is a flow chart of the present invention for feature extraction based on ORB-SLAM algorithm;
FIG. 3 is a schematic diagram of feature point prediction based on a uniform acceleration motion model according to the present invention;
FIG. 4 is a schematic diagram of feature point tracking for forward optical flow tracking and backward optical flow elimination according to the present invention.
Detailed Description
The principles and aspects of the present invention are further explained below with reference to the drawings and embodiments; the detailed description is provided for the purpose of illustration only and is not intended to limit the scope of the invention.
Fig. 1 is a flowchart of an image feature point tracking and matching method based on the ORB-SLAM algorithm, which specifically includes the following steps:
(1) Divide the video into frame images; for each of the first three frame images, extract ORB feature descriptors and distribute the feature points uniformly with a quadtree, the flow being shown in FIG. 2:
(1.1) Perform histogram equalization on the frame image so that the frame is neither too bright nor too dark and its information is preserved, then extract FAST feature points. Because FAST feature points have neither scale invariance nor rotation invariance, scale invariance is handled by setting a scale factor s and a number of pyramid layers n for the frame image and scaling the frame image down into n images by the factor s, obtaining scaled images I_k, k = 1, 2, …, n, where I denotes the frame image and k indexes the pyramid layers; the feature points extracted from the n scaled images are taken together as the FAST feature points of the frame image.
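As an illustration, a minimal sketch of this sub-step follows, assuming OpenCV; the scale factor s = 1.2, n = 8 levels, and the FAST threshold are illustrative defaults, not values fixed by the patent.

```python
import cv2

def extract_pyramid_fast(frame_gray, s=1.2, n=8, fast_threshold=20):
    """Equalize, build an n-level pyramid scaled by 1/s per level, and pool
    FAST keypoints from every level, mapped back to full-resolution coords."""
    equalized = cv2.equalizeHist(frame_gray)   # neither too bright nor too dark
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold)
    keypoints = []
    for k in range(n):
        scale = 1.0 / (s ** k)
        level = cv2.resize(equalized, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_LINEAR)
        for kp in fast.detect(level, None):
            x, y = kp.pt                       # map back to level-0 resolution
            keypoints.append(cv2.KeyPoint(x / scale, y / scale, kp.size / scale))
    return keypoints
```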
(1.2) To handle rotation invariance, the moments m_pq are used to compute the centroid of the image block of radius r around the FAST feature point; the vector from the FAST feature point coordinates to the centroid represents the direction of the FAST feature point. The moments of the image block are expressed as:

m_pq = Σ_{x,y∈r} x^p y^q I(x,y), p, q ∈ {0, 1},

wherein I(x, y) represents the gray value of the image at (x, y), p represents the first index taking the value 0 or 1, and q represents the second index taking the value 0 or 1.

The centroid of the image block is C = (m_10/m_00, m_01/m_00), and the direction of the FAST feature point is θ = arctan(m_01/m_10).
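A sketch of this intensity-centroid direction, assuming numpy; for brevity a square patch stands in for the circular window of radius r.

```python
import numpy as np

def keypoint_orientation(image, cx, cy, r=15):
    """theta = arctan2(m01, m10), with m_pq = sum over the patch of x^p y^q I(x,y)."""
    patch = image[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]   # coordinates relative to the centre
    m10 = np.sum(xs * patch)                # first-order moment in x
    m01 = np.sum(ys * patch)                # first-order moment in y
    return np.arctan2(m01, m10)             # direction of the FAST feature point
```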
(1.3) A quadtree algorithm is used to distribute the FAST feature points uniformly, so a quadtree is constructed over the frame image for the FAST feature points obtained in step (1.1): for each child node, when the number of FAST feature points in the node equals 1, the node is not divided further; if the number of feature points in the node is greater than 1, the quadtree continues to divide downwards until every node contains only one feature point or the number of nodes produced by division meets the required number of feature points.
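A compact sketch of this distribution scheme; keypoints are assumed to be (x, y, response) tuples with distinct coordinates, and the node budget plays the role of the required feature count.

```python
def quadtree_distribute(points, x0, y0, x1, y1, max_nodes):
    """Split nodes with >1 point until each holds one point or the budget is met,
    then keep the strongest point per node."""
    nodes = [(x0, y0, x1, y1, points)]
    while len(nodes) < max_nodes:
        nodes.sort(key=lambda nd: len(nd[4]), reverse=True)  # busiest node first
        bx0, by0, bx1, by1, pts = nodes[0]
        if len(pts) <= 1:
            break                                  # every node holds <= 1 point
        nodes.pop(0)
        mx, my = (bx0 + bx1) / 2, (by0 + by1) / 2  # split into four quadrants
        for qx0, qy0, qx1, qy1 in ((bx0, by0, mx, my), (mx, by0, bx1, my),
                                   (bx0, my, mx, by1), (mx, my, bx1, by1)):
            sub = [p for p in pts if qx0 <= p[0] < qx1 and qy0 <= p[1] < qy1]
            if sub:
                nodes.append((qx0, qy0, qx1, qy1, sub))
    # retain the highest-response feature point in each surviving node
    return [max(nd[4], key=lambda p: p[2]) for nd in nodes if nd[4]]
```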
(1.4) Extract BRIEF binary-string feature descriptors for the remaining FAST feature points: randomly select a pairs of pixel points (u_i, v_i), i = 1, 2, …, a, within the S×S neighborhood around the key point, rotate the point pairs by the feature point direction θ, and compare the gray values of each pixel point pair; if I(u_i) > I(v_i), the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB feature descriptor of length 256, where S is the side length of the neighborhood, taken as 31 pixels.
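A sketch of the steered-BRIEF comparison, assuming numpy; the random test pattern is regenerated here purely for illustration (production ORB uses a fixed, learned pattern), and keypoints are assumed to lie far enough from the image border that all rotated test points stay inside the image.

```python
import numpy as np

rng = np.random.default_rng(0)
PATTERN = rng.integers(-13, 14, size=(256, 4))  # (u_x, u_y, v_x, v_y) per bit

def steered_brief(image, cx, cy, theta):
    """256-bit descriptor: rotate each test pair (u_i, v_i) by theta, then set
    bit i to 1 iff I(u_i) > I(v_i)."""
    c, s = np.cos(theta), np.sin(theta)
    bits = np.zeros(256, dtype=np.uint8)
    for i, (ux, uy, vx, vy) in enumerate(PATTERN):
        # rotate both test points about the keypoint centre
        rux, ruy = int(round(c * ux - s * uy)), int(round(s * ux + c * uy))
        rvx, rvy = int(round(c * vx - s * vy)), int(round(s * vx + c * vy))
        if image[cy + ruy, cx + rux] > image[cy + rvy, cx + rvx]:
            bits[i] = 1                         # gray-value comparison sets the bit
    return bits
```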
(2) To compute the motion between frame images, the camera is modeled as undergoing uniform acceleration, as shown in FIG. 3, and the positions of the ORB feature points from step (1) are predicted on the fourth frame image based on uniform acceleration motion through the following sub-steps:
(2.1) Set the motion of the camera's degrees of freedom as uniformly accelerated motion, obtaining the poses T_1, T_2, T_3, T_4 of the first, second, third and fourth frame images, with the velocities corresponding to the poses of the fourth, third and second frame images denoted v_c, v_{c-1}, v_{c-2}; the velocity between two frames is the relative transform between their poses, i.e. v_{c-1} = T_3·T_2^{-1} and v_{c-2} = T_2·T_1^{-1}.
Since the camera is modeled as uniformly accelerated motion, the velocity v_c of the fourth frame image can be related to the poses of the first three frame images: the velocity increment between consecutive frames must be consistent, so v_c·v_{c-1}^{-1} = v_{c-1}·v_{c-2}^{-1}, where each side is the increment of the velocity between adjacent frames. Expanding both sides thus gives v_c = v_{c-1}·v_{c-2}^{-1}·v_{c-1}.
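As an illustration, the prediction above can be written with 4×4 homogeneous pose matrices. The following is a minimal sketch, assuming numpy and world-to-camera pose matrices T1, T2, T3 for the first three frames; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def predict_pose(T1, T2, T3):
    """Predict the fourth-frame pose under the uniform acceleration model."""
    v_prev = T2 @ np.linalg.inv(T1)         # velocity between frames 1 and 2
    v_curr = T3 @ np.linalg.inv(T2)         # velocity between frames 2 and 3
    accel = v_curr @ np.linalg.inv(v_prev)  # constant velocity increment
    v_next = accel @ v_curr                 # predicted velocity v_c of frame 4
    return v_next @ T3                      # predicted pose T_4
```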
(2.2) According to the pose T_1 of the first frame image and the velocity v_c of the fourth frame image, the pose of the fourth frame image is estimated as T_4, and a given ORB feature point position p on the fourth frame image is then predicted from

w·p = K·T_4·P,

wherein P = [X Y Z 1]^T is the 3D point corresponding to the ORB feature point in the third frame image, K is the camera intrinsic matrix, and w is a scale factor;
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
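A sketch of the projection w·p = K·T_4·P used in sub-step (2.2), assuming numpy; P is a homogeneous 4-vector and K the 3×3 intrinsic matrix. The helper name is illustrative.

```python
import numpy as np

def predict_feature(P, T4, K):
    """Project homogeneous 3D point P into the image under pose T4."""
    Pc = (T4 @ P)[:3]          # point in the fourth-frame camera coordinates
    uvw = K @ Pc               # homogeneous pixel coordinates, scale w = uvw[2]
    return uvw[:2] / uvw[2]    # predicted pixel position p = (u, v)
```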
(3) From the ORB feature point positions on the third frame image and the positions predicted in step (2), the motion increment of corresponding ORB feature points between the third and fourth frame images is solved with a sparse optical flow method, obtaining the accurate positions of the ORB feature points on the fourth frame image.
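A minimal sketch of this step, assuming OpenCV's pyramidal Lucas-Kanade tracker; the predicted positions from step (2) seed the solver through the OPTFLOW_USE_INITIAL_FLOW flag, so only the residual motion increment has to be solved. The window size and pyramid depth are illustrative.

```python
import cv2
import numpy as np

def track_features(img3, img4, pts3, pts4_pred, levels=3):
    """Refine predicted fourth-frame positions with sparse pyramidal LK flow."""
    p3 = pts3.reshape(-1, 1, 2).astype(np.float32)
    p4 = pts4_pred.reshape(-1, 1, 2).astype(np.float32)
    p4_refined, status, _ = cv2.calcOpticalFlowPyrLK(
        img3, img4, p3, p4,
        winSize=(21, 21), maxLevel=levels,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)  # start from the predicted positions
    return p4_refined.reshape(-1, 2), status.ravel() == 1
```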
(4) Reverse sparse optical flow tracking is performed from the accurate ORB feature point positions on the fourth frame image obtained in step (3), computing the positions of the corresponding ORB feature points back in the third frame image. When the Euclidean distance between the computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, the corresponding ORB feature points in the third and fourth frame images are removed; the matching point pairs whose Euclidean distance between corresponding ORB feature points on the third and fourth frame images is less than 1.5 pixels are retained. FIG. 4 illustrates the forward optical flow tracking of step 3 and the reverse optical flow verification and outlier elimination of step 4.
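A sketch of this forward-backward consistency check, under the same OpenCV assumptions as above; the 1.5-pixel threshold is the one stated in the patent.

```python
import cv2
import numpy as np

def forward_backward_filter(img3, img4, pts3, pts4, max_err=1.5):
    """Track fourth-frame points back to frame three; keep round-trip inliers."""
    p4 = pts4.reshape(-1, 1, 2).astype(np.float32)
    p3_init = pts3.reshape(-1, 1, 2).astype(np.float32)
    back, status, _ = cv2.calcOpticalFlowPyrLK(
        img4, img3, p4, p3_init,
        winSize=(21, 21), maxLevel=3, flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    err = np.linalg.norm(back.reshape(-1, 2) - pts3.reshape(-1, 2), axis=1)
    keep = (status.ravel() == 1) & (err < max_err)  # round-trip error < 1.5 px
    return keep
```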
(5) Robust RANSAC estimation is performed on the matching point pairs retained in step (4), and outliers are removed, leaving correct matching point pairs.
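A sketch of the RANSAC stage, assuming OpenCV; the fundamental matrix is used here as the geometric model, which is one common choice — the patent does not fix the model.

```python
import cv2
import numpy as np

def ransac_filter(pts3, pts4, reproj_thresh=1.0):
    """Keep only matches consistent with a RANSAC-estimated fundamental matrix."""
    _, inlier_mask = cv2.findFundamentalMat(
        pts3.astype(np.float32), pts4.astype(np.float32),
        cv2.FM_RANSAC, reproj_thresh, 0.999)
    if inlier_mask is None:                 # estimation failed; reject everything
        return np.zeros(len(pts3), dtype=bool)
    return inlier_mask.ravel() == 1
```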
(6) The 6D pose is solved from the remaining matching point pairs, and it is judged whether the fourth frame is a key frame. If it is, ORB feature points and descriptors are extracted for it; otherwise the next frame image is input and steps (2)-(6) are repeated until tracking and matching of all image frames in the video is complete. A key frame satisfies: the translational motion distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
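A sketch of the pose solve and keyframe test, assuming OpenCV PnP with RANSAC over the 3D points of the tracked features; the patent's 1 m / 5° / 150-match thresholds are applied, here combined with a logical OR, which is one reading of the criterion.

```python
import cv2
import numpy as np

def solve_pose_and_keyframe(pts3d, pts2d, K, T_prev, n_correct_matches):
    """Solve the 6D pose from 3D-2D matches and apply the keyframe test."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None)
    R, _ = cv2.Rodrigues(rvec)              # rotation vector -> rotation matrix
    T_cur = np.eye(4)
    T_cur[:3, :3], T_cur[:3, 3] = R, tvec.ravel()
    delta = T_cur @ np.linalg.inv(T_prev)   # motion since the previous frame
    translation = np.linalg.norm(delta[:3, 3])           # metres
    cos_angle = np.clip((np.trace(delta[:3, :3]) - 1) / 2, -1.0, 1.0)
    rotation_deg = np.degrees(np.arccos(cos_angle))      # rotation angle
    is_keyframe = (translation > 1.0 or rotation_deg > 5.0
                   or n_correct_matches < 150)
    return T_cur, is_keyframe
```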
Both the proposed method and the traditional method were used for image feature point tracking and matching on 8 public sequences from the TUM dataset, with the algorithms run on a desktop computer with an Intel Core i7-8700 @ 3.20 GHz. The real-time localization and mapping accuracy on each sequence was recorded, and the accuracy and running speed of the two algorithms are compared in Table 1. The proposed tracking and matching method greatly reduces running time while preserving accuracy, achieving a speed-up of roughly 2x.
Table 1: the effect comparison of the tracking matching method of the invention and the traditional method
The above are merely preferred embodiments of the present invention, and the scope of the invention is not limited thereto. Equivalent replacements or modifications made by any person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
Claims (4)
1. An image feature point tracking and matching method based on an ORB-SLAM algorithm, characterized in that the method comprises the following steps:
(1) dividing the video into frame images, extracting ORB feature descriptors for each of the first three frame images, and distributing the feature points uniformly with a quadtree;
(2) predicting, for the fourth frame image, the positions of the ORB feature points from step (1) based on uniform acceleration motion;
(3) according to the ORB feature point positions on the third frame image and the positions predicted in step (2), solving for the motion increment of corresponding ORB feature points between the third and fourth frame images with a sparse optical flow method, and obtaining the accurate positions of the ORB feature points on the fourth frame image;
(4) performing reverse sparse optical flow tracking from the accurate ORB feature point positions on the fourth frame image obtained in step (3), computing the positions of the corresponding ORB feature points in the third frame image; when the Euclidean distance between the computed position and the actual position of the corresponding ORB feature point in the third frame image is greater than 1.5 pixels, removing the corresponding ORB feature points from the third and fourth frame images, and retaining the matching point pairs whose Euclidean distance between corresponding ORB feature points on the third and fourth frame images is less than 1.5 pixels;
(5) performing robust RANSAC estimation on the matching point pairs retained in step (4) and removing outliers, leaving correct matching point pairs;
(6) solving the 6D pose from the remaining matching point pairs and judging whether the fourth frame is a key frame; if the fourth frame is a key frame, extracting ORB feature points and descriptors, otherwise inputting the next frame image and repeating steps (2)-(6) until tracking and matching of all image frames in the video is completed.
2. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (1) comprises the sub-steps of:
(1.1) performing histogram equalization on the frame image, then setting a scale factor s of the frame image and a number of pyramid layers n, and scaling the frame image down into n images by the factor s to obtain scaled images I_k, wherein I represents the frame image, k is the index with values 1, 2, …, n, and the feature points extracted from the n images are taken together as the FAST feature points of the frame image;
(1.2) using the moments m_pq to compute the centroid of the image block of radius r around the FAST feature point, the vector formed from the coordinates of the FAST feature point to the centroid representing the direction of the FAST feature point, wherein the moments of the image block are expressed as:

m_pq = Σ_{x,y∈r} x^p y^q I(x,y), p, q ∈ {0, 1},

wherein I(x, y) represents the gray value of the image at (x, y), p represents the first index taking the value 0 or 1, and q represents the second index taking the value 0 or 1;
(1.3) constructing a quadtree over the frame image for the FAST feature points obtained in step (1.1): for each child node, when the number of FAST feature points in the node equals 1, the node is not divided further; if the number of feature points in the node is greater than 1, the quadtree continues to divide downwards until every node contains only one feature point or the number of nodes produced by division meets the required number of feature points;
(1.4) extracting BRIEF binary-string feature descriptors for the remaining FAST feature points: randomly selecting a pairs of pixel points (u_i, v_i), i = 1, 2, …, a, within the S×S neighborhood around each remaining FAST feature point, rotating the pixel point pairs (u_i, v_i), and comparing the gray values of each pixel point pair; if I(u_i) > I(v_i), the corresponding bit of the binary string is 1, otherwise 0, finally generating an ORB descriptor of length 256, wherein S is the side length of the neighborhood, taken as 31 pixels.
3. The ORB-SLAM algorithm-based image feature point tracking matching method as claimed in claim 1, wherein step (2) comprises the sub-steps of:
(2.1) setting the motion of the camera's degrees of freedom as uniformly accelerated motion, obtaining the poses T_1, T_2, T_3, T_4 of the first, second, third and fourth frame images, the velocities corresponding to the poses of the fourth, third and second frame images being denoted v_c, v_{c-1}, v_{c-2} and satisfying v_c·v_{c-1}^{-1} = v_{c-1}·v_{c-2}^{-1};
(2.2) based on the pose T_1 of the first frame image and the velocity v_c of the fourth frame image, estimating the pose of the fourth frame image as T_4, and then predicting the position p of an ORB feature point on the fourth frame image from

w·p = K·T_4·P,

wherein P = [X Y Z 1]^T is the 3D point corresponding to the ORB feature point in the third frame image, K is the camera intrinsic matrix, and w is a scale factor;
and (2.3) repeating the step (2.2) to predict the positions of all ORB feature points on the fourth frame image.
4. The ORB-SLAM algorithm-based image feature point tracking and matching method of claim 1, wherein the key frame satisfies: the translational motion distance between the current frame image and the previous frame image is greater than 1 m, the rotation angle is greater than 5 degrees, and the number of correct matching point pairs from step (5) is less than 150.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011418605.4A | 2020-12-07 | 2020-12-07 | Image feature point tracking matching method based on ORB-SLAM algorithm
Publications (2)
Publication Number | Publication Date
---|---
CN112489083A | 2021-03-12
CN112489083B | 2022-10-04
Family: ID=74940371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202011418605.4A | Image feature point tracking matching method based on ORB-SLAM algorithm | 2020-12-07 | 2020-12-07
Country Status (1)
Country | Link
---|---
CN | CN112489083B (en)
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113052750A (en) * | 2021-03-31 | 2021-06-29 | 广东工业大学 | Acceleration method and accelerator for task tracking in VSLAM system |
CN113103232B (en) * | 2021-04-12 | 2022-05-20 | 电子科技大学 | Intelligent equipment self-adaptive motion control method based on feature distribution matching |
CN113284232B (en) * | 2021-06-10 | 2023-05-26 | 西北工业大学 | Optical flow tracking method based on quadtree |
CN114372510A (en) * | 2021-12-15 | 2022-04-19 | 北京工业大学 | Interframe matching slam method based on image region segmentation |
CN114280323A (en) * | 2021-12-24 | 2022-04-05 | 凌云光技术股份有限公司 | Measuring equipment, system and method for vector velocity of railway vehicle |
CN115919461B (en) * | 2022-12-12 | 2023-08-08 | 之江实验室 | SLAM-based surgical navigation method |
CN117274620B (en) * | 2023-11-23 | 2024-02-06 | 东华理工大学南昌校区 | Visual SLAM method based on self-adaptive uniform division feature point extraction |
CN117893693B (en) * | 2024-03-15 | 2024-05-28 | 南昌航空大学 | Dense SLAM three-dimensional scene reconstruction method and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751465A (en) * | 2015-03-31 | 2015-07-01 | 中国科学技术大学 | ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint |
CN106780557B (en) * | 2016-12-23 | 2020-06-09 | 南京邮电大学 | Moving object tracking method based on optical flow method and key point features |
CN108615247B (en) * | 2018-04-27 | 2021-09-14 | 深圳市腾讯计算机系统有限公司 | Method, device and equipment for relocating camera attitude tracking process and storage medium |
CN109509211B (en) * | 2018-09-28 | 2021-11-16 | 北京大学 | Feature point extraction and matching method and system in simultaneous positioning and mapping technology |
CN109631855B (en) * | 2019-01-25 | 2020-12-08 | 西安电子科技大学 | ORB-SLAM-based high-precision vehicle positioning method |
- 2020-12-07: Application CN202011418605.4A filed in CN (granted as CN112489083B, active)
Also Published As
Publication Number | Publication Date
---|---
CN112489083A | 2021-03-12
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant