CN111260709B - Ground-assisted visual odometer method for dynamic environment - Google Patents

Ground-assisted visual odometer method for dynamic environment

Info

Publication number
CN111260709B
Authority
CN
China
Prior art keywords
ground
point
image
moment
point pairs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010043799.8A
Other languages
Chinese (zh)
Other versions
CN111260709A (en)
Inventor
何再兴
杨勤峰
赵昕玥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010043799.8A priority Critical patent/CN111260709B/en
Publication of CN111260709A publication Critical patent/CN111260709A/en
Application granted granted Critical
Publication of CN111260709B publication Critical patent/CN111260709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/50 — Depth or shape recovery
    • G06T7/521 — Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image
    • G06T2207/10012 — Stereo images

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a ground-assisted visual odometry method for dynamic environments. A depth camera is fixedly mounted on a ground mobile robot and captures images at adjacent moments; principal planes are extracted from the three-dimensional point cloud corresponding to the depth image, and a ground-likelihood parameter is calculated so that the planes belonging to the ground can be merged into the complete ground; an initial pose is calculated from the matched point pairs of the ground area, their probability distribution is used to set a dynamic threshold, the matched point pairs of the non-ground area are classified accordingly, and all static point pairs are combined to optimize the pose. The method can quickly detect the ground from the point cloud corresponding to the depth image even when the ground area is small, distinguishes outliers without relying on a fixed threshold, removes the interference of dynamic corner points, and estimates the robot's motion trajectory more accurately.

Description

Ground-assisted visual odometer method for dynamic environment
Technical Field
The invention belongs to the field of visual odometry, and in particular relates to a ground-assisted visual odometry method for dynamic environments.
Background
The fields of robotics and autonomous driving involve technologies such as environment perception, state estimation, and planning and control. Owing to the low cost and small size of vision sensors, visual state estimation has become a hot topic in robotics.
Visual odometry (VO) is an important part of visual state estimation; it can be divided into feature-point methods and direct methods, and its function is to estimate the relative pose between two adjacent image frames. Existing methods are all based on the assumption of a static environment; however, when a dynamic object appears in the scene, the associated image features or pixels come from both the static environment and the dynamic object. At present, no mature and unified visual odometry method can be applied in a dynamic environment. Meanwhile, the ground is prior information available in the working environment of a wheeled mobile robot. However, current ground detection methods based on monocular vision or depth sensors all require that ground points account for a sufficient fraction of the scene points; because of occlusion by dynamic objects or static obstacles in the scene, the ground is mostly discontinuous and fragmented in the image, and the number of ground points is usually insufficient.
Disclosure of Invention
To address the poor accuracy of existing algorithms in dynamic environments, the invention provides a ground-assisted visual odometry method oriented to dynamic environments.
The technical scheme adopted by the invention comprises the following steps:
step one, images of adjacent moments shot by a depth camera are obtained
A depth camera is fixedly installed on a ground mobile robot with its optical axis pointing straight ahead of the robot; grayscale images and depth images of the scene in front of the robot at the previous moment and the current moment are collected through the depth camera;
step two, detecting the complete ground
Extracting all principal planes from the depth images at the previous moment and the current moment, and merging the principal planes belonging to the ground into the complete ground as the ground area; the remainder of the image is the non-ground area.
Step three, estimating the initial pose
Corner points are extracted from the ground area of the grayscale image at the previous moment and tracked with the LK optical flow method to obtain associated point pairs, which are taken as ground point pairs and placed in the static corner set; the perspective-n-point (PnP) method is then used to compute the initial relative pose of the current-moment camera coordinate system in the previous-moment camera coordinate system. The camera coordinate system is a three-dimensional coordinate system with the camera optical center as origin, the z-axis pointing to the front of the camera, the x-axis pointing to the right, and the y-axis pointing downward.
In step three, the coordinates of the same corner point in the grayscale images at the previous moment and the current moment form an associated point pair.
Step four, screening angular points
A dynamic threshold is set by fitting the reprojection errors of all ground point pairs; corner points are then extracted from the non-ground area of the grayscale image at the previous moment and tracked with the LK optical flow method to obtain associated point pairs, which are taken as non-ground point pairs and provisionally placed in the static corner set; the corner points at the current moment are reprojected onto the image at the previous moment using the initial relative pose obtained in step three to obtain reprojection errors, which are then screened against the dynamic threshold;
fifthly, estimating and optimizing the pose
The ground point pairs and the static non-ground point pairs are combined, and the perspective-n-point (PnP) method is used to compute the optimized final relative pose of the current-moment camera coordinate system in the previous-moment camera coordinate system;
Steps two to five are repeated for each pair of adjacent frames to obtain the final relative pose at each moment; with the interference of dynamic corner points removed, the concatenation of the final relative poses at all moments is taken as the result, realizing visual odometry in a dynamic environment.
In the second step, the detection of the complete ground is processed in the following way:
1) extracting a main plane and calculating a ground likelihood parameter:
A point cloud is generated from the depth image, and the candidate principal planes in the point cloud are extracted with the agglomerative hierarchical clustering plane extraction (PEAC) algorithm to obtain their corresponding regions in the grayscale image; the ground-likelihood parameter err is then calculated from the centroid and normal equation of each principal plane according to the following formula:
err = err(θ_a, c_u, c_v, cols, rows, α)    [the expression for err is reproduced only as an image in the original publication]
where θ_a is the acute angle between the normal of the principal plane and the ideal ground normal; (c_u, c_v)^T are the two-dimensional pixel coordinates on the image of the centroid of the point cloud corresponding to the principal plane; cols and rows are the width and height of the grayscale image, respectively; and α is a weight;
2) merging the main planes:
The principal plane with the smallest ground-likelihood parameter is selected as the seed plane; each point in the other principal planes is traversed and its perpendicular distance to the seed plane is calculated; if the perpendicular distance of every point in a principal plane is smaller than a preset distance threshold, that principal plane is merged into the seed plane; the complete plane obtained after merging is taken as the ground area, expressed as:
d_p = Π_s · p < T_dis
where p is the point currently being examined, Π_s is the normal of the seed plane, T_dis is the distance threshold, and d_p is the perpendicular distance from point p to the seed plane.
In the fourth step, the angular point screening specifically comprises:
1) setting a dynamic threshold value:
The reprojection errors of all ground point pairs are calculated and fitted with a normal distribution, and 2 times the standard deviation is taken as the dynamic threshold T_adp, i.e. T_adp = 2σ, where σ denotes the standard deviation obtained after fitting the normal distribution;
2) Screening: corner points are extracted from the non-ground area of the grayscale image at the previous moment and tracked with the LK optical flow method to obtain associated point pairs, which are taken as non-ground point pairs and provisionally placed in the static corner set;
then, using the initial relative pose obtained in step three, the corner points at the current moment are reprojected onto the image at the previous moment to obtain reprojection errors, which are screened against the dynamic threshold: if the reprojection error e(p) obtained by reprojecting a current-moment corner point p onto the previous-moment image is less than or equal to the dynamic threshold T_adp, the corner point p is classified into the static corner set; if e(p) is larger than T_adp, the corner point p is not attributed to the static corner set.
In step three, the coordinates of the same corner point in the grayscale images at the previous moment and the current moment form an associated point pair.
In step four, the reprojection errors of all ground point pairs are calculated as follows: the corner point at the current moment is reprojected to a projection point on the grayscale image at the previous moment, and the pixel distance between the projection point and the corresponding corner point is used as the reprojection error.
The invention has the beneficial effects that:
1. The complete-ground detection method adopted by the invention can quickly detect the ground from the point cloud corresponding to the depth image; compared with other methods, it does not require the ground area to occupy most of the image, and detection succeeds even when the number of pixels in the ground area is small.
2. The dynamic corner detection method adopted by the invention sets the dynamic threshold according to the probability distribution of the reprojection errors of the ground-area corner points and then classifies the corner points of the non-ground area; compared with setting a fixed threshold or random-sampling outlier rejection, it is more robust to different sensors and motion noise.
Drawings
FIG. 1 is a gray scale image and a depth image of two adjacent frames;
FIG. 2 is a view of all major planes detected;
FIG. 3 is the complete ground after merging;
FIG. 4 is a representation of the reprojection errors for all corner points;
FIG. 5 is a graph of a frequency histogram and normal distribution fit of reprojection errors;
fig. 6 is a diagram of the result of distinguishing between dynamic corner points and static corner points.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the accompanying drawings.
The specific embodiment and the implementation process of the invention are as follows:
Step one: obtain images of adjacent moments captured by a depth camera. A depth camera is fixedly installed on a ground mobile robot with its optical axis pointing straight ahead of the robot, and a grayscale image and a depth image are acquired at the previous moment and at the current moment, as shown in FIG. 1;
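To make the data flow concrete, the following is a minimal Python/OpenCV sketch of loading one aligned grayscale/depth pair and back-projecting the depth image into the 3D point cloud used by the plane detection of step two. The file names, intrinsics (fx, fy, cx, cy) and depth scale are illustrative assumptions, not values specified by the patent.

```python
import cv2
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics
depth_scale = 1000.0                          # assumed: depth stored in millimetres

gray_prev = cv2.imread("frame_prev_gray.png", cv2.IMREAD_GRAYSCALE)
depth_prev = cv2.imread("frame_prev_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

def depth_to_cloud(depth, fx, fy, cx, cy, scale):
    """Back-project a depth image (H x W) into an (H*W) x 3 point cloud in metres."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth / scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

cloud_prev = depth_to_cloud(depth_prev, fx, fy, cx, cy, depth_scale)
```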
and step two, detecting the complete ground.
1) Extracting a main plane and calculating a ground likelihood parameter:
A point cloud is generated from the depth image, and the candidate principal planes in the point cloud are extracted with the agglomerative hierarchical clustering plane extraction (PEAC) algorithm to obtain their corresponding regions in the grayscale image, as shown in FIG. 2; the ground-likelihood parameter err is then calculated from the centroid and normal equation of each principal plane according to the following formula:
err = err(θ_a, c_u, c_v, cols, rows, α)    [the expression for err is reproduced only as an image in the original publication]
where θ_a is the acute angle between the normal of the principal plane and the ideal ground normal (the ideal ground normal being a unit vector perpendicular to the ground); (c_u, c_v)^T are the two-dimensional pixel coordinates on the image of the centroid of the point cloud corresponding to the principal plane; cols and rows are the width and height of the grayscale image, respectively; and α is a weight;
2) merging the main planes:
The principal plane with the smallest ground-likelihood parameter is selected as the seed plane; each point in the other principal planes is traversed and its perpendicular distance to the seed plane is calculated; if the perpendicular distance of every point in a principal plane is smaller than a preset distance threshold, that principal plane is merged into the seed plane; the complete plane obtained after merging is taken as the ground area, expressed as:
d_p = Π_s · p < T_dis
where p is the point currently being examined, Π_s is the normal of the seed plane, T_dis is the distance threshold, and d_p is the perpendicular distance from point p to the seed plane.
The complete ground after merging is shown in FIG. 3.
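The following sketch illustrates steps 2.1 and 2.2 under stated assumptions. Because the err formula appears only as an image in the publication, the scoring below (the angle θ_a plus an α-weighted centroid term that favours planes whose centroid lies low and horizontally centred in the image) is an assumed stand-in, not the patent's exact expression; the seed-plane selection and the per-point test d_p < T_dis follow the text. Plane normals, offsets and per-plane point sets are assumed to come from a PEAC-style detector.

```python
import numpy as np

def ground_likelihood(plane, cols, rows, alpha=0.5):
    """plane: dict with a unit 'normal' (camera frame) and centroid pixel coords 'cu', 'cv'."""
    ideal_normal = np.array([0.0, -1.0, 0.0])        # camera y-axis points down, so the ground normal points up
    cos_a = abs(np.dot(plane["normal"], ideal_normal))
    theta_a = np.arccos(np.clip(cos_a, 0.0, 1.0))    # acute angle to the ideal ground normal
    # assumed: favour planes whose centroid is low and horizontally centred in the image
    centroid_term = abs(plane["cv"] - rows) / rows + abs(plane["cu"] - cols / 2.0) / cols
    return theta_a + alpha * centroid_term           # assumed combination; the patent's err is given only as an image

def merge_ground(planes, plane_points, cols, rows, t_dis=0.03):
    """Pick the plane with the smallest err as seed, then absorb every plane whose points
    all satisfy d_p = |Pi_s . p + d_s| < T_dis with respect to the seed plane."""
    errs = [ground_likelihood(p, cols, rows) for p in planes]
    seed = int(np.argmin(errs))
    n_s, d_s = planes[seed]["normal"], planes[seed]["d"]
    ground_idx = [seed]
    for i, pts in enumerate(plane_points):           # plane_points[i]: N x 3 points of plane i
        if i == seed:
            continue
        d_p = np.abs(pts @ n_s + d_s)                # perpendicular distances to the seed plane
        if np.all(d_p < t_dis):
            ground_idx.append(i)
    return ground_idx                                # indices of the planes forming the complete ground
```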
Step three: estimate the initial pose. Corner points are extracted from the ground area of the grayscale image at the previous moment and tracked with the LK optical flow method to obtain associated point pairs, which are taken as ground point pairs and placed in the static corner set; the coordinates of the same corner point in the grayscale images at the previous moment and the current moment form an associated point pair. The perspective-n-point (PnP) method is then used to compute the initial relative pose of the current-moment camera coordinate system in the previous-moment camera coordinate system. The camera coordinate system is a three-dimensional coordinate system with the camera optical center as origin, the z-axis pointing to the front of the camera, the x-axis pointing to the right, and the y-axis pointing downward.
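A hedged Python/OpenCV sketch of step three, assuming the camera matrix K, the previous-frame depth image and a boolean ground mask are available; the parameter values (corner count, quality level, depth scale) are illustrative. Note that cv2.solvePnPRansac returns the transform that maps previous-frame 3D points into the current camera frame; the pose of the current-moment coordinate system in the previous-moment coordinate system described above is its inverse.

```python
import cv2
import numpy as np

def initial_pose(gray_prev, gray_cur, depth_prev, ground_mask, K, depth_scale=1000.0):
    # Shi-Tomasi corners restricted to the ground area of the previous grayscale image
    pts_prev = cv2.goodFeaturesToTrack(gray_prev, maxCorners=300, qualityLevel=0.01,
                                       minDistance=7, mask=ground_mask.astype(np.uint8))
    # LK optical flow tracking into the current image -> associated point pairs
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_cur, pts_prev, None)
    good = status.ravel() == 1
    pts_prev, pts_cur = pts_prev[good].reshape(-1, 2), pts_cur[good].reshape(-1, 2)

    # Lift the previous-frame corners to 3D using the previous depth image
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    obj, img = [], []
    for (u, v), q in zip(pts_prev, pts_cur):
        z = depth_prev[int(round(v)), int(round(u))] / depth_scale
        if z <= 0:
            continue
        obj.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        img.append(q)
    obj = np.asarray(obj, dtype=np.float32)
    img = np.asarray(img, dtype=np.float32)

    # PnP on the ground point pairs gives the initial relative pose (rvec, tvec)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return rvec, tvec, obj, img
```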
Step four: screen the corner points. A dynamic threshold is set by fitting the reprojection errors of all ground point pairs; corner points are then extracted from the non-ground area of the grayscale image at the previous moment and tracked with the LK optical flow method to obtain associated point pairs, which are taken as non-ground point pairs and provisionally placed in the static corner set; the corner points at the current moment are then reprojected onto the image at the previous moment using the initial relative pose obtained in step three (in FIG. 4 the white line segments show the magnitude and direction of the reprojection errors), and after the reprojection errors are obtained, screening is performed according to the dynamic threshold;
1) setting a dynamic threshold value:
The reprojection errors of all ground point pairs are calculated: the corner point at the current moment is reprojected to a projection point on the grayscale image at the previous moment, and the pixel distance between the projection point and the corresponding corner point is taken as the reprojection error. The reprojection errors of all ground point pairs are fitted with a normal distribution (the fitted curve is shown in FIG. 5), and 2 times the standard deviation is taken as the dynamic threshold T_adp, i.e. T_adp = 2σ, where σ denotes the standard deviation obtained after fitting the normal distribution. In this implementation, the confidence interval for correctly screening the static corner points outside the ground point pairs is 95.4%.
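A short sketch of step 4.1, reusing the (obj, img) ground pairs and the (rvec, tvec) initial pose from the sketch above; distortion is assumed to be zero. For simplicity the previous-frame 3D points are projected into the current image, which yields the same pixel-consistency measure as the text's formulation of reprojecting current-moment corners onto the previous image with the inverse pose.

```python
import cv2
import numpy as np

def reprojection_errors(obj_pts, img_pts, rvec, tvec, K):
    """Pixel distance between each tracked corner and the reprojection of its 3D point."""
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, None)
    return np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1)

ground_errs = reprojection_errors(obj, img, rvec, tvec, K)
sigma = ground_errs.std()   # normal-distribution fit of the ground-pair error sample
t_adp = 2.0 * sigma         # dynamic threshold T_adp = 2 * sigma (~95.4% interval)
```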
2) Screening: corner points are extracted from the non-ground area of the grayscale image at the previous moment and tracked with the LK optical flow method to obtain associated point pairs, which are taken as non-ground point pairs and provisionally placed in the static corner set.
Then, using the initial relative pose obtained in step three, the corner points at the current moment are reprojected onto the image at the previous moment, the projected pixel distance is taken as the reprojection error, and screening is performed according to the dynamic threshold:
if the reprojection error e(p) obtained by reprojecting a current-moment corner point p onto the previous-moment image is less than or equal to the dynamic threshold T_adp, the corner point p is kept as a non-ground point pair;
if the reprojection error e(p) obtained by reprojecting the current-moment corner point p onto the previous-moment image is greater than the dynamic threshold T_adp, the corner point p is not kept as a non-ground point pair.
The ground point pairs and the retained non-ground point pairs are the static corner points, and the rest are dynamic corner points.
As shown in fig. 6, the static corner points of the non-ground area are represented by green circles, and the red circles represent dynamic corner points.
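A sketch of the screening in step 4.2, assuming the non-ground pairs (obj_ng, img_ng) were built exactly like the ground pairs in the earlier sketch and reusing the reprojection_errors helper; corners whose error exceeds t_adp are treated as dynamic and discarded.

```python
import numpy as np

def split_static_dynamic(obj_ng, img_ng, rvec, tvec, K, t_adp):
    errs = reprojection_errors(obj_ng, img_ng, rvec, tvec, K)   # helper from the sketch above
    static = errs <= t_adp
    # Static non-ground pairs are kept for pose optimization; dynamic ones are returned separately
    return (obj_ng[static], img_ng[static]), (obj_ng[~static], img_ng[~static])
```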
Step five: estimate and optimize the pose. The ground point pairs and the static non-ground point pairs are combined, and the perspective-n-point (PnP) method is used to compute the optimized final relative pose of the current-moment camera coordinate system in the previous-moment camera coordinate system.
Step six: steps two to five are repeated for each pair of adjacent frames to obtain the final relative pose at each moment; with the interference of dynamic corner points removed, the concatenation of the final relative poses at all moments is taken as the result, realizing visual odometry in a dynamic environment.
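Finally, a sketch of step six under the assumption that step five yields one (rvec, tvec) pair per adjacent-frame pair, collected here in a hypothetical list relative_poses: each relative pose is converted to a 4x4 transform and the transforms are chained to recover the trajectory in the first-frame coordinate system.

```python
import cv2
import numpy as np

def to_matrix(rvec, tvec):
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                              # maps previous-frame points into the current frame

trajectory = [np.eye(4)]                  # pose of the first frame
for rvec, tvec in relative_poses:         # hypothetical list of per-pair results from step five
    T_cur_prev = to_matrix(rvec, tvec)
    # world pose of the current frame = previous world pose composed with the inverted step
    trajectory.append(trajectory[-1] @ np.linalg.inv(T_cur_prev))
```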
In this way, the ground can be quickly detected from the point cloud corresponding to the depth image even when the ground area is small, outliers are distinguished without relying on a fixed threshold, the interference of dynamic corner points is removed, and the motion trajectory of the robot is estimated more accurately.

Claims (5)

1. A dynamic environment-oriented ground-assisted visual odometry method, comprising the steps of:
step one, images of adjacent moments shot by a depth camera are obtained
Fixedly installing a depth camera on a ground mobile robot, enabling an optical axis of the depth camera to point to the right front of the robot, and acquiring a gray image and a depth image at the previous moment and the current moment through the depth camera;
step two, detecting the complete ground
Extracting all main planes from the depth images at the previous moment and the current moment, and combining the main planes belonging to the ground part into a complete ground as a ground area;
step three, estimating the initial pose
Extracting angular points from the ground area of the gray image at the previous moment, tracking the angular points by adopting an LK optical flow method to obtain associated point pairs which are used as ground point pairs and are stored in a static angular point set, and then calculating by using an n-point perspective method to obtain an initial relative pose of a current moment coordinate system in a previous moment coordinate system;
step four, screening angular points
Fitting according to the reprojection errors of all ground point pairs to set a dynamic threshold, then extracting angular points from the non-ground area of the gray-scale image at the previous moment, tracking the angular points by adopting an LK optical flow method to obtain associated point pairs serving as non-ground point pairs and being stored in a static angular point set, then reprojecting the angular points at the current moment to the image at the previous moment by adopting the initial relative pose obtained in the third step to obtain a reprojection error, and then screening according to the dynamic threshold;
fifthly, estimating and optimizing the pose
Combining the ground point pairs and the static point pairs of the non-ground point pairs, and calculating by using an n-point perspective method to obtain the final relative pose of the optimized current-time camera coordinate system in the previous-time camera coordinate system;
and step six, repeating the step two to the step five for two adjacent frames of images, calculating to obtain the final relative pose at each moment, and connecting the final relative poses at each moment as a result to realize the visual odometer in the dynamic environment.
2. A dynamic environment-oriented ground-assisted visual odometry method according to claim 1, characterized in that: in the second step, the detection of the complete ground is processed in the following way:
1) extracting a main plane and calculating a ground likelihood parameter:
generating a point cloud according to a depth image, extracting a plurality of principal planes possibly existing in the point cloud by adopting an agglomerative hierarchical clustering plane detection algorithm to obtain corresponding areas of the principal planes in a grayscale image, and then calculating a ground likelihood parameter err of the following formula according to the centroid and normal equation of each principal plane, expressed as follows:
err = err(θ_a, c_u, c_v, cols, rows, α)    [the expression for err is reproduced only as an image in the original publication]
wherein θ_a represents the acute angle between the normal of the principal plane and the ideal ground normal; (c_u, c_v)^T represents the two-dimensional pixel coordinates on the image of the centroid of the point cloud corresponding to the principal plane; cols and rows represent the width and height of the grayscale image, respectively; α is a weight;
2) merging the main planes:
selecting a main plane with the minimum ground likelihood parameter as a seed plane, traversing each point in the other main planes, calculating the vertical distance from each point to the seed plane, if the vertical distance of each point is smaller than a preset distance threshold, merging the main plane into the seed plane until a plane completely obtained by merging is used as a ground area, and expressing as follows:
d_p = Π_s · p < T_dis
wherein p represents the point currently being examined, Π_s denotes the normal of the seed plane, T_dis denotes the distance threshold, and d_p denotes the perpendicular distance from point p to the seed plane.
3. A dynamic environment-oriented ground-assisted visual odometry method according to claim 1, characterized in that: in the fourth step, the angular point screening specifically comprises:
1) setting a dynamic threshold value:
calculating the reprojection errors of all the ground point pairs, fitting the reprojection errors of all the ground point pairs with a normal distribution, and taking 2 times the standard deviation as the dynamic threshold T_adp, i.e. T_adp = 2σ, wherein σ denotes the standard deviation obtained after fitting the normal distribution;
2) screening: extracting angular points from the non-ground area of the gray image at the previous moment, and tracking the angular points by adopting an LK optical flow method to obtain associated point pairs as non-ground point pairs;
and then, re-projecting the angular point at the next moment onto the image at the previous moment to obtain a re-projection error by adopting the initial relative pose obtained in the third step, and then screening according to a dynamic threshold value:
if the reprojection error e(p), obtained by reprojecting the corner point p at the current moment onto the image at the previous moment, is less than or equal to the dynamic threshold T_adp, the corner point p is classified into the static corner set;
if the reprojection error e(p), obtained by reprojecting the corner point p at the current moment onto the image at the previous moment, is larger than the dynamic threshold T_adp, the corner point p is not attributed to the static corner set.
4. A dynamic environment-oriented ground-assisted visual odometry method according to claim 1, characterized in that: in the third step, the coordinates of the same corner point in the gray scale image at the previous moment and the current moment form a pair of associated point pairs.
5. A dynamic environment-oriented ground-assisted visual odometry method according to claim 1, characterized in that: in the fourth step, re-projection errors of all ground point pairs are calculated, specifically, the angular point at the next moment is re-projected to the projection point on the gray-scale image at the previous moment, and the pixel distance between the projected projection point and the angular point is used as the re-projection error.
CN202010043799.8A 2020-01-15 2020-01-15 Ground-assisted visual odometer method for dynamic environment Active CN111260709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010043799.8A CN111260709B (en) 2020-01-15 2020-01-15 Ground-assisted visual odometer method for dynamic environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010043799.8A CN111260709B (en) 2020-01-15 2020-01-15 Ground-assisted visual odometer method for dynamic environment

Publications (2)

Publication Number Publication Date
CN111260709A CN111260709A (en) 2020-06-09
CN111260709B true CN111260709B (en) 2022-04-19

Family

ID=70948940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010043799.8A Active CN111260709B (en) 2020-01-15 2020-01-15 Ground-assisted visual odometer method for dynamic environment

Country Status (1)

Country Link
CN (1) CN111260709B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420590B (en) * 2021-05-13 2022-12-06 北京航空航天大学 Robot positioning method, device, equipment and medium in weak texture environment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018133119A1 (en) * 2017-01-23 2018-07-26 中国科学院自动化研究所 Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN108460779A (en) * 2018-02-12 2018-08-28 浙江大学 A kind of mobile robot image vision localization method under dynamic environment
CN108776989A (en) * 2018-06-08 2018-11-09 北京航空航天大学 Low texture plane scene reconstruction method based on sparse SLAM frames
CN108955718A (en) * 2018-04-10 2018-12-07 中国科学院深圳先进技术研究院 A kind of visual odometry and its localization method, robot and storage medium
CN110058602A (en) * 2019-03-27 2019-07-26 天津大学 Multi-rotor unmanned aerial vehicle autonomic positioning method based on deep vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10422648B2 (en) * 2017-10-17 2019-09-24 AI Incorporated Methods for finding the perimeter of a place using observed coordinates

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018133119A1 (en) * 2017-01-23 2018-07-26 中国科学院自动化研究所 Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN108460779A (en) * 2018-02-12 2018-08-28 浙江大学 A kind of mobile robot image vision localization method under dynamic environment
CN108955718A (en) * 2018-04-10 2018-12-07 中国科学院深圳先进技术研究院 A kind of visual odometry and its localization method, robot and storage medium
CN108776989A (en) * 2018-06-08 2018-11-09 北京航空航天大学 Low texture plane scene reconstruction method based on sparse SLAM frames
CN110058602A (en) * 2019-03-27 2019-07-26 天津大学 Multi-rotor unmanned aerial vehicle autonomic positioning method based on deep vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Fast Plane Extraction in Organized Point Clouds Using Agglomerative Hierarchical Clustering";Chen Feng.et al;《ResearchGate》;20140630;全文 *
"Real-time Depth Enhanced Monocular Odometry";Ji Zhang.et al;《IEEE》;20141106;全文 *
"基于双向重投影的双目视觉里程计";张涛等;《中国惯性技术学报》;20181231;第26卷(第6期);全文 *

Also Published As

Publication number Publication date
CN111260709A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN110349250B (en) RGBD camera-based three-dimensional reconstruction method for indoor dynamic scene
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
Veľas et al. Calibration of rgb camera with velodyne lidar
US9465997B2 (en) System and method for detection and tracking of moving objects
CN112233177B (en) Unmanned aerial vehicle pose estimation method and system
EP3070430B1 (en) Moving body position estimation device and moving body position estimation method
JP2018522348A (en) Method and system for estimating the three-dimensional posture of a sensor
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
US20210103299A1 (en) Obstacle avoidance method and device and movable platform
US11727637B2 (en) Method for generating 3D skeleton using joint-based calibration acquired from multi-view camera
JP2009288885A (en) Lane detection device, lane detection method and lane detection program
CN111915651B (en) Visual pose real-time estimation method based on digital image map and feature point tracking
Luo et al. Multisensor integrated stair recognition and parameters measurement system for dynamic stair climbing robots
Wang et al. An improved ArUco marker for monocular vision ranging
CN112509054A (en) Dynamic calibration method for external parameters of camera
CN117523461B (en) Moving target tracking and positioning method based on airborne monocular camera
CN111260709B (en) Ground-assisted visual odometer method for dynamic environment
Cavestany et al. Improved 3D sparse maps for high-performance SFM with low-cost omnidirectional robots
CN111260725B (en) Dynamic environment-oriented wheel speed meter-assisted visual odometer method
Fucen et al. The object recognition and adaptive threshold selection in the vision system for landing an unmanned aerial vehicle
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
Cigla et al. Image-based visual perception and representation for collision avoidance
Jiang et al. Icp stereo visual odometry for wheeled vehicles based on a 1dof motion prior
CN115797405A (en) Multi-lens self-adaptive tracking method based on vehicle wheel base

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant