CN111260725B - Dynamic environment-oriented wheel speed meter-assisted visual odometer method - Google Patents

Dynamic environment-oriented wheel speed meter-assisted visual odometer method

Info

Publication number
CN111260725B
CN111260725B (application CN202010043797.9A)
Authority
CN
China
Prior art keywords
moment
wheel speed
pose
point
speed meter
Prior art date
Legal status
Active
Application number
CN202010043797.9A
Other languages
Chinese (zh)
Other versions
CN111260725A (en)
Inventor
何再兴
杨勤峰
赵昕玥
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010043797.9A
Publication of CN111260725A
Application granted
Publication of CN111260725B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a wheel speed meter-assisted visual odometry method for dynamic environments. A depth camera fixedly mounted on a wheeled robot captures images at adjacent moments. The pose at the current moment is predicted from the wheel speed meter readings at the previous and current moments; the map points of the feature points from the previous moment are projected onto the gray image at the current moment according to the predicted pose; tracking is performed near each reprojected pixel, and the reprojection error is computed and compared with a pixel distance threshold to obtain the valid feature points; the pose is then re-optimized over all valid feature points. Using the wheel speed meter commonly fitted to wheeled robots, the invention detects invalid feature points in the environment, retains the valid static feature points, and localizes the robot more robustly.

Description

Dynamic environment-oriented wheel speed meter-assisted visual odometer method
Technical Field
The invention belongs to the field of visual odometry, and particularly relates to a wheel speed meter-assisted visual odometry method for dynamic environments.
Background
Existing visual odometry methods assume that the feature points in the image are static, so when a moving object appears in the scene the visual odometry result is disturbed. Meanwhile, wheeled robots are usually fitted with a wheel speed meter, and using it to help the visual odometry achieve more robust localization in dynamic environments is of great significance.
Disclosure of Invention
To address the poor robustness of existing algorithms in dynamic environments, the invention provides a wheel speed meter-assisted visual odometry method for dynamic environments.
The technical scheme adopted by the invention comprises the following steps:
step one, images of adjacent moments shot by a depth camera are obtained
Fixedly mounting a depth camera on a wheeled robot, mounting wheel speed meters on wheels of the wheeled robot, acquiring gray images and depth images at the previous moment and the current moment, and reading readings of the wheel speed meters at the current moment on the wheeled robot;
step two, predicting the pose of the camera
Predicting the pose of the wheeled robot at the current moment under the uniform motion assumption from the wheel speed meter readings at the previous moment and the current moment, wherein the pose comprises the position and the orientation;
step three, removing invalid feature points
Extracting feature points from the gray image at the previous moment and taking their corresponding three-dimensional space coordinates as the map points of the previous moment; projecting the map points of the previous moment onto the gray image at the current moment according to the predicted pose of the wheeled robot to serve as projection points; tracking the feature points extracted from the gray image at the previous moment within a fixed-size region around the projection points by an LK optical flow method to obtain tracking points, and calculating the reprojection error between each projection point and its tracking point; performing threshold segmentation on the reprojection error to obtain the valid feature points;
step four, pose optimization
Performing pose optimization over all the valid feature points, and obtaining the orientation and position of the camera at the current moment through a PnP algorithm;
and fifthly, repeating the second step to the fourth step for every two adjacent frames of images, eliminating the interference of dynamic feature points before the pose is computed, so as to obtain the pose at each moment; concatenating the poses of all moments as the result realizes visual odometry in the dynamic environment.
In the second step, the pose at the current moment is predicted with the following uniform-motion model:
$$\bar{\omega} = \frac{\omega_i + \omega_j}{2}, \qquad \bar{v} = \frac{v_i + v_j}{2},$$

$$R_j = R_i \exp\!\big((\bar{\omega}\,\Delta t)^{\wedge}\big), \qquad p_j = p_i + R_i\,\bar{v}\,\Delta t,$$

where $\omega_i$ and $\omega_j$ denote the angular-velocity measurements of the wheel speed meter at the previous and current moments, $v_i$ and $v_j$ the wheel speed measurements at the previous and current moments, $R_i$ and $p_i$ the camera orientation and position at the previous moment, $R_j$ and $p_j$ the camera orientation and position at the current moment, and $\Delta t$ the time interval between the two moments.
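As a quick numeric illustration of this model (values chosen purely for illustration, not taken from the patent): with $\omega_i = \omega_j = 0.1$ rad/s, $v_i = v_j = 0.5$ m/s and $\Delta t = 0.1$ s, the predicted update is a rotation of $\bar{\omega}\,\Delta t = 0.01$ rad about the measured axis and a displacement of $\bar{v}\,\Delta t = 0.05$ m along the previous heading.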
In the third step, the map points from the previous moment are projected onto the gray image at the current moment for tracking, and threshold segmentation is performed on the reprojection error, as follows:
1) Project the map point from the previous moment onto the gray image at the current moment:

$$s\,x'_j = K\,R_j^{\top}\,(P_i - p_j),$$

where $P_i$ is the 3D position of the map point corresponding to the feature point $x_i$ at the previous moment, $x'_j$ is the projected pixel coordinate (in homogeneous form), $K$ is the camera intrinsic parameter matrix, and $s$ is the depth of the map point in the $j$-th frame. The LK optical flow method is then applied near $x'_j$ on the gray image at the current moment to track $x_i$, yielding the tracking point $x_j$.
2) Calculate the pixel distance $d$ between the projection point $x'_j$ and the tracking point $x_j$ as the reprojection error, and compare it with a preset pixel distance threshold $T_{\mathrm{set}}$: if $d < T_{\mathrm{set}}$, the tracking point is taken as a valid feature point; if $d \ge T_{\mathrm{set}}$, the tracking point is not taken as a valid feature point.
The invention has the beneficial effects that:
1. Compared with other methods, this method requires no prior knowledge of the moving objects and achieves robust visual odometry using only the wheel speed meter commonly found on wheeled robots.
Drawings
FIG. 1 is a system flow diagram;
fig. 2 is a schematic diagram of detection of static valid feature points.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the embodiment of the present invention and the implementation process thereof are as follows:
step one, images of adjacent moments shot by a depth camera are obtained. The depth camera is fixedly installed on the wheeled robot, a wheel speed meter is installed on a wheel of the wheeled robot, the gray level image and the depth image of the previous moment and the current moment are collected, and the reading of the wheel speed meter of the current moment on the wheeled robot is read.
And step two, predicting the pose of the camera. The pose of the wheeled robot at the current moment is predicted under the uniform motion assumption from the wheel speed meter readings at the previous and current moments, the pose comprising the position and the orientation. The uniform-motion model is as follows, giving the predicted pose at the current moment:
$$\bar{\omega} = \frac{\omega_i + \omega_j}{2}, \qquad \bar{v} = \frac{v_i + v_j}{2},$$

$$R_j = R_i \exp\!\big((\bar{\omega}\,\Delta t)^{\wedge}\big), \qquad p_j = p_i + R_i\,\bar{v}\,\Delta t,$$

where $\omega_i$ and $\omega_j$ denote the angular-velocity measurements of the wheel speed meter at the previous and current moments, $v_i$ and $v_j$ the wheel speed measurements at the previous and current moments, $R_i$ and $p_i$ the camera orientation and position at the previous moment, $R_j$ and $p_j$ the camera orientation and position at the current moment, and $\Delta t$ the time interval between the two moments.
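A minimal Python sketch of this constant-velocity prediction is given below. It is an illustration only: the function name, the vector-valued readings, and the use of SciPy's rotation utilities are assumptions rather than part of the patent, and the wheel-to-camera extrinsic calibration is assumed to have been applied to the readings already.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def predict_pose(R_i, p_i, w_i, w_j, v_i, v_j, dt):
    """Constant-velocity pose prediction from wheel speed meter readings.

    R_i: (3,3) camera orientation at the previous moment (world frame)
    p_i: (3,)  camera position at the previous moment
    w_i, w_j: (3,) angular-velocity readings at the previous/current moment
    v_i, v_j: (3,) linear-velocity readings at the previous/current moment
    dt: time interval between the two moments
    """
    w_avg = 0.5 * (w_i + w_j)                          # averaged angular velocity
    v_avg = 0.5 * (v_i + v_j)                          # averaged linear velocity
    dR = Rotation.from_rotvec(w_avg * dt).as_matrix()  # exp((w_avg * dt)^)
    R_j = R_i @ dR                                     # predicted orientation
    p_j = p_i + R_i @ (v_avg * dt)                     # predicted position
    return R_j, p_j
```

For a planar wheeled robot the same update reduces to a heading increment of $\bar{\omega}\,\Delta t$ and an advance of $\bar{v}\,\Delta t$ along the heading.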
And step three, removing the moving feature points. Corner points are extracted from the gray image at the previous moment, and their corresponding three-dimensional space coordinates are taken as the map points of the previous moment. The map points of the previous moment are projected onto the gray image at the current moment according to the predicted pose of the wheeled robot to serve as projection points; the corner points extracted from the gray image at the previous moment are tracked within a fixed-size region around the projection points by the LK optical flow method to obtain tracking points, and the reprojection error between each projection point and its tracking point is calculated. Threshold segmentation on the reprojection error then yields the valid feature points.
As shown in fig. 2, the specific procedure is as follows:
1) Project the map point from the previous moment onto the gray image at the current moment:

$$s\,x'_j = K\,R_j^{\top}\,(P_i - p_j),$$

where $P_i$ is the 3D position of the map point corresponding to the feature point $x_i$ at the previous moment, $x'_j$ is the projected pixel coordinate (in homogeneous form), $K$ is the camera intrinsic parameter matrix, and $s$ is the depth of the map point in the $j$-th frame. The LK optical flow method is then applied near $x'_j$ on the gray image at the current moment to track $x_i$, yielding the tracking point $x_j$.
2) Calculate the pixel distance $d$ between the projection point $x'_j$ and the tracking point $x_j$ as the reprojection error, and compare it with the preset pixel distance threshold $T_{\mathrm{set}}$: if $d < T_{\mathrm{set}}$, the tracking point is taken as a valid feature point; if $d \ge T_{\mathrm{set}}$, the tracking point is rejected as invalid (a dynamic feature point).
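Steps 1) and 2) can be sketched with OpenCV as follows; this is a minimal illustration under stated assumptions: the helper name, the 21-pixel LK window, and the 3-pixel default threshold are illustrative choices, not values specified by the patent.

```python
import cv2
import numpy as np

def detect_valid_features(gray_prev, gray_curr, pts_prev, map_pts,
                          K, R_j, p_j, t_set=3.0):
    """Keep feature points whose LK track agrees with the pose-predicted
    reprojection within t_set pixels (static); reject the rest (dynamic)."""
    # 1) Reproject previous-moment map points with the predicted pose:
    #    s * x'_j = K * R_j^T * (P_i - p_j)
    pts_cam = (R_j.T @ (map_pts - p_j).T).T      # map points in current camera frame
    proj = (K @ pts_cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]            # dehomogenize -> projected pixels x'_j

    # 2) LK optical flow, initialized at the reprojected positions so the
    #    search stays in a fixed-size region around each projection point.
    prev = pts_prev.astype(np.float32).reshape(-1, 1, 2)
    init = proj.astype(np.float32).reshape(-1, 1, 2).copy()
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(
        gray_prev, gray_curr, prev, init,
        winSize=(21, 21), flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    tracked = tracked.reshape(-1, 2)

    # 3) Reprojection error d = ||x_j - x'_j||; threshold segmentation.
    d = np.linalg.norm(tracked - proj, axis=1)
    valid = (status.ravel() == 1) & (d < t_set)  # d < T_set -> valid (static) point
    return tracked[valid], map_pts[valid]
```

Points on a moving object are carried along by the object, so their tracked position drifts away from the wheel-odometry reprojection and their distance $d$ exceeds the threshold.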
And step four, pose optimization. Pose optimization is performed over all the valid feature points, and the orientation and position of the camera at the current moment are obtained through a PnP algorithm.
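One plausible realization of this step (the patent specifies only "a PnP algorithm"; the RANSAC-based OpenCV solver below is an assumption):

```python
import cv2
import numpy as np

def optimize_pose(map_pts_valid, pix_valid, K):
    """Recover the current camera pose from the valid 3D-2D correspondences."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_pts_valid.astype(np.float64),   # 3D map points (world frame)
        pix_valid.astype(np.float64),       # tracked pixels in the current frame
        K.astype(np.float64), None)         # intrinsics; zero distortion assumed
    if not ok:
        raise RuntimeError("PnP failed: too few valid feature points")
    R_wc, _ = cv2.Rodrigues(rvec)           # world-to-camera rotation
    R_j = R_wc.T                            # camera orientation in the world frame
    p_j = -R_wc.T @ tvec.ravel()            # camera position in the world frame
    return R_j, p_j
```

The optimized pose $(R_j, p_j)$ then replaces the wheel-odometry prediction as the pose of the current moment.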
And step five, the second step to the fourth step are repeated for every two adjacent frames of images, eliminating the interference of dynamic feature points before the pose is computed, so that the pose at each moment is obtained; concatenating the poses of all moments yields the result, realizing visual odometry in the dynamic environment.
Therefore, the wheel speed meter commonly fitted to wheeled robots can be used to detect the dynamic feature points in the environment, retain the valid static feature points, and localize the robot more robustly.

Claims (3)

1. A dynamic environment-oriented wheel speed meter-assisted visual odometry method, characterized by comprising the following steps:
step one, images of adjacent moments shot by a depth camera are obtained
Fixedly mounting a depth camera on a wheeled robot, mounting wheel speed meters on wheels of the wheeled robot, acquiring gray images and depth images at the previous moment and the current moment, and reading readings of the wheel speed meters at the current moment on the wheeled robot;
step two, predicting the pose of the camera
Predicting the pose of the wheeled robot at the current moment from the wheel speed meter readings at the previous moment and the current moment under the uniform motion assumption;
step three, removing invalid feature points
Extracting feature points from the gray image at the previous moment and taking their corresponding three-dimensional space coordinates as the map points of the previous moment; projecting the map points of the previous moment onto the gray image at the current moment according to the predicted pose of the wheeled robot to serve as projection points; tracking the feature points extracted from the gray image at the previous moment within a fixed-size region around the projection points by an LK optical flow method to obtain tracking points, and calculating the reprojection error between each projection point and its tracking point; performing threshold segmentation on the reprojection error to obtain the valid feature points;
step four, pose optimization
Performing pose optimization over all the valid feature points, and obtaining the orientation and position of the camera at the current moment through a PnP algorithm;
and fifthly, repeating the second step to the fourth step for every two adjacent frames of images to obtain the pose at each moment, and concatenating the poses of all moments as the result, realizing visual odometry in the dynamic environment.
2. The dynamic environment-oriented wheel speed meter-assisted visual odometry method of claim 1, wherein: in the second step, the pose at the current moment is predicted with the following uniform-motion model:
$$\bar{\omega} = \frac{\omega_i + \omega_j}{2}, \qquad \bar{v} = \frac{v_i + v_j}{2},$$

$$R_j = R_i \exp\!\big((\bar{\omega}\,\Delta t)^{\wedge}\big), \qquad p_j = p_i + R_i\,\bar{v}\,\Delta t,$$

where $\omega_i$ and $\omega_j$ denote the angular-velocity measurements of the wheel speed meter at the previous and current moments, $v_i$ and $v_j$ the wheel speed measurements at the previous and current moments, $R_i$ and $p_i$ the camera orientation and position at the previous moment, $R_j$ and $p_j$ the camera orientation and position at the current moment, and $\Delta t$ the time interval between the two moments.
3. The dynamic environment-oriented wheel speed meter-assisted visual odometry method of claim 1, wherein: in the third step, the pixel distance $d$ between the projection point $x'_j$ and the tracking point $x_j$ is calculated as the reprojection error and compared with a preset pixel distance threshold $T_{\mathrm{set}}$: if $d < T_{\mathrm{set}}$, the tracking point is taken as a valid feature point; if $d \ge T_{\mathrm{set}}$, the tracking point is not taken as a valid feature point.
CN202010043797.9A 2020-01-15 2020-01-15 Dynamic environment-oriented wheel speed meter-assisted visual odometer method Active CN111260725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010043797.9A CN111260725B (en) 2020-01-15 2020-01-15 Dynamic environment-oriented wheel speed meter-assisted visual odometer method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010043797.9A CN111260725B (en) 2020-01-15 2020-01-15 Dynamic environment-oriented wheel speed meter-assisted visual odometer method

Publications (2)

Publication Number Publication Date
CN111260725A CN111260725A (en) 2020-06-09
CN111260725B true CN111260725B (en) 2022-04-19

Family

ID=70950651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010043797.9A Active CN111260725B (en) 2020-01-15 2020-01-15 Dynamic environment-oriented wheel speed meter-assisted visual odometer method

Country Status (1)

Country Link
CN (1) CN111260725B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950370B (en) * 2020-07-10 2022-08-26 重庆邮电大学 Dynamic environment offline visual milemeter expansion method
WO2022147655A1 (en) * 2021-01-05 2022-07-14 深圳市大疆创新科技有限公司 Positioning method and apparatus, spatial information acquisition method and apparatus, and photographing device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108242079A * 2017-12-30 2018-07-03 Beijing University of Technology VSLAM method based on multi-feature visual odometry and a graph optimization model
CN109387204A * 2018-09-26 2019-02-26 Northeastern University Simultaneous localization and mapping method for a mobile robot in an indoor dynamic environment
CN109816696A * 2019-02-01 2019-05-28 Xi'an Quanzhi Technology Co., Ltd. Robot localization and mapping method, computer device and computer-readable storage medium
CN110232308A * 2019-04-17 2019-09-13 Zhejiang University Following-robot gesture trajectory recognition method based on hand speed and trajectory distribution
CN110570449A * 2019-09-16 2019-12-13 University of Electronic Science and Technology of China Localization and mapping method based on millimeter-wave radar and visual SLAM

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10612929B2 (en) * 2017-10-17 2020-04-07 AI Incorporated Discovering and plotting the boundary of an enclosure


Non-Patent Citations (2)

Title
"Visual Odometry based on Stereo Image Sequences with RANSAC-based Outlier Rejection Scheme";Andreas Geiger.et al;《researchGate》;20100731;全文 *
"动态场景下基于运动物体检测的立体视觉里程计";林志林等;《光学学报》;20171130;第37卷(第11期);全文 *

Also Published As

Publication number Publication date
CN111260725A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN108986037B (en) Monocular vision odometer positioning method and positioning system based on semi-direct method
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN112734852B (en) Robot mapping method and device and computing equipment
CN110807809B (en) Light-weight monocular vision positioning method based on point-line characteristics and depth filter
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
US9888235B2 (en) Image processing method, particularly used in a vision-based localization of a device
KR101784183B1 (en) APPARATUS FOR RECOGNIZING LOCATION MOBILE ROBOT USING KEY POINT BASED ON ADoG AND METHOD THEREOF
CN110766785B (en) Real-time positioning and three-dimensional reconstruction device and method for underground pipeline
CN113012197B (en) Binocular vision odometer positioning method suitable for dynamic traffic scene
CN111260725B (en) Dynamic environment-oriented wheel speed meter-assisted visual odometer method
JP7173471B2 (en) 3D position estimation device and program
CN112541938A (en) Pedestrian speed measuring method, system, medium and computing device
CN114964236A (en) Mapping and vehicle positioning system and method for underground parking lot environment
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN115077519A (en) Positioning and mapping method and device based on template matching and laser inertial navigation loose coupling
CN115147344A (en) Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance
Qimin et al. A methodology of vehicle speed estimation based on optical flow
CN112862818B (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN111862146B (en) Target object positioning method and device
CN116804553A (en) Odometer system and method based on event camera/IMU/natural road sign
CN111696155A (en) Monocular vision-based multi-sensing fusion robot positioning method
CN111553342A (en) Visual positioning method and device, computer equipment and storage medium
CN115797490A (en) Drawing construction method and system based on laser vision fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant