CN107356252B - Indoor robot positioning method integrating visual odometer and physical odometer - Google Patents

Indoor robot positioning method integrating visual odometer and physical odometer

Info

Publication number
CN107356252B
Authority
CN
China
Prior art keywords
robot, odometer, physical, pose, step
Prior art date
2017-06-02
Application number
CN201710408258.9A
Other languages
Chinese (zh)
Other versions
CN107356252A (en)
Inventor
周唐恺
江济良
王运志
Original Assignee
青岛克路德机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2017-06-02
Filing date
2017-06-02
Publication date
2020-06-16
Application filed by 青岛克路德机器人有限公司
Priority to CN201710408258.9A
Publication of CN107356252A
Application granted
Publication of CN107356252B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention discloses an indoor robot positioning method that fuses a visual odometer with a physical odometer. A visual sensor is added to perform closed-loop detection while the robot operates in a known environment, eliminating the global accumulated error of the particle-filter-based physical odometer so that the odometer's global error accumulates only in stages; a closed map is constructed on this basis. By fusing the visual odometer, the disclosed method effectively solves the error-accumulation problem of the physical odometer, enables the robot to self-localize and accurately relocalize in a known environment, adds little computational load, guarantees efficiency and real-time performance, and meets indoor navigation requirements in accuracy, making it an effective method for the current problem of inaccurate robot positioning in large environments.

Description

Indoor robot positioning method integrating visual odometer and physical odometer

Technical Field

The invention relates to methods for accurate automatic positioning of indoor mobile robots, and in particular to an indoor robot positioning method that integrates a visual odometer and a physical odometer.

Background

In research on intelligent navigation technology for autonomous mobile robots, simultaneous localization and mapping (SLAM) in unknown environments is a key technology of both engineering and academic value, and it has been a research hotspot in the field for the past two decades. Following this trend, researchers have proposed a variety of methods for solving the SLAM problem and have applied many kinds of sensors to the environmental-perception problem within SLAM.

The problem SLAM technology must solve is selecting an appropriate sensor system to achieve real-time robot positioning. In practical applications, lidar-based sensing, with its high accuracy in both range and azimuth, is preferred, while infrared, ultrasonic, IMU, visual, and odometry sensors are also needed to assist positioning and improve accuracy. Multi-sensor fusion, however, has always been a technical difficulty in the SLAM field, and there is as yet essentially no SLAM method that fuses sensors effectively and has been commercialized. For indoor mobile robots, considering actual usage scenarios and the current state of development, adding a visual odometer on top of the lidar and the physical odometer to improve positioning accuracy is the optimal solution for indoor mobile robot SLAM at the real production stage.

With an improved Monte Carlo particle-filter positioning method based on a physical odometer, the prior art can handle a robot in a structurally simple, small-area indoor environment. However, because the physical odometer computes pose from the displacement increment between two time instants, it considers only local motion, so errors keep superposing and accumulating until the drift becomes too large to eliminate; the positioning error is especially large when the wheels slip or tilt.
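To make the drift mechanism concrete, below is a minimal dead-reckoning sketch for a differential-drive physical odometer. It is illustrative only, not taken from the patent: the wheel base, update step, and the injected 1% single-wheel error are assumptions chosen to show how purely local increments compound into unbounded global drift.

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Dead-reckon a differential-drive pose (x, y, theta) from one pair of
    wheel displacement increments. Each update uses only the local increment,
    so slip or calibration error is folded into the pose and never corrected."""
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # translation of the robot center
    d_theta = (d_right - d_left) / wheel_base  # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return (x, y, theta)

# A 1% systematic error on one wheel: heading error grows with every update,
# so the position estimate drifts without bound.
pose = (0.0, 0.0, 0.0)
for _ in range(1000):
    pose = integrate_odometry(pose, d_left=0.0101, d_right=0.0100, wheel_base=0.4)
print(pose)
```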

Disclosure of Invention

In view of the above, the invention provides an indoor robot positioning method fusing a visual odometer and a physical odometer, which accurately positions the robot by extracting ORB (Oriented FAST and Rotated BRIEF) features from collected images for image matching, camera pose estimation, and closed-loop detection.

An indoor robot positioning method integrating a visual odometer and a physical odometer comprises the following implementation steps:

step 1, acquiring color and depth images by using a camera;

step 2, extracting ORB features from two consecutive acquired images, computing a descriptor for each ORB feature point, and estimating the camera pose change through feature matching between adjacent images;

step 3, during the robot's movement, selecting, among adjacent frames, the image that shares the most feature points and matches best as a keyframe, while storing the robot trajectory and the laser data corresponding to each keyframe;

step 4, when the robot moves into a known environment, first searching an offline-trained BoW dictionary for feature points matching the current frame to relocalize the robot, then computing the robot's current pose through TF, and finally publishing the robot pose information for closed-loop-detection relocalization via the ROS message mechanism;

step 5, using an extended Kalman filter that subscribes to the closed-loop-detection visual odometer information and to the robot pose from AMCL particle-filter real-time positioning, optimizing to obtain an accurate real-time robot pose and thereby eliminating the global error accumulated by the physical odometer; each local closed-loop detection eliminates the accumulated odometer error, so the global error only ever accumulates in stages;

and step 6, finally, when the robot returns to the initial position, optimizing the whole motion trajectory and the poses of all keyframes through global closed-loop detection, and constructing a grid map from the stored laser data, completing the whole process of simultaneous localization and mapping.

Further, the step of estimating the camera pose change is: 1) combining the depth image to obtain depth information for the valid feature points; 2) matching feature points by their ORB descriptors and depth values, and eliminating mismatched pairs with the RANSAC algorithm; 3) solving the rotation matrix R and translation vector T between adjacent images to estimate the camera pose transformation.

Advantageous effects:

The invention adds a visual sensor to perform closed-loop detection while the robot operates in a known environment, eliminating the global accumulated error of the particle-filter-based physical odometer so that the odometer's global error accumulates only in stages, and constructs a closed map on this basis. Compared with traditional SLAM methods, the disclosed method, by fusing the visual odometer, effectively solves the error-accumulation problem of the physical odometer, enables the robot to self-localize and accurately relocalize in a known environment, adds little computational load, guarantees efficiency and real-time performance, and meets indoor navigation requirements in accuracy, making it an effective method for the current problem of inaccurate robot positioning in large environments.

Drawings

FIG. 1 is a flow chart of a fusion positioning method of the present invention;

FIG. 2 is a schematic diagram of the positioning process fusing the visual odometer and the physical odometer according to the present invention.

Detailed Description

The invention is described in detail below by way of example with reference to the accompanying drawings.

As shown in figs. 1 and 2, the present invention provides an indoor robot positioning method fusing a visual odometer and a physical odometer, implemented in the following steps:

step 1, acquiring color and depth images by using an ASUS Xtion depth camera;
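As one possible realization of this step, the sketch below subscribes to the RGB and depth streams under ROS. The node name, topic names (those typically published by the openni2 camera driver), and image encodings are assumptions, since the patent does not specify them:

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
latest = {}  # most recent color/depth pair, filled by the callbacks

def on_rgb(msg):
    latest["rgb"] = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

def on_depth(msg):
    # Depth encoding is driver-dependent (32-bit metres or 16-bit millimetres).
    latest["depth"] = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")

rospy.init_node("rgbd_grabber")
rospy.Subscriber("/camera/rgb/image_raw", Image, on_rgb)
rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
rospy.spin()
```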

step 2, extracting ORB features from two consecutive acquired images, computing a descriptor for each ORB feature point, and estimating the camera pose change through feature matching between adjacent images: 1) combining the depth image to obtain depth information for the valid feature points; 2) matching feature points by their ORB descriptors and depth values, and eliminating mismatched pairs with the RANSAC algorithm; 3) solving the rotation matrix R and translation vector T between adjacent images to estimate the camera pose transformation;
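A hedged OpenCV sketch of this step follows; the patent names no concrete APIs, so ORB via cv2.ORB_create, brute-force Hamming matching, and RANSAC via cv2.solvePnPRansac over 3D-2D correspondences (3D points back-projected from the first frame's depth) are one common realization, and the intrinsics matrix K is an assumed input:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_motion(gray1, depth1, gray2, K):
    """Estimate frame-to-frame camera motion: ORB matching, then RANSAC PnP
    on 3D points back-projected from the first frame's depth (in metres)."""
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    matches = matcher.match(des1, des2)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = float(depth1[int(v), int(u)])
        if z <= 0.0 or np.isnan(z):
            continue  # keep only feature points with valid depth
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts2d.append(kp2[m.trainIdx].pt)

    # RANSAC rejects mismatched pairs; rvec/tvec encode the relative pose.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts2d), K.astype(np.float64), None)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix R; tvec is the translation T
    return R, tvec, inliers
```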

step 3, during the robot's movement, selecting, among adjacent frames, the image that shares the most feature points and matches best as a keyframe, while storing the robot trajectory and the laser data corresponding to each keyframe;
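A minimal sketch of one keyframe-selection rule consistent with this step; the candidate tuple layout and the tie-break on mean descriptor distance are assumptions, not specified by the patent:

```python
def pick_keyframe(candidates):
    """candidates: list of (frame_id, n_shared_points, mean_match_distance).
    Prefer the frame sharing the most feature points with its neighbours,
    breaking ties by the best (lowest) mean descriptor distance."""
    best = max(candidates, key=lambda c: (c[1], -c[2]))
    return best[0]

# Example: frame 7 shares the most points and matches best, so it becomes
# the next keyframe; its trajectory point and laser scan would be stored.
print(pick_keyframe([(6, 140, 31.2), (7, 185, 27.9), (8, 150, 29.4)]))
```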

step 4, when the robot moves into a known environment, first searching an offline-trained BoW dictionary for feature points matching the current frame to relocalize the robot, then computing the robot's current pose through TF, and finally publishing the robot pose information for closed-loop-detection relocalization via the ROS message mechanism;
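The sketch below illustrates the publishing half of this step under common ROS 1 conventions: after a BoW hit, the relocalized pose is read from the TF tree and republished as nav_msgs/Odometry on /vo, the topic the robot_pose_ekf package conventionally consumes. The node, frame, and topic names are assumptions:

```python
import rospy
import tf
from nav_msgs.msg import Odometry

rospy.init_node("vo_relocalizer")
pub = rospy.Publisher("/vo", Odometry, queue_size=10)
listener = tf.TransformListener()

def publish_relocalized_pose():
    """Read the latest map -> base_link transform and republish it as a
    visual-odometry measurement for the EKF fusion node."""
    trans, rot = listener.lookupTransform("map", "base_link", rospy.Time(0))
    msg = Odometry()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "map"
    msg.pose.pose.position.x, msg.pose.pose.position.y, msg.pose.pose.position.z = trans
    (msg.pose.pose.orientation.x, msg.pose.pose.orientation.y,
     msg.pose.pose.orientation.z, msg.pose.pose.orientation.w) = rot
    pub.publish(msg)
```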

step 5, using an extended Kalman filter that subscribes to the closed-loop-detection visual odometer information and to the robot pose from AMCL particle-filter real-time positioning, optimizing to obtain an accurate real-time robot pose and thereby eliminating the global error accumulated by the physical odometer; each local closed-loop detection eliminates the accumulated odometer error, so the global error only ever accumulates in stages;
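The patent does not give the filter equations, so the following is a minimal planar EKF sketch of the fusion idea only: predict with physical-odometer increments, correct with the visual-odometry pose whenever closed-loop detection fires. All noise values are illustrative assumptions:

```python
import numpy as np

class PoseEKF:
    """Minimal EKF over the planar pose [x, y, theta]."""

    def __init__(self):
        self.x = np.zeros(3)
        self.P = np.eye(3) * 0.1
        self.Q = np.diag([0.02, 0.02, 0.01])  # odometry process noise (assumed)
        self.R = np.diag([0.05, 0.05, 0.02])  # visual measurement noise (assumed)

    def predict(self, dx, dy, dtheta):
        """Propagate with a body-frame odometry increment (dx, dy, dtheta)."""
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dtheta])
        F = np.array([[1.0, 0.0, -s * dx - c * dy],
                      [0.0, 1.0,  c * dx - s * dy],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct with a direct pose observation z = [x, y, theta] (H = I)."""
        y = z - self.x
        y[2] = (y[2] + np.pi) % (2.0 * np.pi) - np.pi  # wrap heading residual
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K) @ self.P
```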

and step 6, finally, when the robot returns to the initial position, optimizing the whole motion trajectory and the poses of all keyframes through global closed-loop detection, and constructing a grid map from the stored laser data, completing the whole process of simultaneous localization and mapping.
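Finally, a simplified sketch of stamping the stored laser scans into a grid map; the grid geometry, the log-odds increment, and the omission of free-space ray tracing between robot and hit cell are all simplifying assumptions:

```python
import numpy as np

def stamp_scan(grid, pose, ranges, angles, resolution=0.05, max_range=10.0):
    """Mark the endpoint of each laser beam as occupied in a log-odds grid.
    pose is the optimized keyframe pose (x, y, theta) in map coordinates,
    with the grid origin assumed at map coordinate (0, 0)."""
    x, y, theta = pose
    h, w = grid.shape
    for r, a in zip(ranges, angles):
        if not np.isfinite(r) or r >= max_range:
            continue  # skip invalid and out-of-range returns
        gx = int((x + r * np.cos(theta + a)) / resolution)
        gy = int((y + r * np.sin(theta + a)) / resolution)
        if 0 <= gx < w and 0 <= gy < h:
            grid[gy, gx] += 0.85  # log-odds increment for an occupied cell
    return grid
```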

The closed map constructed by the method has an actual width of 86.4 m and a height of 38.4 m.

In summary, the above is only a preferred embodiment of the present invention and is not intended to limit its protection scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (1)

1. An indoor robot positioning method integrating a visual odometer and a physical odometer is characterized by comprising the following implementation steps:
step 1, acquiring color and depth images by using a camera;
step 2, extracting ORB features from two consecutive acquired images, computing a descriptor for each ORB feature point, and estimating the camera pose change through feature matching between adjacent images, wherein the step of estimating the camera pose change is: 1) combining the depth image to obtain depth information for the valid feature points; 2) matching feature points by their ORB descriptors and depth values, and eliminating mismatched pairs with the RANSAC algorithm; 3) solving the rotation matrix R and translation vector T between adjacent images to estimate the camera pose transformation;
step 3, during the robot's movement, selecting, among adjacent frames, the image that shares the most feature points and matches best as a keyframe, while storing the robot trajectory and the laser data corresponding to each keyframe;
step 4, when the robot moves into a known environment, first searching an offline-trained BoW dictionary for feature points matching the current frame to relocalize the robot, then computing the robot's current pose through TF, and finally publishing the robot pose information for closed-loop-detection relocalization via the ROS message mechanism;
step 5, using an extended Kalman filter that subscribes to the closed-loop-detection visual odometer information and to the robot pose from AMCL particle-filter real-time positioning, optimizing to obtain an accurate real-time robot pose and thereby eliminating the global error accumulated by the physical odometer, wherein each local closed-loop detection eliminates the accumulated odometer error, so the global error only ever accumulates in stages;
and step 6, finally, when the robot returns to the initial position, optimizing the whole motion trajectory and the poses of all keyframes through global closed-loop detection, and constructing a grid map from the stored laser data, completing the whole process of simultaneous localization and mapping.

Priority Applications (1)

Application Number: CN201710408258.9A
Priority Date: 2017-06-02
Filing Date: 2017-06-02
Title: Indoor robot positioning method integrating visual odometer and physical odometer


Publications (2)

CN107356252A, published 2017-11-17
CN107356252B, granted 2020-06-16

Family

ID=60271649

Family Applications (1)

Application Number: CN201710408258.9A (granted as CN107356252B)
Priority Date / Filing Date: 2017-06-02
Title: Indoor robot positioning method integrating visual odometer and physical odometer

Country Status (1)

CN: CN107356252B

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092264A * 2017-06-21 2017-08-25 北京理工大学 Autonomous navigation and automatic recharging method for a service robot in a bank hall environment
CN108247647A * 2018-01-24 2018-07-06 速感科技(北京)有限公司 A cleaning robot
CN110360999A * 2018-03-26 2019-10-22 京东方科技集团股份有限公司 Indoor positioning method, indoor positioning system, and computer-readable medium
CN108931245A * 2018-08-02 2018-12-04 上海思岚科技有限公司 Local self-positioning method and device for a mobile robot


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9148650B2 (en) * 2012-09-17 2015-09-29 Nec Laboratories America, Inc. Real-time monocular visual odometry

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105045263A * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN105953785A * 2016-04-15 2016-09-21 青岛克路德机器人有限公司 Map representation method for robot indoor autonomous navigation
CN106052674A * 2016-05-20 2016-10-26 青岛克路德机器人有限公司 Indoor robot SLAM method and system
CN106780699A * 2017-01-09 2017-05-31 东南大学 Visual SLAM method assisted by SINS/GPS and odometer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A HIGH EFFICIENT 3D SLAM ALGORITHM BASED ON PCA; 施尚杰 et al.; The 6th Annual IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER); 2016-03-30; pp. 109-114 *

Also Published As

CN107356252A, published 2017-11-17


Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant