CN111141274A - Robot automatic positioning and navigation method based on computer vision - Google Patents

Robot automatic positioning and navigation method based on computer vision

Info

Publication number
CN111141274A
CN111141274A
Authority
CN
China
Prior art keywords
robot
image feature
grid
image
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911340542.2A
Other languages
Chinese (zh)
Inventor
赵玺
骆新
王宁
姚威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shineon Technology Co ltd
Original Assignee
Shineon Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shineon Technology Co ltd filed Critical Shineon Technology Co ltd
Priority to CN201911340542.2A priority Critical patent/CN111141274A/en
Publication of CN111141274A publication Critical patent/CN111141274A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Abstract

The invention discloses a computer-vision-based robot automatic positioning and navigation method comprising the following steps: S1, preprocessing: initializing the system and calibrating the sensors; S2, image feature acquisition: while the robot system operates, capturing images at fixed distance intervals and extracting image features, storing only the features without retaining the original images; S3, image feature comparison: the robot system determines its own coordinates by capturing a picture of the environment and comparing it with the stored image feature data set. The method yields a system that builds a map for the robot and determines the robot's position through computer vision, so the robot system can locate itself precisely by visual means, enabling the robot to provide more functions.

Description

Robot automatic positioning and navigation method based on computer vision
Technical Field
The invention relates to the technical field of computers, in particular to a robot automatic positioning and navigation method based on computer vision.
Background
For robots with automatic processing capabilities, such as sweeping robots and attendant robots, mapping and navigation largely determine the degree of robot intelligence. In conventional solutions, navigation is usually achieved with magnetic tracks or high-contrast sticker tracks, or with algorithms combined with on-board sensors so that the robot itself can map and navigate the environment.
When a robot builds a map and navigates through its own sensors in the traditional way, the map can be completed, but for lack of effective reference information the robot cannot accurately describe its surroundings. Navigation therefore fails: the robot may become trapped in a region, and when it navigates a second time with the existing map it struggles to determine its own position, invalidating the original map. Using computer vision for navigation and positioning lets the robot acquire its position more accurately.
Disclosure of Invention
The purpose of the invention is realized by the following technical scheme.
According to a first aspect of the present invention, there is provided a computer vision based robot automatic positioning and navigation method, comprising the steps of:
S1, preprocessing: initializing the system and calibrating the sensors;
S2, image feature acquisition: while the robot system operates, capturing images at fixed distance intervals and extracting image features, storing only the features without retaining the original images;
S3, image feature comparison: the robot system determines its own coordinates by capturing a picture of the environment and comparing it with the stored image feature data set.
Further, the preprocessing step S1 further includes:
S11, sensor calibration step: calibrating the electronic compass sensor and the accelerometer by rotating in place;
S12, map initialization step: using the electronic compass sensor, rotating the body toward a chosen direction taken as the positive Y axis, taking the body's position as the origin, and establishing a planar coordinate system as the map.
Further, the image feature acquiring step S2 further includes:
S21, grid initialization step: constructing a grid map over the map at 1-meter intervals;
S22, grid filling step: filling each grid cell with pictures collected in four directions: front, back, left, and right;
S23, image feature storage step: computing and storing the feature values of all pictures on the grid.
Further, the feature values of the pictures use BRISK features (Binary Robust Invariant Scalable Keypoints), binary keypoint features that are robust to deformation.
Further, the image feature comparison step S3 further includes:
S31, grid screening step: keeping only the grids of the direction determined by the electronic compass sensor;
S32, feature extraction step: capturing a picture in the direction the robot system faces and extracting its features;
S33, feature similarity calculation step: computing the similarity between the extracted features and all screened grid features, and outputting the grid position with the highest similarity.
According to a second aspect of the present invention, there is provided a computer-vision-based robot automatic positioning and navigation method, characterized by comprising the following steps:
B1. initializing, calibrating the sensors, and initializing the map;
B2. initializing the grid;
B3. the robot starts moving;
B4. judging whether the current grid cell is empty: if so, proceed to B5; otherwise jump to B3;
B5. capturing images, extracting features, and storing them in the current grid cell;
B6. judging whether a stop command has been received: if so, proceed to B7; otherwise jump to B3;
B7. finish;
B8. calibrating the sensors;
B9. capturing images in four directions and extracting features;
B10. judging whether the acquired features match the stored image feature set: if so, self-positioning succeeds and the process ends; otherwise jump to B3.
Further, the acquired features are judged to match the stored image feature set when at least two directions match.
According to a third aspect of the present invention, a computer-vision-based robot automatic positioning and navigation system comprises:
a preprocessing module for initializing the system and calibrating the sensors;
an image feature acquisition module for capturing images at fixed distance intervals and extracting image features while the robot system operates, storing only the features without retaining the original images;
and an image feature comparison module for enabling the robot system to determine its own coordinates by capturing the environment picture and comparing it with the stored image feature data set.
The invention has the advantages that the method yields a system that builds a map for the robot and determines the robot's position through computer vision, so the robot system can locate itself precisely by visual means, enabling the robot to provide more functions.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a method for automatic positioning and navigation of a robot based on computer vision according to an embodiment of the invention;
FIG. 2 shows a flow chart of pre-processing steps according to an embodiment of the invention;
FIG. 3 shows a flow chart of image feature acquisition steps according to an embodiment of the invention;
FIG. 4 is a flow chart illustrating image feature comparison steps according to an embodiment of the present invention;
FIG. 5 illustrates a detailed flow chart of a method for automatic positioning and navigation of a robot according to an embodiment of the present invention;
fig. 6 is a diagram illustrating a computer vision based robotic automatic positioning and navigation system according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
When computer vision alone is used for positioning and navigation, the large data volume makes the algorithm design overly complicated and extremely inefficient, so the intended effect cannot be achieved in practice. Adopting the conventional algorithm by which a robot builds its map automatically, and adding computer vision as auxiliary positioning, reduces the algorithm's complexity and improves its operating efficiency.
As shown in fig. 1, the present invention provides a robot automatic positioning and navigation method based on computer vision, comprising the following steps:
S1, preprocessing: initializing the system, calibrating the sensors, and the like;
S2, image feature acquisition: while the robot system operates, capturing images at fixed distance intervals and extracting image features, storing only the features without retaining the original images;
S3, image feature comparison: the robot system determines its own coordinates by capturing a picture of the environment and comparing it with the stored image feature data set.
As shown in fig. 2, the preprocessing step S1 further includes:
S11, sensor calibration step: calibrating sensors such as the electronic compass and the accelerometer by rotating in place;
S12, map initialization step: using the electronic compass sensor, rotating the body toward a chosen direction, for example due north, taking that direction as the positive Y axis and the body's position as the origin, and establishing a planar coordinate system as the map.
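As a minimal sketch of step S12 (all names here are illustrative; the patent gives no implementation), the map frame can be modeled as a planar coordinate system whose positive Y axis is the compass direction the body was rotated to, with the starting position as the origin:

```python
import math

def init_map(compass_heading_deg):
    """Initialize the planar map frame of step S12.

    The body is rotated until it faces a chosen reference direction
    (e.g. compass north, 0 degrees); that direction becomes the positive
    Y axis and the body's current position becomes the origin (0, 0).
    Returns the frame rotation (radians) used to convert body-frame
    displacements into map coordinates.
    """
    return math.radians(compass_heading_deg)

def body_to_map(dx, dy, frame_rotation):
    """Rotate a displacement measured in the body frame into map coordinates."""
    c, s = math.cos(frame_rotation), math.sin(frame_rotation)
    return (c * dx + s * dy, -s * dx + c * dy)

# With north (0 deg) as the reference, driving 1 m straight ahead
# from the origin lands at map coordinate (0, 1).
rot = init_map(0.0)
x, y = body_to_map(0.0, 1.0, rot)
```

This is only the coordinate bookkeeping; the actual compass read-out and in-place rotation of S11 are hardware-dependent and not modeled here.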
As shown in fig. 3, the image feature acquiring step S2 further includes:
S21, grid initialization step: constructing a grid map over the map at 1-meter intervals;
S22, grid filling step: filling each grid cell with pictures collected in four directions: front, back, left, and right;
S23, image feature storage step: computing and storing the feature values of all pictures on the grid. The image features use BRISK (Binary Robust Invariant Scalable Keypoints), binary keypoint features that are robust to deformation.
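Steps S21-S23 can be sketched as a dictionary keyed by grid cell and direction. The `feature()` hash below is only a stand-in for the BRISK descriptor computation so the sketch stays self-contained (in practice, e.g., OpenCV's `cv2.BRISK_create()` can compute such binary descriptors); all names are illustrative, not from the patent:

```python
import hashlib

GRID_STEP = 1.0  # meters between grid cells (step S21)
DIRECTIONS = ("front", "back", "left", "right")  # step S22

def to_cell(x, y):
    """Snap a map coordinate (meters) to its 1-meter grid cell."""
    return (round(x / GRID_STEP), round(y / GRID_STEP))

def feature(image_bytes):
    """Stand-in for BRISK binary descriptors (step S23).

    A real implementation would compute BRISK keypoint descriptors;
    a hash digest stands in here so no vision library is required.
    """
    return hashlib.sha256(image_bytes).digest()

grid = {}  # (cell, direction) -> stored feature; original images are discarded

def fill_cell(cell, pictures):
    """Store only the features of the four directional pictures for one cell."""
    for direction in DIRECTIONS:
        grid[(cell, direction)] = feature(pictures[direction])

pics = {d: d.encode() for d in DIRECTIONS}  # dummy stand-in images
fill_cell(to_cell(2.4, -0.6), pics)
```

Storing only the fixed-size descriptors, as the patent specifies, keeps the map small regardless of image resolution.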
As shown in fig. 4, the image feature comparison step S3 further includes:
S31, grid screening step: keeping only the grids of the direction determined by the electronic compass;
S32, feature extraction step: capturing a picture in the direction the robot system faces and extracting its features;
S33, feature similarity calculation step: computing the similarity between the extracted features and all screened grid features, and outputting the grid position with the highest similarity.
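Steps S31-S33 reduce to screening stored features by compass direction and ranking them by similarity. For binary descriptors such as BRISK, similarity is conventionally measured by Hamming distance (smaller is more similar); this sketch uses hypothetical names and plain byte strings in place of real descriptors:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def locate(query_feature, store, facing):
    """Return the grid cell whose stored feature, in the faced direction,
    is most similar to the query (steps S31-S33)."""
    # S31: only features stored for the compass-determined direction survive.
    candidates = {cell: feat for (cell, direction), feat in store.items()
                  if direction == facing}
    # S33: the cell with the smallest Hamming distance (highest similarity) wins.
    return min(candidates, key=lambda c: hamming(query_feature, candidates[c]))

store = {((0, 0), "front"): b"\x00\x0f",
         ((1, 0), "front"): b"\xff\xff",
         ((1, 0), "back"):  b"\x00\x0f"}
best = locate(b"\x00\x0e", store, "front")  # nearest match is cell (0, 0)
```

Note the `back` entry of cell (1, 0) is identical to the best match but is excluded by the compass screening, which is exactly what keeps the comparison cheap.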
fig. 5 shows a detailed flowchart of an automatic robot positioning and navigating method according to an embodiment of the present invention, including:
the map construction process comprises the following steps:
B1. initializing and calibrating a sensor, and initializing a map;
B2. initializing the grid;
B3. the robot starts moving;
B4. judging whether the current grid cell is empty: if so, proceed to B5; otherwise jump to B3;
B5. capturing images, extracting features, and storing them in the current grid cell;
B6. judging whether a stop command has been received: if so, proceed to B7; otherwise jump to B3;
B7. finish.
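The map-construction loop B3-B7 above can be simulated with a scripted path: a cell is filled only when it is empty (B4), so revisited cells do not overwrite stored features. Function and variable names are illustrative, not from the patent:

```python
def build_map(path_cells, capture, stop_after):
    """Simulate the mapping loop B3-B7.

    path_cells: grid cells the robot passes through, in order (B3).
    capture:    callable returning the feature to store for a cell (B5).
    stop_after: number of moves before the stop command arrives (B6).
    """
    grid = {}
    for step, cell in enumerate(path_cells):
        if cell not in grid:              # B4: only fill empty cells
            grid[cell] = capture(cell)    # B5: store features, not images
        if step + 1 >= stop_after:        # B6: stop command received
            break                         # B7: finish
    return grid

path = [(0, 0), (1, 0), (1, 0), (2, 0)]   # the middle cell is revisited
built = build_map(path, capture=lambda c: f"feat{c}", stop_after=4)
```

The revisited cell (1, 0) is stored once, so `built` holds three cells even though the path has four steps.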
Self-positioning process:
B8. calibrating the sensors;
B9. capturing images in four directions and extracting features;
B10. judging whether the acquired features match the stored feature set: if so, self-positioning succeeds and the process ends; otherwise jump to B3. The acquired features are judged to match the stored image feature set when at least two directions match.
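The B10 decision — "consistent" meaning at least two of the four directional captures agree with the stored feature set — can be sketched as follows (hypothetical names; a real system would compare binary descriptors against a distance threshold rather than with equality):

```python
DIRECTIONS = ("front", "back", "left", "right")

def localized(acquired, stored, threshold=2):
    """B10: self-positioning succeeds when at least `threshold` of the
    four directional features agree with the stored feature set."""
    matches = sum(1 for d in DIRECTIONS if acquired.get(d) == stored.get(d))
    return matches >= threshold

stored   = {"front": "A", "back": "B", "left": "C", "right": "D"}
acquired = {"front": "A", "back": "X", "left": "C", "right": "Y"}
ok = localized(acquired, stored)  # front and left match, so positioning succeeds
```

Requiring two of four directions tolerates occlusion or scene change in any single view while still rejecting spurious single-direction coincidences.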
By this method, a system is obtained that builds a map for the robot and determines the robot's position through computer vision; the robot system can locate itself precisely by visual means, enabling the robot to provide more functions.
As shown in fig. 6, the present invention also discloses a robot automatic positioning and navigation system 100 based on computer vision, comprising:
The preprocessing module 101: responsible for initializing the system, calibrating the sensors, and the like;
The image feature acquisition module 102: responsible for capturing images at fixed distance intervals and extracting image features while the robot system operates, storing only the features without retaining the original images;
The image feature comparison module 103: responsible for capturing the environment picture and comparing it with the stored image feature data set so that the robot system can determine its own coordinates.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A computer-vision-based robot automatic positioning and navigation method, characterized by comprising the following steps:
S1, preprocessing: initializing the system and calibrating the sensors;
S2, image feature acquisition: while the robot system operates, capturing images at fixed distance intervals and extracting image features, storing only the features without retaining the original images;
S3, image feature comparison: the robot system determines its own coordinates by capturing a picture of the environment and comparing it with the stored image feature data set.
2. The method of claim 1, wherein the preprocessing step S1 further comprises:
S11, sensor calibration step: calibrating the electronic compass sensor and the accelerometer by rotating in place;
S12, map initialization step: using the electronic compass sensor, rotating the body toward a chosen direction taken as the positive Y axis, taking the body's position as the origin, and establishing a planar coordinate system as the map.
3. The method of claim 2, wherein the image feature acquisition step S2 further comprises:
S21, grid initialization step: constructing a grid map over the map at 1-meter intervals;
S22, grid filling step: filling each grid cell with pictures collected in four directions: front, back, left, and right;
S23, image feature storage step: computing and storing the feature values of all pictures on the grid.
4. The method of claim 3, wherein the feature values of the pictures use BRISK features (Binary Robust Invariant Scalable Keypoints), binary keypoint features that are robust to deformation.
5. The method of claim 3 or 4, wherein the image feature comparison step S3 further comprises:
S31, grid screening step: keeping only the grids of the direction determined by the electronic compass sensor;
S32, feature extraction step: capturing a picture in the direction the robot system faces and extracting its features;
S33, feature similarity calculation step: computing the similarity between the extracted features and all screened grid features, and outputting the grid position with the highest similarity.
6. A computer-vision-based robot automatic positioning and navigation method, characterized by comprising the following steps:
B1. initializing, calibrating the sensors, and initializing the map;
B2. initializing the grid;
B3. the robot starts moving;
B4. judging whether the current grid cell is empty: if so, proceed to B5; otherwise jump to B3;
B5. capturing images, extracting features, and storing them in the current grid cell;
B6. judging whether a stop command has been received: if so, proceed to B7; otherwise jump to B3;
B7. finish;
B8. calibrating the sensors;
B9. capturing images in four directions and extracting features;
B10. judging whether the acquired features match the stored image feature set: if so, self-positioning succeeds and the process ends; otherwise jump to B3.
7. The method of claim 6, wherein the acquired features are judged to match the stored image feature set when at least two directions match.
8. A computer-vision-based robot automatic positioning and navigation system, comprising:
a preprocessing module for initializing the system and calibrating the sensors;
an image feature acquisition module for capturing images at fixed distance intervals and extracting image features while the robot system operates, storing only the features without retaining the original images;
and an image feature comparison module for enabling the robot system to determine its own coordinates by capturing the environment picture and comparing it with the stored image feature data set.
CN201911340542.2A 2019-12-23 2019-12-23 Robot automatic positioning and navigation method based on computer vision Pending CN111141274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911340542.2A CN111141274A (en) 2019-12-23 2019-12-23 Robot automatic positioning and navigation method based on computer vision


Publications (1)

Publication Number Publication Date
CN111141274A (en) 2020-05-12

Family

ID=70519339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911340542.2A Pending CN111141274A (en) 2019-12-23 2019-12-23 Robot automatic positioning and navigation method based on computer vision

Country Status (1)

Country Link
CN (1) CN111141274A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114199261A (en) * 2021-12-01 2022-03-18 广东开放大学(广东理工职业学院) Aruco code-based mobile robot visual positioning and navigation method

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101105398A (en) * 2006-07-11 2008-01-16 天津大学 Device for implementing rough north-seeking of gyroscope using electronic compass
CN102682448A (en) * 2012-03-14 2012-09-19 浙江大学 Stereo vision rapid navigation and positioning method based on double trifocal tensors
CN104866873A (en) * 2015-04-10 2015-08-26 长安大学 Indoor positioning method based on mobile phone image matching
CN105300375A (en) * 2015-09-29 2016-02-03 塔米智能科技(北京)有限公司 Robot indoor positioning and navigation method based on single vision
US20180350093A1 (en) * 2017-05-30 2018-12-06 Hand Held Products, Inc. Systems and methods for determining a location of a user when using an imaging device in an indoor facility
CN109658445A (en) * 2018-12-14 2019-04-19 北京旷视科技有限公司 Network training method, increment build drawing method, localization method, device and equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200512