CN107421540B - Mobile robot navigation method and system based on vision - Google Patents

Mobile robot navigation method and system based on vision

Info

Publication number
CN107421540B
CN107421540B (application CN201710311501.5A)
Authority
CN
China
Prior art keywords
image
mobile robot
vision
identifier
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710311501.5A
Other languages
Chinese (zh)
Other versions
CN107421540A (en)
Inventor
文生平
陈志鸿
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201710311501.5A
Publication of CN107421540A
Application granted
Publication of CN107421540B
Legal status: Active

Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Abstract

The invention discloses a vision-based mobile robot navigation method and system comprising a vision sensor, an image processor, a motion control module and a mobile robot car body. The vision sensor collects scene images, the image processor processes and analyzes the images, and the motion control module issues instructions to control the operation of the robot. A lane line is laid in advance in the mobile robot's working environment, and identifiers are set at specific positions. The mobile robot acquires a scene image of the environment and, through perspective transformation, obtains an image in orthographic projection with the ground. The image is converted into a binary image by graying and thresholding, and the binary image is then segmented into an image containing only the identifier and an image containing only the lane line. An approximate lane trajectory line is found and tracked on the lane-line image, and the identifier is recognized to obtain position and acceleration information, realizing accurate navigation of the mobile robot.

Description

Mobile robot navigation method and system based on vision
Technical Field
The invention relates to the field of robot navigation, and in particular to a vision-based mobile robot navigation method and system.
Background
Mobile robots can effectively streamline the logistics process in a production workshop, reduce enterprise labor costs and improve production efficiency, and are in wide demand in industries such as automobiles, food, printing and material transportation.
A key technology for mobile robots is navigation, and there are various navigation modes, such as inertial navigation, tape navigation, laser navigation and visual navigation. Each mode has its own characteristics and determines the flexibility and cost of the resulting logistics system. Inertial navigation relies on gyroscopes and photoelectric encoders, which are easily disturbed, and its control performance is relatively poor; tape navigation requires tape to be laid in advance, so maintenance and modification costs are high; laser navigation requires additional reflectors, and laser sensors are expensive to build and maintain. A visually navigated mobile robot therefore has high practical value and broad application prospects.
The environment in which a mobile robot operates is usually complex, and because the robot is in motion, the images it generates must be processed in real time. Building an accurate environment map therefore involves a huge volume of data, places high demands on hardware, is costly and is difficult to implement. Existing visual navigation methods based on two-dimensional codes and guide tape are simple and fast, but in practical scenarios the two-dimensional code is easily obscured by dust or dirt, which lowers the recognition success rate; the effectiveness of such methods therefore remains to be improved.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art by providing a vision-based mobile robot navigation method and system that is simple, reliable, and offers good real-time performance and controllability.
The invention is realized by the following technical scheme:
a vision-based mobile robotic navigation system comprising the following components:
a visual sensor;
an image processor;
a motion control module;
a mobile robot body;
the visual sensor, the image processor and the motion control module are all carried by the mobile robot body; the vision sensor is responsible for collecting scene images in real time, the image processor processes and analyzes the collected images, and the motion control module sends out instructions to control the operation of the robot according to analysis results.
The vision sensor is a USB camera mounted centrally at the front end of the mobile robot car body.
The USB camera is provided with a matching LED light source.
The image processor is an ARM Cortex-A9.
A navigation method of a vision-based mobile robot navigation system, comprising the steps of:
step one: the vision sensor collects an environment image of the ground in front of the moving robot car body in the process of traveling;
a dark band is preset on the light-colored ground in the scene of the environment image to serve as the lane line, and a plurality of identifiers are arranged at preset positions along the band; each identifier is surrounded by a black (or dark) ring and carries position information and speed change information for the mobile robot body;
step two: the image obtained in step one is subjected to perspective transformation by the image processor, converting it into an image in which the camera of the vision sensor is in orthographic projection with the ground;
step three: carrying out noise reduction treatment on the image obtained in the second step and converting the image into a gray image;
and thresholding the image to obtain a binary image, then segmenting the binary image to obtain an ROI image containing only the identifier and an ROI image containing only the lane line.
Two discontinuous processing areas are then designated on the obtained lane-line ROI image; all pixels of the set pixel rows are traversed, the midpoint of the lane line on the preset pixel row in each area is readily found, and the two midpoints are connected to obtain a trajectory line approximating the lane line, which is tracked to realize navigation;
from the identifier ROI image, position information and acceleration change information are obtained by recognizing the predetermined identifier in the image.
The first step further comprises the following steps:
step a: firstly, determining the placement position and the posture of a camera of a vision sensor, and adjusting the brightness of a matched light source of the camera according to an actual scene;
step b: according to the current adjusted light source brightness, adjusting parameters of the visual sensor to enable the visual sensor to acquire clear road images;
step c: and collecting continuous clear environmental images in real time, and transmitting the collected images to an image processor for further image processing.
Compared with the prior art, the invention has the following advantages and effects:
the invention is responsible for collecting scene images by the vision sensor, the image processor processes and analyzes the images, and the motion control module sends out instructions to control the operation of the robot. Presetting a lane line in a mobile robot work environment, and setting an identifier at a specific position; the mobile robot acquires a scene image in the environment, and acquires an image which is orthographic projected with the ground through perspective transformation; the obtained image is converted into a binary image through graying and thresholding, and then the binary image is segmented to obtain an image only containing identifiers and an image only containing lane lines; and finding out and tracking an approximate lane track line on an image only containing the lane line, identifying an identifier, and obtaining position and acceleration information to realize a navigation function. The invention can remarkably simplify the control and tracking processes of the mobile robot and realize the accurate navigation of the mobile robot.
Drawings
Fig. 1 is a flow chart of a navigation method of a vision-based mobile robot navigation system.
Fig. 2 is a schematic illustration of calibration of a camera perspective transformation.
Fig. 3 is a lane line and an identifier.
Fig. 4 is a thresholded binary image.
Fig. 5 is a schematic view of a segmented lane line ROI image.
Fig. 6 is a schematic view of a segmented identifier ROI image.
Fig. 7 is a schematic view of the image processing region of the segmented lane line ROI.
Fig. 8 is an approximate lane trajectory obtained by image processing.
Fig. 9 is a schematic diagram of a vision-based mobile robotic navigation system.
Detailed Description
The present invention will be described in further detail with reference to specific examples.
As shown in fig. 1, step S100: a scene image of the mobile robot in the preset environment is acquired, and the acquired image is converted by perspective transformation into an image in which the camera is in orthographic projection with the ground, as shown in fig. 2. Lane lines are pre-laid on the road in the preset environment, a plurality of identifiers are arranged at specific positions on the lane lines, and each identifier has a circular outline in high contrast with its background, as shown in fig. 3.
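The perspective transformation of step S100 can be sketched with a few lines of code. The 3×3 homography `H_demo` below is a hypothetical placeholder, since the patent gives no calibration values; in practice the matrix would be estimated from ground marker correspondences such as those illustrated in fig. 2. The sketch only shows how one pixel is mapped through such a matrix:

```python
def warp_point(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists).

    The division by w is what makes the mapping projective rather than
    affine; for a bird's-eye view, H is chosen so that the ground-plane
    trapezoid seen by the tilted camera maps back to a rectangle.
    """
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# Hypothetical homography: a pure translation by (4, -2), for demonstration only.
H_demo = [[1, 0, 4],
          [0, 1, -2],
          [0, 0, 1]]
```

Applied to every pixel of the captured frame (with interpolation), such a mapping produces the orthographic ground-plane image used in the later steps.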
In this embodiment, the width of the lane line pre-laid on the road is 3 cm, the radius of the outer circle of the identifier ring outline is 10 cm, and the inner diameter of the ring outline is 6 cm.
The step S100 further includes the following sub-steps:
step S101: and determining the placing position and the pose of the camera and adjusting the brightness of a matched light source of the camera according to the actual scene.
In this embodiment, the camera is a USB camera placed 30 cm above the ground, with its axis at 45 degrees to the horizontal ground.
Step S102: and adjusting camera parameters according to the current adjusted light source brightness so that clear road images can be acquired.
Step S103: and collecting continuous clear environmental images in real time, and transmitting the collected images to a microprocessor for further image processing.
The processor model in this embodiment is ARM Cortex-A9.
Step S200: the image obtained in step S100 is subjected to noise reduction processing and converted into a gray image, and then subjected to global binarization processing by thresholding, so that the image is finally converted into a binary image, as shown in fig. 4.
In this embodiment, the image obtained in step S100 is smoothed by mean filtering with a 3×3 kernel.
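The smoothing and binarization of step S200 can be illustrated with a minimal, stdlib-only sketch. Images are plain nested lists here, the threshold value and the border handling (edge pixels left unfiltered) are illustrative choices, and a real implementation would use an optimized library routine:

```python
def mean_filter_3x3(img):
    """3x3 mean (box) filter on a grayscale image; edge pixels are
    left unfiltered for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

def threshold(img, t):
    """Global binarization: 255 where the pixel exceeds t, else 0."""
    return [[255 if p > t else 0 for p in row] for row in img]

# A single bright pixel is averaged down to 90 // 9 = 10 by the box filter.
spot = [[0, 0, 0],
        [0, 90, 0],
        [0, 0, 0]]
```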
Step S300: the binary image obtained in step S200 is segmented to obtain an ROI image containing only lane lines and an ROI image containing only identifiers, as shown in fig. 5 and 6.
Regarding step S300, the following sub-steps are also included:
step S301, detecting the edges of the lane lines in the binary image obtained in step S200 through a Hough transform algorithm, and re-segmenting to obtain the ROI image only containing the lane lines.
In step S302, circles in the image are detected by hough circle transformation algorithm for the binary image obtained in step S200, and when the circle outline is detected to be included in the binary image, the image is segmented into ROI images containing only identifiers.
In this embodiment, the progressive probabilistic Hough transform is used for line detection, with the accumulator distance resolution set to 1 and the shortest accepted line length set to 3 cm, in order to shorten the detection time.
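As a rough, stdlib-only illustration of the voting scheme underlying this step (the progressive probabilistic variant used in the embodiment additionally samples edge points and tracks segment lengths, which is omitted here), every edge pixel votes for each (ρ, θ) line bin it could lie on:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180):
    """Standard Hough line voting: each edge point votes for every
    (rho, theta-index) bin consistent with it; the strongest bin is
    returned. rho is in pixels, theta spans [0, pi)."""
    votes = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, t)] += 1
    return votes.most_common(1)[0][0]

# A vertical lane edge at column x = 12.
edge = [(12, y) for y in range(30)]
```

A vertical edge at column 12 then yields the strongest bin at ρ = 12, θ-index 0, i.e. a line of the form x = 12.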
In this embodiment, the minimum circle diameter of the Hough circle transform is set to 9 cm and the maximum to 11 cm, in order to shorten the detection time.
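The circle detection can be sketched in the same spirit. The center-voting scheme below is a simplification of the Hough circle transform (practical implementations prune the vote space using edge gradients), and the synthetic ring and the 9 to 11 radius window are illustrative, merely echoing the embodiment's bounded search:

```python
import math
from collections import Counter

def hough_circle_centers(edge_points, r_min, r_max, n_angles=72):
    """Simplified Hough circle voting: every edge point votes for all
    candidate centers at distance r in [r_min, r_max] from it, and the
    bin with the most votes is returned as the detected center."""
    votes = Counter()
    for x, y in edge_points:
        for r in range(r_min, r_max + 1):
            for k in range(n_angles):
                a = 2 * math.pi * k / n_angles
                votes[(round(x - r * math.cos(a)),
                       round(y - r * math.sin(a)))] += 1
    return votes.most_common(1)[0][0]

# Synthetic identifier ring: 50 edge points of radius 10 around (50, 40).
ring = [(round(50 + 10 * math.cos(2 * math.pi * t / 50)),
         round(40 + 10 * math.sin(2 * math.pi * t / 50)))
        for t in range(50)]
```

Bounding the radius search, as the embodiment does, shrinks the vote space and is what makes detection fast enough for the real-time loop.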
Step S400: the ROI image containing only the lane lines and the ROI image containing only the identifiers obtained in step S300 are recognized, the lane line is tracked for navigation, and the current position information and acceleration information of the mobile robot are acquired.
Regarding step S400, the following sub-steps are also included:
step S401 of designating two discrete processing regions on the image for the ROI image containing only the lane lines obtained in step S301, as shown in fig. 7, traversing all pixels of the specific pixel row. Because the gray value on the lane line is 255 and the gray value outside the lane line is 0, the midpoint on the lane line of a specific pixel row in two processing areas is easy to find, the two points are connected to obtain a track line similar to the lane line, as shown in fig. 8, and the track line is tracked to realize navigation.
Step S402: for the ROI image containing only the identifier obtained in step S302, position information and acceleration change information are obtained by recognizing the specific identifier in the image.
The identifier in this embodiment achieves a high recognition rate and a good lane tracking effect.
As described above, the present invention can be preferably realized.
The embodiments of the present invention are not limited to the above examples; any other changes, modifications, substitutions, combinations and simplifications that do not depart from the spirit and principles of the invention are equivalent replacements and fall within the scope of the invention.

Claims (4)

1. A navigation method of a vision-based mobile robot navigation system, characterized in that the vision-based mobile robot navigation system comprises the following components:
a visual sensor;
an image processor;
a motion control module;
a mobile robot body;
the visual sensor, the image processor and the motion control module are all carried by the mobile robot body; the vision sensor is responsible for collecting scene images in real time, the image processor processes and analyzes the collected images, and the motion control module sends out instructions to control the operation of the robot according to analysis results;
the navigation method of the vision-based mobile robot navigation system comprises the following steps:
step one: the vision sensor collects an environment image of the ground in front of the moving robot car body in the process of traveling;
a dark band is preset on the light ground in the scene of the environment image to serve as a lane line, and a plurality of identifiers are arranged at preset positions of the band; the outside of the identifier is provided with a black circular ring; the identifier includes position information and speed change information of the mobile robot body;
step two: the image obtained in the first step is subjected to perspective transformation by an image processor, so that the image is converted into an image of which the camera of the vision sensor and the ground are in orthographic projection;
step three: carrying out noise reduction treatment on the image obtained in the second step and converting the image into a gray image;
thresholding the image to obtain a binary image and performing image segmentation to obtain an ROI image only containing identifiers and an ROI image only containing lane lines;
then, two discontinuous processing areas are designated on the obtained ROI image containing only the lane lines; all pixels of the set pixel rows are traversed, the midpoint of the lane line on the preset pixel row in each of the two processing areas is readily found, and the two midpoints are connected to obtain a trajectory line approximating the lane line, which is tracked to realize navigation;
the obtained ROI image only containing the identifier obtains position information and acceleration change information by identifying a preset identifier in the image;
the vision sensor is a USB camera mounted centrally at the front end of the mobile robot car body.
2. A method of navigating a vision based mobile robotic navigation system as claimed in claim 1, wherein: the first step further comprises the following steps:
step a: firstly, determining the placement position and the posture of a camera of a vision sensor, and adjusting the brightness of a matched light source of the camera according to an actual scene;
step b: according to the current adjusted light source brightness, adjusting parameters of the visual sensor to enable the visual sensor to acquire clear road images;
step c: and collecting continuous clear environmental images in real time, and transmitting the collected images to an image processor for further image processing.
3. A method of navigating a vision based mobile robotic navigation system as claimed in claim 2, wherein: the USB camera is also provided with an LED light source matched with the USB camera.
4. A method of navigating a vision based mobile robotic navigation system as claimed in claim 2, wherein: the image processor is an ARM Cortex-A9.
CN201710311501.5A 2017-05-05 2017-05-05 Mobile robot navigation method and system based on vision Active CN107421540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710311501.5A CN107421540B (en) 2017-05-05 2017-05-05 Mobile robot navigation method and system based on vision


Publications (2)

Publication Number Publication Date
CN107421540A CN107421540A (en) 2017-12-01
CN107421540B true CN107421540B (en) 2023-05-23

Family

ID=60424452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710311501.5A Active CN107421540B (en) 2017-05-05 2017-05-05 Mobile robot navigation method and system based on vision

Country Status (1)

Country Link
CN (1) CN107421540B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053447A (en) * 2017-12-18 2018-05-18 纳恩博(北京)科技有限公司 Method for relocating, server and storage medium based on image
CN109961454A (en) * 2017-12-22 2019-07-02 北京中科华正电气有限公司 Human-computer interaction device and processing method in a kind of embedded intelligence machine
CN109960145B (en) * 2017-12-22 2022-06-14 天津工业大学 Mobile robot mixed vision trajectory tracking strategy
CN110274594B (en) * 2018-03-14 2021-04-23 京东方科技集团股份有限公司 Indoor positioning equipment and method
CN110146094A (en) * 2019-06-27 2019-08-20 成都圭目机器人有限公司 Robot localization navigation system and its implementation based on lane line
CN110389588A (en) * 2019-07-17 2019-10-29 宁波财经学院 A kind of mobile robot
CN111896012A (en) * 2020-03-15 2020-11-06 上海谕培汽车科技有限公司 Vehicle-mounted navigation method based on machine vision
CN111780744B (en) * 2020-06-24 2023-12-29 浙江华睿科技股份有限公司 Mobile robot hybrid navigation method, equipment and storage device
CN112558600A (en) * 2020-11-09 2021-03-26 福建汉特云智能科技有限公司 Robot movement correction method and robot
CN116592876B (en) * 2023-07-17 2023-10-03 北京元客方舟科技有限公司 Positioning device and positioning method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101561871A (en) * 2009-02-17 2009-10-21 昆明理工大学 Method for recognizing manually-set road sign in agricultural machine visual navigation
CN102789233A (en) * 2012-06-12 2012-11-21 湖北三江航天红峰控制有限公司 Vision-based combined navigation robot and navigation method
CN103389733A (en) * 2013-08-02 2013-11-13 重庆市科学技术研究院 Vehicle line walking method and system based on machine vision
CN103646249A (en) * 2013-12-12 2014-03-19 江苏大学 Greenhouse intelligent mobile robot vision navigation path identification method
CN105651286A (en) * 2016-02-26 2016-06-08 中国科学院宁波材料技术与工程研究所 Visual navigation method and system of mobile robot as well as warehouse system


Also Published As

Publication number Publication date
CN107421540A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
CN107421540B (en) Mobile robot navigation method and system based on vision
KR102032070B1 (en) System and Method for Depth Map Sampling
US20090290758A1 (en) Rectangular Table Detection Using Hybrid RGB and Depth Camera Sensors
Broggi et al. Self-calibration of a stereo vision system for automotive applications
CN106863332B (en) Robot vision positioning method and system
US8116519B2 (en) 3D beverage container localizer
CN108481327B (en) Positioning device, positioning method and robot for enhancing vision
CN1598610A (en) Apparatus and method for estimating a position and an orientation of a mobile robot
Wang et al. A vision-based road edge detection algorithm
CN108544494B (en) Positioning device, method and robot based on inertia and visual characteristics
CN107527368B (en) Three-dimensional space attitude positioning method and device based on two-dimensional code
Lin et al. Construction of fisheye lens inverse perspective mapping model and its applications of obstacle detection
US11281916B2 (en) Method of tracking objects in a scene
CN111968132A (en) Panoramic vision-based relative pose calculation method for wireless charging alignment
CN206832260U (en) A kind of Navigation System for Mobile Robot of view-based access control model
Yoneda et al. Simultaneous state recognition for multiple traffic signals on urban road
CN103056864A (en) Device and method for detecting position and angle of wheeled motion robot in real time
JP4967758B2 (en) Object movement detection method and detection apparatus
Fries et al. Monocular template-based vehicle tracking for autonomous convoy driving
CN112720408B (en) Visual navigation control method for all-terrain robot
CN112489080A (en) Binocular vision SLAM-based vehicle positioning and vehicle 3D detection method
KR101594113B1 (en) Apparatus and Method for tracking image patch in consideration of scale
CN212044739U (en) Positioning device and robot based on inertial data and visual characteristics
CN113910265B (en) Intelligent inspection method and system for inspection robot
Mutka et al. A low cost vision based localization system using fiducial markers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant