CN106500714B - Video-based robot navigation method and system - Google Patents

Video-based robot navigation method and system

Info

Publication number
CN106500714B
CN106500714B CN201610839909.5A
Authority
CN
China
Prior art keywords
robot
ground location
camera
video image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610839909.5A
Other languages
Chinese (zh)
Other versions
CN106500714A (en)
Inventor
刘德建
念小义
何学城
陈宏展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Netdragon Websoft Co Ltd
Original Assignee
Fujian Netdragon Websoft Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Netdragon Websoft Co Ltd filed Critical Fujian Netdragon Websoft Co Ltd
Priority to CN201610839909.5A priority Critical patent/CN106500714B/en
Publication of CN106500714A publication Critical patent/CN106500714A/en
Application granted granted Critical
Publication of CN106500714B publication Critical patent/CN106500714B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3647 Guidance involving output of stored or live camera images or video streams

Abstract

The present invention relates to the field of navigation, and in particular to a video-based robot navigation method and system. The invention establishes a correspondence between pixel positions in a video image and ground positions, obtains a pixel in the video image, and obtains a ground position according to the position of the pixel. By mounting a camera on the robot, the actual scene the robot can observe is captured in real time as a video image; a pixel is selected in the video image and the robot is driven to the ground position represented by that pixel, so that the robot can be navigated with respect to the actual scene around it.

Description

Video-based robot navigation method and system
Technical field
The present invention relates to the field of navigation, and in particular to a video-based robot navigation method and system.
Background art
Current industrial robot navigation is basically performed on a static navigation map: a pixel or a position is selected on the map, the proportion of the selected location relative to the whole map is used to calculate a target position in the real scene, and the robot is then driven to that position. The drawback of controlling robot movement by map navigation is that the map is a static image on which routes, obstacles and other markers are drawn to scale in advance, so it cannot reflect changes in the actual scene in real time.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video-based robot navigation method and system capable of navigating a robot with respect to the actual scene around it.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
The present invention provides a video-based robot navigation method, comprising:
establishing a correspondence between pixel positions in a video image and ground positions;
obtaining a pixel in the video image;
obtaining a ground position according to the position of the pixel.
The beneficial effect of the above video-based robot navigation method is as follows. Unlike the prior art, in which robot movement is controlled by map navigation and cannot follow changes in the actual scene, the present invention mounts a camera on the robot to capture in real time the actual scene the robot can observe and obtain a video image, and then establishes the correspondence between pixel positions in the video image and ground positions; a pixel can be selected in the video image and the robot driven to the ground position represented by that pixel, so that the robot can be navigated with respect to the actual scene around it.
The present invention also provides a video-based robot navigation system, comprising:
an establishing module for establishing the correspondence between pixel positions in a video image and ground positions;
a first obtaining module for obtaining a pixel in the video image;
a second obtaining module for obtaining a ground position according to the position of the pixel.
The beneficial effect of the above video-based robot navigation system is as follows: the establishing module establishes the correspondence between pixel positions in the video image and ground positions, the first obtaining module selects a pixel in the video image, and the second obtaining module obtains the ground position corresponding to the selected pixel, so that the robot can be driven to the ground position represented by that pixel and navigated with respect to the actual scene around it.
Brief description of the drawings
Fig. 1 is a flow diagram of a video-based robot navigation method according to the present invention;
Fig. 2 is a structural block diagram of a video-based robot navigation system according to the present invention;
Description of reference numerals:
1. establishing module; 2. first obtaining module; 3. second obtaining module; 31. first computing unit; 32. second computing unit; 4. third obtaining module; 5. computing module; 6. fourth obtaining module; 7. driving module.
Detailed description of the embodiments
To explain the technical content, objectives and effects of the present invention in detail, the following description is given with reference to the embodiments and the accompanying drawings.
The key concept of the present invention is as follows: the scene video image around the robot is obtained in real time, a pixel is selected in the scene video image, the position of the pixel in the video image is converted into a ground position, and the robot is driven to that ground position, so that the robot can be navigated with respect to the actual scene around it.
As shown in Fig. 1, the present invention provides a video-based robot navigation method, comprising:
establishing a correspondence between pixel positions in a video image and ground positions;
obtaining a pixel in the video image;
obtaining a ground position according to the position of the pixel.
Further, obtaining the ground position according to the position of the pixel specifically comprises:
calculating, according to the position of the pixel, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
calculating, according to the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera.
It can be seen from the above description that the offset of the ground position corresponding to the pixel from the robot's camera can be calculated from the position of the pixel.
Preferably, the method steps for calculating the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position are specifically as follows:
A rectangular coordinate system is established with the upper-left corner of the monitoring-side video image as the origin, the positive X axis pointing right and the positive Y axis pointing down, giving the first coordinate system. A pixel is selected in the monitoring-side video image as the target pixel; its coordinates in the first coordinate system are (x1, y1), and the resolution of the monitoring side is width1*height1. The horizontal proportion and vertical proportion of the pixel in the monitoring-side video image are calculated as rateX1 = x1/width1 and rateY1 = y1/height1;
A rectangular coordinate system is established with the upper-left corner of the robot-side video image as the origin, the positive X axis pointing right and the positive Y axis pointing down, giving the second coordinate system. The resolution width2*height2 of the robot-side video image is obtained, and the coordinates (x2, y2) of the target pixel in the robot-side video image are calculated as x2 = width2*rateX1 and y2 = height2*rateY1;
The horizontal field of view angW and vertical field of view angH of the camera are obtained, and the horizontal deflection angle angX and vertical deflection angle angY between the ground position corresponding to the target pixel and the camera are calculated as angX = (x2/width2 - 0.5)*angW and angY = (y2/height2 - 0.5)*angH;
The vertical height z of the camera above the ground is obtained, and the horizontal offset x3 and vertical offset y3 of the ground position corresponding to the target pixel relative to the ground position directly below the camera are calculated as y3 = z/tan(angY) and x3 = y3*tan(angX).
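The pixel-to-ground mapping described above can be summarized in code. The following Python sketch is for illustration only and is not part of the patent disclosure; the function name, parameter names and unit conventions are assumptions, while the formulas are taken directly from the preceding paragraphs.

    import math

    def pixel_to_ground_offset(x1, y1, width1, height1,  # target pixel and resolution on the monitoring side
                               width2, height2,           # resolution of the robot-side video image
                               ang_w, ang_h,              # horizontal / vertical field of view of the camera, degrees
                               z):                        # vertical height of the camera above the ground
        """Return (x3, y3): horizontal and vertical offsets of the target ground
        position relative to the ground point directly below the camera."""
        # Proportional position of the pixel in the monitoring-side image.
        rate_x1 = x1 / width1
        rate_y1 = y1 / height1
        # Corresponding pixel coordinates in the robot-side image.
        x2 = width2 * rate_x1
        y2 = height2 * rate_y1
        # Deflection angles between the target ground position and the camera.
        ang_x = (x2 / width2 - 0.5) * ang_w
        ang_y = (y2 / height2 - 0.5) * ang_h
        # Ground offsets; the angles are converted to radians for the tangent.
        # ang_y must be non-zero (pixel below the vertical centre of the image)
        # for the viewing ray to intersect the ground.
        y3 = z / math.tan(math.radians(ang_y))
        x3 = y3 * math.tan(math.radians(ang_x))
        return x3, y3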
Further, the method also comprises:
obtaining the angle of the robot's camera relative to the straight-ahead direction of the robot;
calculating, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot.
It can be seen from the above description that the offset of the ground position relative to the robot can be calculated from its offset relative to the robot's camera. The camera may be mounted at any orientation on the robot; when it is not mounted facing straight ahead, the offset of the ground position relative to the camera must be converted into an offset relative to the robot's straight-ahead direction before the robot can be correctly driven to the ground position according to the offset.
Preferably, the vertical distance and horizontal distance of the ground position relative to the robot are calculated as follows:
A coordinate system is established with the robot as the origin, straight ahead as the positive Y axis, the right as the positive X axis and up as the positive Z axis, giving the third coordinate system. The coordinates (x4, y4, z4) of the robot's camera in the third coordinate system, the angle angC of the camera relative to the straight-ahead direction of the robot, and the horizontal offset x3 and vertical offset y3 of the ground position corresponding to the target pixel relative to the camera's ground point are obtained, and the horizontal offset x5 and vertical offset y5 of the ground position corresponding to the target pixel relative to the robot are calculated as x5 = x3*cos(-angC) - y3*sin(-angC) + x4 and y5 = x3*sin(-angC) + y3*cos(-angC) + y4.
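As an illustration of the coordinate conversion above, the following Python sketch applies the stated rotation-and-translation formulas; the function name and parameter names are assumptions and not part of the patent.

    import math

    def camera_offset_to_robot_offset(x3, y3,   # offsets relative to the camera's ground point
                                      x4, y4,   # camera position in the robot coordinate system
                                      ang_c):   # camera angle relative to the robot's straight-ahead direction, degrees
        """Return (x5, y5): offsets of the target ground position relative to the robot,
        following x5 = x3*cos(-angC) - y3*sin(-angC) + x4 and
                  y5 = x3*sin(-angC) + y3*cos(-angC) + y4."""
        a = math.radians(-ang_c)
        x5 = x3 * math.cos(a) - y3 * math.sin(a) + x4
        y5 = x3 * math.sin(a) + y3 * math.cos(a) + y4
        return x5, y5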
Further, the method also comprises:
obtaining in real time the picture captured by the robot's camera to obtain the video image.
It can be seen from the above description that the actual scene around the robot is obtained in real time.
Further, the method also comprises:
driving the robot to the ground position.
It can be seen from the above description that the robot is driven to the ground position in the actual scene corresponding to the pixel selected in the video image.
As shown in Fig. 2, the present invention also provides a video-based robot navigation system, comprising:
an establishing module 1 for establishing the correspondence between pixel positions in a video image and ground positions;
a first obtaining module 2 for obtaining a pixel in the video image;
a second obtaining module 3 for obtaining a ground position according to the position of the pixel.
Further, the second obtaining module 3 comprises:
a first computing unit 31 for calculating, according to the position of the pixel, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
a second computing unit 32 for calculating, according to the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera.
Further, the system also comprises:
a third obtaining module 4 for obtaining the angle of the robot's camera relative to the straight-ahead direction of the robot;
a computing module 5 for calculating, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot.
Further, the system also comprises:
a fourth obtaining module 6 for obtaining in real time the picture captured by the robot's camera to obtain the video image.
Further, the system also comprises:
a driving module 7 for driving the robot to the ground position.
Embodiment one of the present invention is as follows:
the picture captured by the robot's camera is obtained in real time to obtain a video image;
the correspondence between pixel positions in the video image and ground positions is established;
a pixel in the video image is obtained;
according to the position of the pixel, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position are calculated;
according to the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera are calculated;
the robot is driven to the ground position.
It can be seen from the above description that the present embodiment makes it possible to obtain a video image of the actual scene around the robot in real time and to navigate the robot by selecting a pixel in the video image.
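As a rough illustration of how the steps of this embodiment fit together, the following Python sketch chains the two helper functions defined in the sketches above; drive_robot_to is a hypothetical placeholder for the robot's motion interface and is not an interface defined by the patent.

    def drive_robot_to(x, y):
        # Placeholder: a real system would issue motion commands here.
        print(f"drive to horizontal offset {x:.2f}, vertical offset {y:.2f}")

    def navigate_to_pixel(x1, y1, monitor_res, robot_res, fov_deg, cam_height, cam_pose):
        width1, height1 = monitor_res   # monitoring-side resolution
        width2, height2 = robot_res     # robot-side video resolution
        ang_w, ang_h = fov_deg          # horizontal / vertical field of view, degrees
        x4, y4, ang_c = cam_pose        # camera position and angle in the robot frame
        # Selected pixel -> offset relative to the ground point below the camera.
        x3, y3 = pixel_to_ground_offset(x1, y1, width1, height1,
                                        width2, height2, ang_w, ang_h, cam_height)
        # Camera-relative offset -> robot-relative offset.
        x5, y5 = camera_offset_to_robot_offset(x3, y3, x4, y4, ang_c)
        drive_robot_to(x5, y5)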
Embodiment two of the present invention is as follows:
on the basis of embodiment one, the method further comprises:
obtaining the angle of the robot's camera relative to the straight-ahead direction of the robot;
calculating, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot.
It can be seen from the above description that in this embodiment the camera that captures the video image of the actual scene around the robot can be mounted at any orientation on the robot; by converting the offset of the ground position relative to the robot's camera into an offset of the ground position relative to the robot, the robot can be driven to the ground position.
Embodiment three of the present invention is as follows:
The pixel resolution of the video image from the robot's camera is 1280*720, and the resolution of the video image on the monitoring side is 1440*810. The monitoring side obtains the video image from the robot side in real time and performs distortion correction on the received video image.
A rectangular coordinate system is established with the upper-left corner of the video image as the origin, the positive X axis pointing right and the positive Y axis pointing down. The pixel (800, 710) is selected in the monitoring-side video image as the target pixel for navigating the robot.
The vertical proportion of the target pixel in the monitoring-side video image is 710/810 = 0.8765 and the horizontal proportion is 800/1440 = 0.5556. The pixel coordinates of the target pixel in the video image of the robot's camera are (711, 631), calculated as 1280*0.5556 = 711 and 720*0.8765 = 631. From the camera's vertical field of view of 28.64 degrees, horizontal field of view of 61.18 degrees, camera height of 1 meter above the ground and the pixel coordinates of the target pixel in the camera's video image, the ground position corresponding to the target pixel is calculated: the vertical deflection angle of the target ground position relative to the camera is 10.78 degrees and the horizontal deflection angle is 3.39 degrees. A rectangular coordinate system is established with the camera as the origin, the straight-ahead direction of the camera as the positive X axis and the direction to the right of the camera as the positive Y axis; from the vertical deflection angle and horizontal deflection angle of the target ground position relative to the camera, the horizontal offset of the target ground position relative to the camera is calculated to be 0.31 and the vertical offset to be 5.25. A rectangular coordinate system is established with the robot as the origin, the straight-ahead direction of the robot as the positive Y axis and the direction to the right of the robot as the positive X axis, and the horizontal offset and vertical offset of the target ground position relative to the camera are converted into the horizontal offset and vertical offset of the target ground position relative to the robot. According to the correspondence between the video image and ground positions, the actual horizontal distance and actual vertical distance of the target ground position relative to the camera can be calculated from these offsets, and the robot is driven to the target position according to the actual horizontal distance and actual vertical distance of the target ground position from the robot.
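The numbers quoted in this embodiment can be reproduced with the formulas given earlier. The short Python check below is illustrative only; small differences from the quoted values come from rounding.

    import math

    width1, height1 = 1440, 810      # monitoring-side resolution
    width2, height2 = 1280, 720      # robot-side camera resolution
    x1, y1 = 800, 710                # target pixel selected on the monitoring side
    ang_w, ang_h = 61.18, 28.64      # horizontal / vertical field of view, degrees
    z = 1.0                          # camera height above the ground, meters

    x2 = round(width2 * x1 / width1)          # 711
    y2 = round(height2 * y1 / height1)        # 631
    ang_x = (x2 / width2 - 0.5) * ang_w       # about 3.39 degrees
    ang_y = (y2 / height2 - 0.5) * ang_h      # about 10.78 degrees
    y3 = z / math.tan(math.radians(ang_y))    # about 5.25
    x3 = y3 * math.tan(math.radians(ang_x))   # about 0.31
    print(x2, y2, round(ang_x, 2), round(ang_y, 2), round(x3, 2), round(y3, 2))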
Embodiment four of the present invention is as follows:
the fourth obtaining module obtains in real time the picture captured by the robot's camera to obtain the video image;
the establishing module establishes the correspondence between pixel positions in the video image and ground positions;
the first obtaining module obtains a pixel in the video image;
in the second obtaining module, the first computing unit calculates, according to the position of the pixel, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position, and the second computing unit calculates, according to the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera;
the third obtaining module obtains the angle of the robot's camera relative to the straight-ahead direction of the robot;
the computing module calculates, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot;
the driving module drives the robot to the ground position.
It can be seen from the above description that the present embodiment provides a video-based robot navigation system comprising the fourth obtaining module, the establishing module, the first obtaining module, the second obtaining module, the third obtaining module, the computing module and the driving module, wherein the second obtaining module comprises the first computing unit and the second computing unit. The above system makes it possible to obtain a video image of the actual scene around the robot in real time and to navigate the robot by selecting a pixel in the video image.
In conclusion a kind of robot navigation method based on video provided by the invention, by being installed in robot Camera obtains the actual scene that robot can detect in real time, obtains video image, resettle pixel point in video image The corresponding relationship with ground location is set, a pixel can be chosen in video image, and drives robot to the pixel institute The ground location of expression, so that realizing can be for the actual scene navigating robot around robot;It further, can be according to picture The position of vegetarian refreshments calculates the offset of camera of the ground location corresponding to the pixel apart from robot;Further, Offset of the ground location relative to robot can be calculated relative to the offset of the camera of robot according to ground location Amount;Further, the actual scene obtained around robot in real time is realized;Further, realize driving robot in video The pixel chosen in image corresponds to the ground location in actual scene.The present invention also provides a kind of robots based on video Navigation system the system comprises the 4th acquisition module, establishes module, the first acquisition module, the second acquisition module, third acquisition Module, computing module and drive module;Wherein, the second acquisition module includes the first computing unit and the second computing unit;Pass through Above system can realize the video image for obtaining the actual scene around robot in real time, and can be by selecting video image Pixel navigating robot.
The above are only embodiments of the present invention and do not limit the scope of the patent. Any equivalent transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (6)

1. A video-based robot navigation method, characterized by comprising:
establishing a correspondence between pixel positions in a video image and ground positions;
obtaining a pixel in the video image;
obtaining a ground position according to the position of the pixel; wherein obtaining the ground position according to the position of the pixel specifically comprises:
calculating, according to the coordinates of the pixel in the video image and the resolution of the video image, in combination with the horizontal field of view and vertical field of view of the camera, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
calculating, according to the vertical height of the camera above the ground, the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera;
the method further comprising:
obtaining the angle of the robot's camera relative to the straight-ahead direction of the robot;
calculating, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot.
2. The video-based robot navigation method according to claim 1, characterized by further comprising:
obtaining in real time the picture captured by the robot's camera to obtain the video image.
3. The video-based robot navigation method according to claim 1, characterized by further comprising:
driving the robot to the ground position.
4. A video-based robot navigation system, characterized by comprising:
an establishing module for establishing a correspondence between pixel positions in a video image and ground positions;
a first obtaining module for obtaining a pixel in the video image;
a second obtaining module for obtaining a ground position according to the position of the pixel; wherein the second obtaining module comprises:
a first computing unit for calculating, according to the coordinates of the pixel in the video image and the resolution of the video image, in combination with the horizontal field of view and vertical field of view of the camera, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
a second computing unit for calculating, according to the vertical height of the camera above the ground, the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera;
the system further comprising:
a third obtaining module for obtaining the angle of the robot's camera relative to the straight-ahead direction of the robot;
a computing module for calculating, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot.
5. The video-based robot navigation system according to claim 4, characterized by further comprising:
a fourth obtaining module for obtaining in real time the picture captured by the robot's camera to obtain the video image.
6. The video-based robot navigation system according to claim 4, characterized by further comprising:
a driving module for driving the robot to the ground position.
CN201610839909.5A 2016-09-22 2016-09-22 Video-based robot navigation method and system Active CN106500714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610839909.5A CN106500714B (en) 2016-09-22 2016-09-22 Video-based robot navigation method and system

Publications (2)

Publication Number Publication Date
CN106500714A CN106500714A (en) 2017-03-15
CN106500714B true CN106500714B (en) 2019-11-29

Family

ID=58290950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610839909.5A Active CN106500714B (en) 2016-09-22 2016-09-22 Video-based robot navigation method and system

Country Status (1)

Country Link
CN (1) CN106500714B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110733424B (en) * 2019-10-18 2022-03-15 深圳市麦道微电子技术有限公司 Method for calculating horizontal distance between ground position and vehicle body in driving video system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156481A (en) * 2011-01-24 2011-08-17 广州嘉崎智能科技有限公司 Intelligent tracking control method and system for unmanned aircraft
CN103459099A (en) * 2011-01-28 2013-12-18 英塔茨科技公司 Interfacing with mobile telepresence robot
CN104034316A (en) * 2013-03-06 2014-09-10 深圳先进技术研究院 Video analysis-based space positioning method
CN105171756A (en) * 2015-07-20 2015-12-23 缪学良 Method for controlling remote robot through combination of videos and two-dimensional coordinate system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4055603B2 (en) * 2003-02-21 2008-03-05 日本ビクター株式会社 Image position detection method and image position detection apparatus
CN102999051B * 2011-09-19 2016-06-22 广州盈可视电子科技有限公司 Pan-tilt control method and device
CN102682292B (en) * 2012-05-10 2014-01-29 清华大学 Method based on monocular vision for detecting and roughly positioning edge of road
CN104034305B * 2014-06-10 2016-05-11 杭州电子科技大学 Real-time localization method based on monocular vision
CN104200469B (en) * 2014-08-29 2017-02-08 暨南大学韶关研究院 Data fusion method for vision intelligent numerical-control system
CN104835173B * 2015-05-21 2018-04-24 东南大学 Localization method based on machine vision
CN104954747B (en) * 2015-06-17 2020-07-07 浙江大华技术股份有限公司 Video monitoring method and device

Also Published As

Publication number Publication date
CN106500714A (en) 2017-03-15

Similar Documents

Publication Publication Date Title
KR101590530B1 (en) Automatic scene calibration
CN104848858B Two-dimensional code and vision-inertia combined navigation system and method for robots
US9043146B2 (en) Systems and methods for tracking location of movable target object
US9679385B2 (en) Three-dimensional measurement apparatus and robot system
WO2015089403A1 (en) Estimating three-dimensional position and orientation of articulated machine using one or more image-capturing devices and one or more markers
EP1262909A3 (en) Calculating camera offsets to facilitate object position determination using triangulation
CN104976950B (en) Object space information measuring device and method and image capturing path calculating method
CN104552341A (en) Single-point multi-view meter-hanging posture error detecting method of mobile industrial robot
WO2020156923A3 (en) Map and method for creating a map
EP3847617A1 (en) Vision system for a robotic machine
JP2010112731A (en) Joining method of coordinate of robot
CN106500714B Video-based robot navigation method and system
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
KR101706092B1 (en) Method and apparatus for 3d object tracking
CN107442973A (en) Welding bead localization method and device based on machine vision
KR101153221B1 (en) Computation of Robot Location and Orientation using Repeating Feature of Ceiling Textures and Optical Flow Vectors
KR101438514B1 (en) Robot localization detecting system using a multi-view image and method thereof
JP2007171018A (en) Object position recognition method and device
JP6598552B2 (en) Position measurement system
US20150172606A1 (en) Monitoring camera apparatus with depth information determination
Sauer et al. Occlusion handling in augmented reality user interfaces for robotic systems
CN110672009B (en) Reference positioning, object posture adjustment and graphic display method based on machine vision
Aliakbarpour et al. Inertial-visual fusion for camera network calibration
KR101380123B1 (en) Location recognizing apparatus for smart robot
Safin et al. Experiments on mobile robot stereo vision system calibration under hardware imperfection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant