CN113409387A - Robot vision positioning method and robot - Google Patents

Robot vision positioning method and robot

Info

Publication number
CN113409387A
Authority
CN
China
Prior art keywords
robot
target object
camera
positioning method
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110525333.6A
Other languages
Chinese (zh)
Inventor
伍浩文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Topband Co Ltd
Original Assignee
Shenzhen Topband Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Topband Co Ltd filed Critical Shenzhen Topband Co Ltd
Priority to CN202110525333.6A
Publication of CN113409387A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 - Updating
    • G06F 16/2393 - Updating materialised views
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle

Abstract

The invention relates to a robot vision positioning method and a robot. The method comprises the following steps: S1, acquiring a first included angle between the robot heading and the camera optical axis when the robot is at a first position, the target object being located at the center of the camera field of view; S2, the robot travels from the first position to a second position, and the travel distance is acquired; S3, acquiring a second included angle between the robot heading and the camera optical axis when the robot is at the second position, the target object again being located at the center of the camera field of view; and S4, determining the relative position information of the target object and the robot according to the first included angle, the second included angle and the travel distance. The invention requires no distance-measuring sensor: the relative position information of the target object and the robot is obtained from the robot's travel distance and the camera pointing angles.

Description

Robot vision positioning method and robot
Technical Field
The invention relates to the field of robot navigation, in particular to a robot vision positioning method and a robot.
Background
A robot needs to acquire distance information about obstacles during autonomous navigation. In the prior art, obstacles are measured with a distance-measuring sensor, for example a laser, radar or ultrasonic ranging sensor. For a robot without a distance-measuring sensor, how to acquire the distance to an obstacle is the problem to be solved.
Disclosure of Invention
The present invention provides a robot vision positioning method and a robot, aiming at the above-mentioned defects in the prior art.
The technical scheme adopted by the invention to solve the technical problem is as follows: a robot vision positioning method is constructed, comprising the following steps:
S1, acquiring a first included angle between the robot heading and the camera optical axis when the robot is at a first position, the target object being located at the center of the camera field of view;
S2, the robot travels from the first position to a second position, and the travel distance is acquired;
S3, acquiring a second included angle between the robot heading and the camera optical axis when the robot is at the second position, the target object being located at the center of the camera field of view;
S4, determining the relative position information of the target object and the robot according to the first included angle, the second included angle and the travel distance.
Further, in the robot vision positioning method according to the present invention, the robot traveling from the first position to the second position in step S2 comprises: continuously adjusting the camera while the robot travels from the first position to the second position, so that the target object remains at the center of the camera field of view.
Further, in the robot vision positioning method according to the present invention, acquiring the travel distance in step S2 comprises: acquiring the travel distance from the wheel circumference and the number of wheel revolutions of the robot.
Further, in the robot vision positioning method according to the present invention, the robot stores a map, the coordinate of the first position in the map being a first coordinate and the coordinate of the second position in the map being a second coordinate; step S4 comprises:
S41, determining a third coordinate of the target object in the map according to the first coordinate, the second coordinate, the first included angle, the second included angle and the travel distance.
Further, in the robot vision positioning method according to the present invention, after step S41, the method further comprises:
S51, storing the third coordinate into the map of the robot.
Further, in the robot vision positioning method according to the present invention, after step S51, the method further comprises:
S52, the robot avoiding the target object when planning a path in the map.
Further, in the robot vision positioning method according to the present invention, after step S41, the method further comprises:
S61, the robot sends the third coordinate to a server, and the server stores the third coordinate in a map;
S62, a user terminal accesses the server, and acquires and displays the updated map and the target object.
Further, in the robot vision positioning method according to the present invention, after step S41, the method further comprises:
S7, the robot sends the third coordinate to a user terminal, and the user terminal stores the third coordinate in a map and displays the target object.
Further, in the robot vision positioning method according to the present invention, the relative position information comprises the second included angle and a linear distance, the linear distance being the distance between the target object and the robot.
In addition, the invention further provides a robot, comprising a processor, a memory, a pan-tilt and a camera, wherein the camera is mounted on the robot through the pan-tilt, and the pan-tilt drives the camera to rotate;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory to implement the robot visual positioning method as described above.
The robot vision positioning method and the robot of the invention have the following beneficial effects: no distance-measuring sensor is required, and the relative position information of the target object and the robot is obtained from the robot's travel distance and the camera pointing angles.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a flowchart of a robot vision positioning method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a robot vision positioning method according to an embodiment of the present invention.
Detailed Description
For a clearer understanding of the technical features, objects and effects of the present invention, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In a preferred embodiment, referring to fig. 1 and fig. 2, the robot vision positioning method of this embodiment is applied to a mobile robot on which a camera is mounted through a pan-tilt, and the pan-tilt can drive the camera to rotate. Specifically, the robot vision positioning method comprises the following steps:
and S1, acquiring a first included angle between the robot course and the camera optical axis when the robot is at the first position, wherein the target object is positioned in the center of the camera visual field. Specifically, when a target object needs to be positioned, the holder drives the camera to rotate, and the target object is identified by using a preset image algorithm, which can be referred to and limited to the technology. After the target object is identified, the cradle head continues to adjust the position of the camera, so that the target object is positioned in the center of the visual field of the camera. When the target object is located in the center of the visual field of the camera, a first included angle between the robot course and the optical axis of the camera when the robot is at a first position is obtained, wherein the robot course refers to the current running direction of the robot, and the target object is also located on the optical axis of the camera when the target object is located in the center of the visual field of the camera. It can be understood that when the target object is located in the center of the visual field of the camera, the included angle of the center of the camera relative to the heading of the robot is the first included angle, so that the first included angle can be acquired through the rotation angle of the holder and the current heading. In fig. 2, the position a is the first position of the robot, and the included angle Φ 1 is the first included angle.
S2, the robot travels from the first position to a second position, and the travel distance is acquired. Specifically, after the first included angle is obtained at the first position, the robot travels some distance along its current heading to reach the second position, and the travel distance is recorded. Optionally, the number of wheel revolutions is counted while the robot travels from the first position to the second position, and the travel distance is computed from the wheel circumference and the number of revolutions. In fig. 2, position B is the second position, and the distance between position A and position B is D, i.e. the travel distance between the first position and the second position is D.
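A minimal sketch of this odometry computation, assuming a wheel encoder that reports tick counts; the wheel diameter and encoder resolution below are illustrative values, not taken from the patent:

```python
import math

WHEEL_DIAMETER_M = 0.10   # assumed wheel diameter, not specified in the patent
TICKS_PER_REV = 360       # assumed encoder resolution

def travel_distance_m(tick_count):
    """Travel distance D = wheel circumference x number of revolutions,
    as described for step S2."""
    circumference = math.pi * WHEEL_DIAMETER_M
    revolutions = tick_count / TICKS_PER_REV
    return circumference * revolutions
```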
S3, acquiring a second included angle between the robot heading and the camera optical axis when the robot is at the second position, the target object being located at the center of the camera field of view. Specifically, the target object must be identified again after the robot reaches the second position: the pan-tilt drives the camera to rotate and the target object is identified with the preset image algorithm, for which reference may again be made to the prior art. After the target object is identified, the pan-tilt continues to adjust the camera until the target object sits at the center of the camera field of view, and the second included angle between the robot heading and the camera optical axis is then acquired; as before, the heading is the current travel direction, and the centered target object lies on the optical axis. In fig. 2, position B is the second position of the robot, and angle φ2 is the second included angle. Optionally, to avoid losing the target object while moving, the robot continuously adjusts the camera during the drive from the first position to the second position so that the target object stays at the center of the field of view; the second included angle can then be read directly on arrival at the second position without re-searching, which speeds up acquisition.
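The optional continuous-tracking variant could reuse the centering routine sketched under step S1; the robot driving interface (start_forward, odometry_m, stop) is again hypothetical:

```python
def drive_with_tracking(robot, camera, pan_tilt, detect_target, distance_m):
    """Keep the target centered while driving from the first to the second
    position, so the second included angle can be read immediately on
    arrival, with no re-search. All interfaces are hypothetical."""
    robot.start_forward()
    while robot.odometry_m() < distance_m:
        center_target_and_read_angle(camera, pan_tilt, detect_target)
    robot.stop()
    return pan_tilt.angle_deg  # second included angle phi2
```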
S4, determining the relative position information of the target object and the robot according to the first included angle, the second included angle and the travel distance. Specifically, the first position, the second position and the target object form a triangle, for example the triangle whose vertices are the robot center point at the first position, the robot center point at the second position, and the center point of the target object. Optionally, the relative position information comprises the second included angle and a linear distance, the linear distance being the distance between the target object and the robot. In fig. 2, point A is the robot center at the first position, point B is the robot center at the second position, and point P is the center of the target object, giving triangle PAB. In triangle PAB, ∠PAB is the first included angle φ1; ∠PBA is (180° − φ2); and side AB has length D, the travel distance. The remaining angle is ∠APB = φ2 − φ1, so by the law of sines PB = D·sin(φ1)/sin(φ2 − φ1) and PA = D·sin(φ2)/sin(φ2 − φ1), which determines the relative position of the target object and the robot.
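The following Python sketch works this triangle computation through with the law of sines; it is valid when φ2 > φ1, i.e. when the bearing to the target opens up as the robot advances.

```python
import math

def locate_target(phi1_deg, phi2_deg, d):
    """Solve triangle PAB of fig. 2: angle PAB = phi1, angle PBA = 180 - phi2,
    side AB = d. Returns (PA, PB), the target's distances from the robot at
    the first and second positions."""
    phi1 = math.radians(phi1_deg)
    phi2 = math.radians(phi2_deg)
    apex = phi2 - phi1                            # angle APB at the target
    if apex <= 0:
        raise ValueError("phi2 must exceed phi1 for a valid triangle")
    pb = d * math.sin(phi1) / math.sin(apex)      # law of sines
    pa = d * math.sin(phi2) / math.sin(apex)      # sin(180 - phi2) = sin(phi2)
    return pa, pb
```

For example, φ1 = 30°, φ2 = 60° and D = 1 m give PB = 1.0 m and PA ≈ 1.73 m.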
This embodiment requires no distance-measuring sensor: the relative position information of the target object and the robot is obtained from the robot's travel distance and the camera pointing angles.
In some embodiments, the robot stores a map, the coordinate of the first position in the map being a first coordinate and the coordinate of the second position being a second coordinate. Step S4 then comprises: S41, determining a third coordinate of the target object in the map according to the first coordinate, the second coordinate, the first included angle, the second included angle and the travel distance. Specifically, the embodiment above yields the length of side PB of triangle PAB. Take PB as the hypotenuse of right triangle PBQ, where Q lies on the extension of AB beyond B and PQ is perpendicular to that extension; since ∠PBQ = φ2, the lengths L1 = PQ = PB·sin(φ2) and L2 = BQ = PB·cos(φ2) follow directly. Further, the first and second coordinates give the angle between the line through A and B and the map coordinate axes; rotating the offsets L1 and L2 by this angle converts them into the map frame and yields the coordinate of point P, i.e. the third coordinate of the target object in the map. The third coordinate is thus obtained by the triangle construction and a coordinate-system conversion, without any distance-measuring sensor.
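A sketch of this conversion under the same assumptions; the sign of the cross-track offset L1 depends on which side of the track the target lies, and the example fixes the target on the left of the direction of travel for illustration:

```python
import math

def target_map_coords(first_xy, second_xy, pb, phi2_deg):
    """Convert L1 = PB*sin(phi2) and L2 = PB*cos(phi2) into map coordinates
    (step S41). first_xy and second_xy are the first and second coordinates."""
    x1, y1 = first_xy
    x2, y2 = second_xy
    heading = math.atan2(y2 - y1, x2 - x1)   # direction of line AB in the map
    phi2 = math.radians(phi2_deg)
    l2 = pb * math.cos(phi2)                 # along-track offset BQ
    l1 = pb * math.sin(phi2)                 # cross-track offset PQ (left side)
    # Rotate (l2, l1) from the robot track frame into the map frame.
    px = x2 + l2 * math.cos(heading) - l1 * math.sin(heading)
    py = y2 + l2 * math.sin(heading) + l1 * math.cos(heading)
    return px, py
```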
In the robot vision positioning method of some embodiments, after step S41, the method further comprises:
S51, storing the third coordinate into the map of the robot.
S52, the robot avoiding the target object when planning a path in the map.
In this embodiment, the third coordinate is stored into the robot's map, updating the map; the influence of the target object can then be taken into account when the robot subsequently plans travel paths automatically, yielding a more reasonable moving path.
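One plausible realization of steps S51 and S52, assuming the robot's map is a simple occupancy grid (the resolution and the occupied-cell convention are illustrative assumptions):

```python
GRID_RES_M = 0.05  # assumed grid resolution: 5 cm per cell

def store_obstacle(grid, third_coord):
    """Mark the target's map cell as occupied so the path planner
    routes around it (steps S51 and S52)."""
    px, py = third_coord
    row = int(py / GRID_RES_M)
    col = int(px / GRID_RES_M)
    grid[row][col] = 1  # 1 = occupied
```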
In the robot vision positioning method of some embodiments, after step S41, the method further comprises:
S61, the robot sends the third coordinate to the server, and the server stores the third coordinate in the map. Specifically, the robot comprises a wireless communication module through which it connects to the server; the robot sends the third coordinate to the server, and the server stores it in the map.
S62, the user terminal accesses the server, and acquires and displays the updated map and the target object. Specifically, the user terminal connects to the server, accesses it, and acquires and displays the updated map and the target object; the user can then plan robot paths with the updated map.
In this embodiment, the third coordinate is stored into the map on the server, updating the map; a user can remotely access the server from a user terminal and obtain timely environmental information about the robot's location, which improves management efficiency.
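As a sketch of the upload in step S61, one possible transport is HTTP; the endpoint URL and payload fields below are hypothetical, since the patent does not define the protocol:

```python
import requests  # third-party HTTP client

SERVER_URL = "http://example.com/api/map/obstacles"  # hypothetical endpoint

def report_obstacle(robot_id, third_coord):
    """Send the third coordinate to the server (step S61); the server stores
    it in the shared map that user terminals later fetch (step S62)."""
    payload = {"robot_id": robot_id, "x": third_coord[0], "y": third_coord[1]}
    resp = requests.post(SERVER_URL, json=payload, timeout=5)
    resp.raise_for_status()  # surface upload failures to the caller
```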
In the robot vision positioning method of some embodiments, after step S41, the method further comprises: S7, the robot sends the third coordinate to the user terminal, and the user terminal stores the third coordinate in the map and displays the target object. Specifically, the robot comprises a near-field communication module, for example a Bluetooth module, through which it connects directly to the user terminal and sends the third coordinate; the user terminal stores the third coordinate in the map and displays the target object, so the user learns of the obstacle in time and can plan accordingly.
In a preferred embodiment, the robot comprises a processor, a memory, a pan-tilt and a camera; the camera is mounted on the robot through the pan-tilt, and the pan-tilt drives the camera to rotate. The memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory to implement the robot vision positioning method described above.
The robot requires no distance-measuring sensor: the relative position information of the target object and the robot is obtained from the robot's travel distance and the camera pointing angles.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes and modifications made within the scope of the claims of the present invention should be covered by the claims of the present invention.

Claims (10)

1. A robot vision positioning method, characterized by comprising the following steps:
S1, acquiring a first included angle between the robot heading and the camera optical axis when the robot is at a first position, the target object being located at the center of the camera field of view;
S2, the robot travels from the first position to a second position, and the travel distance is acquired;
S3, acquiring a second included angle between the robot heading and the camera optical axis when the robot is at the second position, the target object being located at the center of the camera field of view;
S4, determining the relative position information of the target object and the robot according to the first included angle, the second included angle and the travel distance.
2. The robot vision positioning method of claim 1, wherein the robot traveling from the first position to the second position in step S2 comprises: continuously adjusting the camera while the robot travels from the first position to the second position, so that the target object remains at the center of the camera field of view.
3. The robot vision positioning method of claim 1, wherein acquiring the travel distance in step S2 comprises: acquiring the travel distance from the wheel circumference and the number of wheel revolutions of the robot.
4. The robot vision positioning method of claim 1, wherein the robot stores a map, the coordinate of the first position in the map being a first coordinate and the coordinate of the second position in the map being a second coordinate; step S4 comprises:
S41, determining a third coordinate of the target object in the map according to the first coordinate, the second coordinate, the first included angle, the second included angle and the travel distance.
5. The robot vision positioning method of claim 4, further comprising, after step S41:
S51, storing the third coordinate into the map of the robot.
6. The robot vision positioning method of claim 5, further comprising, after step S51:
S52, the robot avoiding the target object when planning a path in the map.
7. The robot vision positioning method of claim 4, further comprising, after step S41:
S61, the robot sends the third coordinate to a server, and the server stores the third coordinate in the map;
S62, a user terminal accesses the server, and acquires and displays the updated map and the target object.
8. The robot vision positioning method of claim 4, further comprising, after step S41:
S7, the robot sends the third coordinate to a user terminal, and the user terminal stores the third coordinate in the map and displays the target object.
9. The robot vision positioning method of claim 1, wherein the relative position information comprises the second included angle and a linear distance, the linear distance being the distance between the target object and the robot.
10. A robot, characterized by comprising a processor, a memory, a pan-tilt and a camera, wherein the camera is mounted on the robot through the pan-tilt, and the pan-tilt drives the camera to rotate;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory to implement the robot vision positioning method according to any one of claims 1 to 9.
CN202110525333.6A 2021-05-11 2021-05-11 Robot vision positioning method and robot Pending CN113409387A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110525333.6A CN113409387A (en) 2021-05-11 2021-05-11 Robot vision positioning method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110525333.6A CN113409387A (en) 2021-05-11 2021-05-11 Robot vision positioning method and robot

Publications (1)

Publication Number Publication Date
CN113409387A true CN113409387A (en) 2021-09-17

Family

ID=77678505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110525333.6A Pending CN113409387A (en) 2021-05-11 2021-05-11 Robot vision positioning method and robot

Country Status (1)

Country Link
CN (1) CN113409387A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009008649A (en) * 2007-05-31 2009-01-15 Nsk Ltd Wheel-carriage robot
CN103149939A (en) * 2013-02-26 2013-06-12 北京航空航天大学 Dynamic target tracking and positioning method of unmanned plane based on vision
CN104677329A (en) * 2015-03-19 2015-06-03 广东欧珀移动通信有限公司 Camera-based target distance measurement method and device
CN106851095A (en) * 2017-01-13 2017-06-13 深圳拓邦股份有限公司 A kind of localization method, apparatus and system
CN107543531A (en) * 2017-08-13 2018-01-05 天津职业技术师范大学 A kind of Robot visual location system
CN108571963A (en) * 2018-05-07 2018-09-25 西安交通大学 A kind of orchard robot and its more ultrasonic videos point Combinated navigation method
CN108876762A (en) * 2018-05-11 2018-11-23 西安交通大学苏州研究院 Robot vision recognition positioning method towards intelligent production line
CN110738706A (en) * 2019-09-17 2020-01-31 杭州电子科技大学 quick robot vision positioning method based on track conjecture
CN111652069A (en) * 2020-05-06 2020-09-11 天津博诺智创机器人技术有限公司 Target identification and positioning method of mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination