CN107687855B - Robot positioning method and device and robot - Google Patents


Publication number
CN107687855B
Authority
CN
China
Prior art keywords
robot
dimensional code
image
determining
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710725155.5A
Other languages
Chinese (zh)
Other versions
CN107687855A (en)
Inventor
扬帅
梁顺健
崔传佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KUKA Robotics Guangdong Co Ltd
Original Assignee
Guangdong Midea Intelligent Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Midea Intelligent Technologies Co Ltd filed Critical Guangdong Midea Intelligent Technologies Co Ltd
Priority to CN201710725155.5A
Publication of CN107687855A
Application granted
Publication of CN107687855B
Current legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a two-dimensional-code-based robot positioning method, a corresponding device, and a robot. The robot's current position is determined from the spatial coordinate value of the two-dimensional code contained in an image acquired at the current position, together with the area and/or position of the code within that image. This simplifies the positioning scheme and improves positioning precision and accuracy.

Description

Robot positioning method and device and robot
Technical Field
The application relates to the technical field of mobile robots, and in particular to a robot positioning method, a robot positioning device, and a robot.
Background
To use mobile robots in automated production and warehouse management, a robot must identify its current position precisely; only then can products be assembled and transported accurately. Several positioning methods are currently in common use: ranging, visual positioning, dead reckoning combined with RFID, and barcode positioning.
The visual positioning scheme mainly acquires images of the ceiling through a vertically mounted camera and judges the robot's position by image analysis and comparison, which enables, for example, traversal cleaning of an entire room.
However, this approach has drawbacks: the position determined by visual comparison is inaccurate, and comparing each newly acquired image against a pre-stored ceiling image requires image extraction and information comparison, which demands large image storage and processing capacity from the robot and is costly.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
To that end, the first aspect of the application provides a two-dimensional-code-based robot positioning method in which the robot's current position is determined from the spatial coordinate value of the two-dimensional code contained in an image acquired at the current position, together with the area and/or position of the code in the image, thereby simplifying positioning and improving its precision and accuracy.
The second aspect of the application provides a robot positioning device based on two-dimensional codes.
A third aspect of the present application provides a robot.
A fourth aspect of the present application proposes a computer-readable storage medium.
An embodiment of a first aspect of the present application provides a robot positioning method based on a two-dimensional code, including:
acquiring an image corresponding to the current position of the robot, wherein the image comprises a two-dimensional code with a unique spatial coordinate value;
and determining the current position of the robot according to the space coordinate value corresponding to the two-dimensional code and the area and/or position information of the two-dimensional code in the image.
In the two-dimensional-code-based robot positioning method, an image corresponding to the robot's current position is acquired first; the current position is then determined from the spatial coordinate value of the two-dimensional code in that image and from the code's area and/or position in the image. Positioning is thereby simplified while precision and accuracy improve.
The embodiment of the second aspect of the invention provides a robot positioning device based on two-dimensional codes, which comprises:
the robot comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image corresponding to the current position of the robot, and the image comprises a two-dimensional code with a unique spatial coordinate value;
and the determining module is used for determining the current position of the robot according to the space coordinate value corresponding to the two-dimensional code and the area and/or position information of the two-dimensional code in the image.
The two-dimensional-code-based robot positioning device of this embodiment first acquires an image corresponding to the robot's current position, then determines the current position from the spatial coordinate value of the two-dimensional code in that image and from the code's area and/or position in the image. Positioning is thereby simplified while precision and accuracy improve.
An embodiment of a third aspect of the present invention provides a robot, including: a memory, a processor, and a vision system;
the vision system is used for acquiring an image corresponding to the current position;
the memory for storing an executable computer program;
the processor is configured to call and execute the executable computer program stored in the memory according to the image corresponding to the current position acquired by the vision system, so as to implement the two-dimensional code-based robot positioning method according to the first aspect.
A fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the two-dimensional code-based robot positioning method according to the first aspect.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a two-dimensional code-based robot positioning method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a two-dimensional code setting manner provided in an embodiment of the present application;
fig. 3a to 3e are schematic diagrams of images of different two-dimensional codes acquired by a robot at the same position according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a two-dimensional code-based robot positioning method according to another embodiment of the present application;
FIG. 4b is a schematic structural diagram of a QR two-dimensional code;
fig. 5a to 5c are schematic diagrams of the same two-dimensional code image acquired by the robot at different positions according to the embodiment of the present application;
FIG. 6 is a schematic structural diagram of a two-dimensional code-based robot positioning device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The embodiments of the application mainly address problems in the prior art, where visual positioning determines the robot's position by image extraction and information comparison between images acquired at runtime and a pre-stored ceiling image: the positioning result is inaccurate, the robot needs large image storage and processing capacity, and the cost is high. A robot positioning method based on two-dimensional codes is therefore provided. In this method, the two-dimensional code image corresponding to the robot's current position is acquired first; the image is then analyzed to determine the spatial coordinate value of the current two-dimensional code and the code's position in the image; the robot's current position is finally determined from these two pieces of information.
The following describes a two-dimensional code-based robot positioning method and a robot in detail with reference to the drawings.
Fig. 1 is a schematic flowchart of a two-dimensional code-based robot positioning method according to an embodiment of the present application.
As shown in fig. 1, the robot positioning method based on the two-dimensional code includes:
step 101, acquiring an image corresponding to the current position of the robot, wherein the image comprises a two-dimensional code with a unique spatial coordinate value.
The robot positioning method based on the two-dimensional code can be executed by the robot provided in the embodiments of the application, so that the robot's position can be determined accurately. Alternatively, the method may be executed by a scheduling server to accurately locate each robot in a scheduling system.
Specifically, in the embodiment of the present application, the two-dimensional codes may be uniformly arranged on the floor or the ceiling of the robot work area. Each two-dimensional code corresponds to a unique spatial coordinate value and is used for defining the position of the two-dimensional code in the working area of the robot. The spatial coordinate values are coordinate values in the world coordinate system.
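For positioning, the robot must be able to recover the world coordinate from the code's decoded payload. A minimal sketch, assuming a hypothetical plain-text payload of the form "x,y" (the application does not specify how the spatial coordinate value is encoded in the code):

```python
def parse_code_coordinate(payload: str):
    """Parse a world-frame (x, y) coordinate from a decoded code payload.

    The "x,y" payload format is an illustrative assumption; any encoding
    that maps each code to a unique spatial coordinate value would do.
    """
    x_str, y_str = payload.split(",")
    return (float(x_str), float(y_str))

parse_code_coordinate("2,3")  # returns (2.0, 3.0)
```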
Correspondingly, the robot can acquire, in real time, the two-dimensional code image corresponding to its position through an upper or lower vision system, and then determine its position.
In a preferred implementation of this embodiment, given the robot's height, the ceiling height of the work area, and the positions of the racks in it, an upper vision system may be unable to reliably capture a ceiling-mounted two-dimensional code in real time. In this application, therefore, the two-dimensional codes are arranged on the floor of the work area to assist in positioning the robot.
That is, step 101 specifically includes:
and acquiring a two-dimensional code image corresponding to the current position of the robot through a visual system under the robot.
The lower robot vision system is a vision system capable of photographing the surrounding ground where the robot is located.
And 102, determining the current position of the robot according to the space coordinate value corresponding to the two-dimensional code and the area and/or position information of the two-dimensional code in the image.
It can be understood that if the floor of the work area were covered end-to-end with two-dimensional codes representing different spatial coordinate values, the robot could acquire the code at its current position in real time while moving and determine its position directly from that code's spatial coordinate value. However, because the work area is large, positioning in this way would require a very large number of codes and place high demands on matching image acquisition and processing speed to the robot's travel speed. In this embodiment, therefore, the two-dimensional codes are arranged uniformly, at intervals, on the floor of the work area. Step 102 may then be implemented as follows:
and 102a, judging whether the area of the two-dimensional code in the image is larger than a threshold value, if so, executing a step 102b, otherwise, executing a step 102 c.
And 102b, determining the current position of the robot according to the space coordinate value corresponding to the two-dimensional code.
And 102c, determining the position of the robot according to the space coordinate value corresponding to the two-dimensional code, the area of the two-dimensional code in the image and the position information.
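The branch in steps 102a-102c can be sketched as follows; `refine` stands in for the finer-grained computation of step 102c and is a hypothetical callback, since that computation is only detailed later (steps 404-406):

```python
def locate_robot(code_xy, code_area_px, area_threshold_px, refine):
    """Steps 102a-102c: if the code fills enough of the image, the robot
    is effectively on top of it and the code's own spatial coordinate is
    returned; otherwise a refinement using the code's area and position
    in the image is applied."""
    if code_area_px > area_threshold_px:   # step 102a -> 102b
        return code_xy
    return refine(code_xy)                 # step 102a -> 102c
```

For example, with `refine=lambda xy: (xy[0] - 0.35, xy[1] - 0.35)` the robot's position is offset from the code's coordinate when the code appears small in the image.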
Specifically, fig. 2 is a schematic diagram of a two-dimensional code setting manner provided in an embodiment of the present application.
As shown in fig. 2, since the two-dimensional codes are uniformly arranged on the ground, the spatial coordinate value corresponding to each two-dimensional code corresponds to the position of the two-dimensional code on the ground. If the ground coordinate system is as shown in fig. 2, and the distance between each two-dimensional code along the X-axis and Y-axis is 1 meter (m), the coordinates of each two-dimensional code are (1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4), (3, 1), (3, 2), (3, 3), and (3, 4), respectively, from bottom to top and from left to right in the figure.
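The grid of fig. 2 can be generated programmatically; a sketch assuming the 1 m spacing of the example above:

```python
def grid_code_coordinates(cols, rows, spacing_m=1.0):
    """World coordinates of a uniform code grid laid out as in fig. 2:
    columns along the X axis, rows along the Y axis, spacing_m apart."""
    return [(c * spacing_m, r * spacing_m)
            for c in range(1, cols + 1)
            for r in range(1, rows + 1)]
```

For a 3 x 4 grid this reproduces the twelve coordinates (1, 1) through (3, 4) listed above.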
Therefore, after the robot acquires the two-dimensional code image corresponding to the current position, the robot can firstly scan the two-dimensional code image to determine the spatial coordinate value corresponding to the two-dimensional code in the current image, and thus can determine the current approximate position of the robot.
In actual use, the robot acquires the two-dimensional code image corresponding to its current position in real time through the lower vision system while moving. When the robot is directly above or very close to a code, the area of the code in the acquired image exceeds the threshold, and the code's spatial coordinate value can be taken directly as the robot's current position.
The threshold may be set according to the mounting position and optical parameters of the lower vision system, the physical size of the two-dimensional code, and so on, which this embodiment does not limit.
Further, if the robot is currently in the area between codes, for example at point A in fig. 3a, then the area a code occupies in the image acquired by the lower vision system is relatively small, and the code's position in the image depends on the positional relationship between the robot and that code.
For example, when the robot is at point A as shown in fig. 3a, the acquired image containing the code to the upper left of point A may look like fig. 3b, the code to the upper right like fig. 3c, the code to the lower left like fig. 3d, and the code to the lower right like fig. 3e. Thus, if after analyzing the current image the robot finds that the spatial coordinate value belongs to the upper-left code and that the code is positioned in the image as in fig. 3b, it can determine that it is at point A on the ground.
In the embodiment of the application, when it is determined that the area of the two-dimensional code in the two-dimensional code image is smaller than the threshold, the position of the robot can be determined according to the position of the two-dimensional code in the image and the coordinates of the two-dimensional code.
It can be understood that this way of positioning the robot with two-dimensional codes assumes that the focal length of the lower vision system is fixed rather than adjustable. Because the focal length is fixed, the size of the code in the acquired image varies with the robot-to-code distance, so the robot's position can be determined from the code's spatial coordinate value together with the code's size and position in the acquired image.
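With a fixed focal length, the code's apparent area scales inversely with the square of the camera-to-code distance, so one calibrated reference view suffices to turn area into distance. A sketch under the pinhole-camera assumption (the application itself does not state this formula):

```python
import math

def distance_from_area(area_px, ref_area_px, ref_distance_m):
    """Pinhole model: apparent area ~ 1/d**2, hence
    d = d_ref * sqrt(A_ref / A)."""
    return ref_distance_m * math.sqrt(ref_area_px / area_px)
```

A code that appears a quarter as large as it did at a 0.5 m reference distance is therefore about 1 m away.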
In the method provided by this embodiment, an image corresponding to the robot's current position is acquired first, and the current position is then determined from the spatial coordinate value of the two-dimensional code in that image and from the code's area and/or position in the image. Positioning is thereby simplified while precision and accuracy improve.
As the above analysis shows, the robot can determine its current position from the spatial coordinate value of the two-dimensional code in the currently acquired image together with the code's area and position information in that image. The specific process is described in detail below with reference to fig. 4, taking a Quick Response code (QR code) as the example.
Fig. 4 is a schematic flowchart of a two-dimensional code-based robot positioning method according to another embodiment of the present application.
As shown in fig. 4, the robot positioning method based on the two-dimensional code includes:
step 401, acquiring an image corresponding to the current position of the robot, wherein the image includes a two-dimensional code with a unique spatial coordinate value.
Step 402, determining whether the area of the two-dimensional code in the image is larger than a threshold, if so, executing step 403, otherwise, executing step 404.
Step 403, determining the current position of the robot according to the spatial coordinate value corresponding to the two-dimensional code.
Step 404, determining the coordinate values, in the image, of each position detection pattern of the quick response code.
Step 405, determining the spatial distance value and angle value between the two-dimensional code and the robot according to the coordinate values of each position detection pattern in the image.
Specifically, the structure of the QR two-dimensional code is shown in fig. 4b. The QR code is square, and three of its corners each contain a position detection pattern that helps locate the code. In this embodiment, the position of the code in the current image can be determined from these three patterns, and from it the relative position of the code and the robot in space.
In this embodiment, determining the relative position of the code and the robot from the code's position in the image rests on the premise that all two-dimensional codes in the robot's work area have the same orientation, meaning that corresponding position detection patterns of different codes point in the same direction.
For example, as shown in fig. 4b, if the three position detection patterns in the corners of a code are labeled 1, 2 and 3, then orienting the pattern labeled 1 of every code in the same direction, as in fig. 2, gives all codes in the work area the same orientation.
Therefore, when the codes share one orientation, images of the same code acquired by the robot from different positions show the code at different positions within the image.
For example, fig. 5a to 5c are schematic diagrams of the same two-dimensional code image acquired by the robot at different positions according to the embodiment of the present disclosure.
If the position detection patterns appear as in fig. 5a, the robot is currently directly behind the two-dimensional code; if they appear as in fig. 5b, the robot is directly in front of it; and so on. The relative position of the code and the robot in space can thus be determined from the coordinate values of the position detection patterns in the image.
Furthermore, the coordinate values of the position detection patterns in the image only reflect the code's area and position within the image. To turn these into a spatial distance value and angle value between the code and the robot, the conversion between pixels in the acquired image and actual positions must be known.
Therefore, in this embodiment, an image containing the two-dimensional code is acquired in advance at a known code-to-robot distance, and the coordinate values of each position detection pattern in that image are recorded. In actual use, the pattern coordinates in a newly acquired image are compared against these preset coordinates to determine the current spatial distance value and angle value between the code and the robot.
For example, suppose it is known that when the position detection patterns appear as in fig. 5a, the spatial distance between the code and the robot is 50 centimeters (cm) and the code is directly in front of the robot. In fig. 5a, the line from the first position detection pattern to the image coordinate origin makes an angle α₁ with the image X axis, the line from the second pattern makes an angle β₁, and the line from the third pattern makes an angle γ₁.
Then, when the position detection patterns in a newly acquired image appear as in fig. 5c, the corresponding angles are α₂, β₂ and γ₂. Moreover, the distance between the first and second position detection patterns is the same in fig. 5c as in fig. 5a; that is, comparing fig. 5a and fig. 5c, the code occupies the same area, so the robot-to-code distance is unchanged.
However, the angles between the pattern-to-origin lines and the X axis have changed, so the relative position of the robot and the code in the world coordinate system can be determined geometrically from those changes. Calculation then shows that in fig. 5c the line joining the code and the robot in space makes an angle of -45 degrees with the X axis, i.e., the robot is currently to the upper left of the code.
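The comparison of pattern-to-origin angles above amounts to estimating an in-image rotation between the calibration view and the current view. A geometric sketch, assuming the correspondence between patterns in the two views is known:

```python
import math

def rotation_between_views(ref_pts, cur_pts):
    """Rotation (degrees) of the code between a reference view and the
    current view, taken from the vector joining the first two position
    detection patterns. Each point is an (x, y) pixel pair; pattern
    correspondence between the views is assumed known."""
    def bearing(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    return math.degrees(bearing(cur_pts[0], cur_pts[1])
                        - bearing(ref_pts[0], ref_pts[1]))
```

A code whose first-to-second pattern vector turns from horizontal to vertical between the two views, for instance, has rotated by 90 degrees in the image.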
Step 406, determining the position of the robot according to the spatial coordinate value corresponding to the two-dimensional code and the spatial distance and angle values between the code and the robot.
Specifically, once the spatial distance and angle values between the robot and the two-dimensional code have been determined, they can be combined with the code's spatial coordinate value to determine the robot's position.
For example, if the spatial coordinate value of the two-dimensional code in the currently acquired image is (2, 3), and the coordinates of the position detection patterns show that the spatial distance between the code and the robot is 0.5 and that the line joining them makes a 45-degree angle with the Y axis, the robot's current position can be obtained by offsetting the code's coordinate (2, 3) by 0.5 along that direction.
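Combining the pieces, the robot's world position is the code's coordinate offset by the measured range along the measured bearing. A sketch of this final step; the sign convention (bearing measured from the +Y axis, robot below and to the left of the code) is an illustrative assumption, since the original figure for this computation is not reproduced here:

```python
import math

def robot_position(code_xy, range_m, angle_from_y_deg):
    """Offset the code's world coordinate by the measured range and
    bearing. Bearing is measured from the +Y axis, and the robot is
    assumed to sit on the -X/-Y side of the code (illustrative
    convention; the actual side follows from the pattern layout)."""
    a = math.radians(angle_from_y_deg)
    return (code_xy[0] - range_m * math.sin(a),
            code_xy[1] - range_m * math.cos(a))

# The worked example from the text: code at (2, 3), range 0.5, 45 degrees.
x, y = robot_position((2, 3), 0.5, 45)
```

Under this convention the example yields a position of roughly (1.65, 2.65).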
In the method above, an image corresponding to the robot's current position is acquired first, the image containing a two-dimensional code with a unique spatial coordinate value. When the area of the code in the image exceeds the threshold, the robot's current position is determined from the code's spatial coordinate value alone; when it does not, the position is determined from the spatial coordinate value together with the code's area and position information in the image. The robot can thus be positioned accurately using codes of fixed orientation, which simplifies the positioning process and improves positioning accuracy and reliability.
In order to realize the embodiment, the application further provides a robot positioning device based on the two-dimensional code.
Fig. 6 is a schematic structural diagram of a two-dimensional code-based robot positioning device according to an embodiment of the present application.
As shown in fig. 6, the robot positioning device based on the two-dimensional code includes:
the acquiring module 61 is configured to acquire an image corresponding to a current position of the robot, where the image includes a two-dimensional code with a unique spatial coordinate value;
and the determining module 62 is configured to determine the current position of the robot according to the space coordinate value corresponding to the two-dimensional code and the area and/or position information of the two-dimensional code in the image.
Specifically, the two-dimensional-code-based robot positioning device provided in this embodiment can execute the two-dimensional-code-based robot positioning method described above. The device can be configured in any robot whose work area is provided with two-dimensional codes, so as to position the robot accurately.
In a possible implementation form of this embodiment, the obtaining module 61 is specifically configured to:
acquiring the image corresponding to the current position of the robot through a vision system arranged on the underside of the robot.
In one possible implementation form, the determining module 62 includes:
a judging unit 621, configured to judge whether an area of the two-dimensional code in the image is larger than a threshold;
a first processing unit 622, configured to determine a current position of the robot according to a spatial coordinate value corresponding to the two-dimensional code if an area of the two-dimensional code in the image is greater than a threshold;
and a second processing unit 623, configured to determine the position of the robot according to the spatial coordinate value corresponding to the two-dimensional code, the area of the two-dimensional code in the image, and the position information if the area of the two-dimensional code in the image is not greater than the threshold.
In a possible implementation form of this embodiment, the two-dimensional code is a quick response code;
correspondingly, the second processing unit 623 is specifically configured to:
determining the coordinate values, in the image, of each position detection pattern of the quick response code;
determining the distance value and angle value between the two-dimensional code and the robot in space according to the coordinate values of the position detection patterns in the image;
and determining the position of the robot according to the spatial coordinate value of the two-dimensional code and the distance value and angle value between the two-dimensional code and the robot in space.
Further, the second processing unit is specifically configured to compare the coordinate values of each position detection pattern in the image with preset coordinate values of each position detection pattern, so as to determine the distance value and angle value between the two-dimensional code and the robot in space.
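As a rough sketch of that comparison (the pixels-per-meter factor and the use of finder-pattern centroids are assumptions; a production system would rely on a full camera calibration rather than a single scale constant):

```python
import math

# Assumed camera calibration constant: pixels per meter on the ground
# plane. A real system would obtain this from camera calibration.
PIXELS_PER_METER = 400.0


def offset_from_finder_patterns(detected, preset):
    """Estimate the (distance, angle) of the code relative to the robot
    by comparing the detected pixel coordinates of the three position
    detection patterns with the preset coordinates expected when the
    robot is centered over the code.

    detected, preset -- lists of three (x, y) pixel coordinates, one per
    position detection pattern of the QR code.
    """
    # Centroids of the detected and preset position detection patterns.
    dcx = sum(p[0] for p in detected) / 3.0
    dcy = sum(p[1] for p in detected) / 3.0
    pcx = sum(p[0] for p in preset) / 3.0
    pcy = sum(p[1] for p in preset) / 3.0
    # Pixel displacement converted to a metric distance and a bearing.
    dx, dy = dcx - pcx, dcy - pcy
    distance = math.hypot(dx, dy) / PIXELS_PER_METER
    angle = math.degrees(math.atan2(dy, dx))
    return distance, angle
```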
It should be noted that the foregoing description of the two-dimensional code based robot positioning method is also applicable to the two-dimensional code based robot positioning apparatus provided in this embodiment, and details thereof are not repeated here.
The two-dimensional-code-based robot positioning device provided by this embodiment first acquires an image corresponding to the robot's current position, and then determines that position from the spatial coordinate value of the two-dimensional code in the image together with the code's area and/or position in the image. The robot's current position can therefore be determined directly from a single image acquired at that position, which simplifies the positioning procedure and improves positioning precision and accuracy.
To implement the foregoing embodiments, the present application further provides a robot.
Fig. 7 is a schematic structural diagram of a robot according to an embodiment of the present application.
As shown in fig. 7, the robot includes a memory 71, a processor 72, and a vision system 73.
The vision system 73 is configured to acquire an image corresponding to a current position;
the memory 71 for storing an executable computer program;
the processor 72 is configured to call and execute the executable computer program stored in the memory according to the image corresponding to the current position acquired by the vision system, so as to implement the two-dimensional code-based robot positioning method.
It should be noted that the foregoing explanation on the embodiment of the robot positioning method based on the two-dimensional code is also applicable to the robot in this embodiment, and is not repeated here.
To achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the two-dimensional-code-based robot positioning method of the foregoing embodiments.
To achieve the above object, a fifth aspect of the present application provides a computer program product; when the instructions in the product are executed by a processor, the two-dimensional-code-based robot positioning method of the foregoing embodiments is performed.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A robot positioning method based on two-dimensional codes is characterized by comprising the following steps:
acquiring an image corresponding to the current position of the robot, wherein the image comprises a two-dimensional code with a unique spatial coordinate value, and the two-dimensional code is a quick response code and is uniformly arranged on the ground of a working area of the robot;
judging whether the area of the two-dimensional code in the image is larger than a threshold value;
if so, determining the current position of the robot according to the space coordinate value corresponding to the two-dimensional code;
and if not, determining the position of the robot according to the spatial coordinate value corresponding to the two-dimensional code and the area and position information of the two-dimensional code in the image, wherein the position information is determined according to the position of each position detection pattern in the two-dimensional code.
2. The method of claim 1, wherein the determining the position of the robot according to the spatial coordinate value corresponding to the two-dimensional code and the area and position information of the two-dimensional code in the image comprises:
determining the coordinate values, in the image, of each position detection pattern of the quick response code;
determining the distance value and angle value between the two-dimensional code and the robot in space according to the coordinate values of the position detection patterns in the image;
and determining the position of the robot according to the spatial coordinate value corresponding to the two-dimensional code and the distance value and angle value between the two-dimensional code and the robot in space.
3. The method of claim 2, wherein the determining the distance value and angle value between the two-dimensional code and the robot in space comprises:
comparing the coordinate value of each position detection pattern in the image with a preset coordinate value of each position detection pattern in the image, and determining the distance value and angle value between the two-dimensional code and the robot in space.
4. The method of any one of claims 1-3, wherein the obtaining an image corresponding to the current position of the robot comprises:
acquiring the image corresponding to the current position of the robot through a vision system arranged on the underside of the robot.
5. A two-dimensional-code-based robot positioning apparatus, characterized by comprising:
the acquisition module is used for acquiring an image corresponding to the current position of the robot, wherein the image comprises a two-dimensional code with a unique spatial coordinate value, and the two-dimensional code is a quick response code and is uniformly arranged on the ground of a working area of the robot;
a determination module comprising:
the judging unit is used for judging whether the area of the two-dimensional code in the image is larger than a threshold value or not;
the first processing unit is used for determining the current position of the robot according to a space coordinate value corresponding to the two-dimensional code if the area of the two-dimensional code in the image is larger than a threshold value;
and the second processing unit is configured to determine the position of the robot according to the spatial coordinate value corresponding to the two-dimensional code and the area and position information of the two-dimensional code in the image if the area of the two-dimensional code in the image is not larger than the threshold, wherein the position information is determined according to the position of each position detection pattern in the two-dimensional code.
6. The apparatus as claimed in claim 5, wherein said second processing unit is specifically configured to:
determining the coordinate values, in the image, of each position detection pattern of the quick response code;
determining the distance value and angle value between the two-dimensional code and the robot in space according to the coordinate values of the position detection patterns in the image;
and determining the position of the robot according to the spatial coordinate value corresponding to the two-dimensional code and the distance value and angle value between the two-dimensional code and the robot in space.
7. The apparatus as claimed in claim 6, wherein said second processing unit is specifically configured to:
comparing the coordinate value of each position detection pattern in the image with a preset coordinate value of each position detection pattern in the image, and determining the distance value and angle value between the two-dimensional code and the robot in space.
8. The apparatus of any one of claims 5-7, wherein the acquisition module is specifically configured to:
acquiring the image corresponding to the current position of the robot through a vision system arranged on the underside of the robot.
9. A robot comprising a memory, a processor and a vision system;
the vision system is used for acquiring an image corresponding to the current position;
the memory for storing an executable computer program;
the processor is configured to call and execute the executable computer program stored in the memory according to the image corresponding to the current position acquired by the vision system, so as to implement the two-dimensional code-based robot positioning method according to any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the two-dimensional code based robot positioning method according to any one of claims 1 to 4.
CN201710725155.5A 2017-08-22 2017-08-22 Robot positioning method and device and robot Active CN107687855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710725155.5A CN107687855B (en) 2017-08-22 2017-08-22 Robot positioning method and device and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710725155.5A CN107687855B (en) 2017-08-22 2017-08-22 Robot positioning method and device and robot

Publications (2)

Publication Number Publication Date
CN107687855A CN107687855A (en) 2018-02-13
CN107687855B true CN107687855B (en) 2020-07-31

Family

ID=61153653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710725155.5A Active CN107687855B (en) 2017-08-22 2017-08-22 Robot positioning method and device and robot

Country Status (1)

Country Link
CN (1) CN107687855B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108594822A (en) * 2018-05-10 2018-09-28 哈工大机器人(昆山)有限公司 Robot localization method, robot charging method based on Quick Response Code and system
CN109471443A (en) * 2018-11-30 2019-03-15 上海物景智能科技有限公司 A kind of wheeled robot localization method and wheeled robot
CN109459032B (en) * 2018-12-04 2022-12-27 北京旷视机器人技术有限公司 Mobile robot positioning method, navigation method and grid map establishing method
CN109711503A (en) * 2018-12-29 2019-05-03 中国银联股份有限公司 A kind of robot localization method and device
CN109909989A (en) * 2019-03-22 2019-06-21 驿涛项目管理有限公司 A kind of sandy beach garbage collection robot
CN110170993A (en) * 2019-04-18 2019-08-27 张煜轩 A kind of network-enabled intelligent robot
CN111950314A (en) * 2019-05-17 2020-11-17 锥能机器人(上海)有限公司 Positioning method and device, machine readable medium and system thereof
CN110160528B (en) * 2019-05-30 2021-06-11 华中科技大学 Mobile device pose positioning method based on angle feature recognition
CN110530376B (en) * 2019-10-10 2021-04-23 上海钛米机器人科技有限公司 Robot positioning method, device, robot and storage medium
CN110992423A (en) * 2019-11-15 2020-04-10 曾菊阳 Driving positioning method, system and medium based on image matching
CN117032285B (en) * 2023-08-18 2024-03-29 五八智能科技(杭州)有限公司 Foot type robot movement method and system
CN117824666B (en) * 2024-03-06 2024-05-10 成都睿芯行科技有限公司 Two-dimensional code pair for fusion positioning, two-dimensional code calibration method and fusion positioning method
CN117830604B (en) * 2024-03-06 2024-05-10 成都睿芯行科技有限公司 Two-dimensional code anomaly detection method and medium for positioning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104197899A (en) * 2014-09-24 2014-12-10 中国科学院宁波材料技术与工程研究所 Mobile robot location method and system
CN105354296A (en) * 2015-10-31 2016-02-24 广东欧珀移动通信有限公司 Terminal positioning method and user terminal
CN105437251A (en) * 2016-01-04 2016-03-30 杭州亚美利嘉科技有限公司 Method and device for locating position of robot
CN106153050A (en) * 2016-08-27 2016-11-23 杭州国辰牵星科技有限公司 A kind of indoor locating system based on beacon and method
EP3153940A1 (en) * 2015-10-07 2017-04-12 DB Systel GmbH Apparatus and method for controlling a vehicle
CN106969766A (en) * 2017-03-21 2017-07-21 北京品创智能科技有限公司 A kind of indoor autonomous navigation method based on monocular vision and Quick Response Code road sign

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8757477B2 (en) * 2011-08-26 2014-06-24 Qualcomm Incorporated Identifier generation for visual beacon

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104197899A (en) * 2014-09-24 2014-12-10 中国科学院宁波材料技术与工程研究所 Mobile robot location method and system
EP3153940A1 (en) * 2015-10-07 2017-04-12 DB Systel GmbH Apparatus and method for controlling a vehicle
CN105354296A (en) * 2015-10-31 2016-02-24 广东欧珀移动通信有限公司 Terminal positioning method and user terminal
CN105437251A (en) * 2016-01-04 2016-03-30 杭州亚美利嘉科技有限公司 Method and device for locating position of robot
CN106153050A (en) * 2016-08-27 2016-11-23 杭州国辰牵星科技有限公司 A kind of indoor locating system based on beacon and method
CN106969766A (en) * 2017-03-21 2017-07-21 北京品创智能科技有限公司 A kind of indoor autonomous navigation method based on monocular vision and Quick Response Code road sign

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yang Tianrui. Research on Precise Positioning Technology for Automated Guided Vehicles. China Master's Theses Full-text Database, Information Science and Technology. 2015, (No. 6). *
Research on Precise Positioning Technology for Automated Guided Vehicles; Yang Tianrui; China Master's Theses Full-text Database, Information Science and Technology; 20150615 (No. 6); pp. 5, 24-32, 39 *

Also Published As

Publication number Publication date
CN107687855A (en) 2018-02-13

Similar Documents

Publication Publication Date Title
CN107687855B (en) Robot positioning method and device and robot
CN110599541B (en) Method and device for calibrating multiple sensors and storage medium
US11049280B2 (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
CN107766859B (en) Mobile robot positioning method and device and mobile robot
US10930015B2 (en) Method and system for calibrating multiple cameras
CN107671863B (en) Robot control method and device based on two-dimensional code and robot
US9529945B2 (en) Robot simulation system which simulates takeout process of workpieces
CN109118543B (en) System and method for calibrating a machine vision camera along at least three discrete planes
US8265425B2 (en) Rectangular table detection using hybrid RGB and depth camera sensors
US10452949B2 (en) System and method for scoring clutter for use in 3D point cloud matching in a vision system
KR102056664B1 (en) Method for work using the sensor and system for performing thereof
CN107431788B (en) Method and system for image-based tray alignment and tube slot positioning in a vision system
CN111380510B (en) Repositioning method and device and robot
CN113146073B (en) Vision-based laser cutting method and device, electronic equipment and storage medium
US10778902B2 (en) Sensor control device, object search system, object search method, and program
CN112716401B (en) Obstacle-detouring cleaning method, device, equipment and computer-readable storage medium
CN113284178A (en) Object stacking method and device, computing equipment and computer storage medium
US20130100281A1 (en) Method, System and Computer Program Product for Detecting an Obstacle with a Camera
JP2015007639A (en) Information processing apparatus, information processing method and program
CN116038701B (en) Hand-eye calibration method and device for four-axis mechanical arm
JP2013532451A (en) Method and apparatus for locating an object in a warehouse
CN108564626A (en) Method and apparatus for determining the relative attitude angle being installed between the camera of acquisition entity
CN113554703B (en) Robot positioning method, apparatus, system and computer readable storage medium
CN114043531B (en) Table tilt angle determination, use method, apparatus, robot, and storage medium
CN110866950B (en) Object positioning and guiding system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220621

Address after: 528311 area a, 2nd floor, building B1, 3 Liaoshen Road, Beijiao Town, Shunde District, Foshan City, Guangdong Province

Patentee after: KUKA ROBOT (GUANGDONG) Co.,Ltd.

Address before: 528311 Global Innovation Center of Penglai Road Industrial Avenue Mei, Beijiao Town, Shunde District, Foshan City, Guangdong Province

Patentee before: GUANGDONG MIDEA INTELLIGENT ROBOT Co.,Ltd.
