CN112650207A - Robot positioning correction method, apparatus, and storage medium - Google Patents

Robot positioning correction method, apparatus, and storage medium Download PDF

Info

Publication number
CN112650207A
Authority
CN
China
Prior art keywords
reference line
positioning information
edge pixel
information
lens
Prior art date
Legal status
Pending
Application number
CN201910963898.5A
Other languages
Chinese (zh)
Inventor
朱建华 (Zhu Jianhua)
郭斌 (Guo Bin)
Current Assignee
Hangzhou Ezviz Software Co Ltd
Original Assignee
Hangzhou Ezviz Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Ezviz Software Co Ltd
Priority to CN201910963898.5A
Publication of CN112650207A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Abstract

The application provides a robot positioning correction method, device, and storage medium. The method includes: acquiring first positioning information of a reference line from a captured environment image, the first positioning information including coordinate information and direction information of the reference line in the robot's created environment map; determining, in the environment map, second positioning information matching the reference line according to the first positioning information of the reference line; and correcting the robot's current positioning information according to the first positioning information and the second positioning information. The method corrects positioning errors and improves positioning accuracy.

Description

Robot positioning correction method, apparatus, and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a robot positioning correction method, apparatus, and storage medium.
Background
With the development of artificial intelligence technology, mobile robots are being developed rapidly, and some service robots, such as cleaning robots, have gradually entered people's daily life. As cleaning robots become more intelligent, their usage in homes keeps increasing. Regardless of the type of robot, navigation and positioning are required whenever the robot moves autonomously.
To save cost, most cleaning robots today rely on inertial navigation and collision sensing to map and localize within the cleaning environment (e.g., an indoor environment). However, inertial navigation depends heavily on the odometry generated by the robot's wheels. In actual use, the wheels slip on the ground to some degree, and the inertial navigation process accumulates errors, which degrades positioning accuracy.
Disclosure of Invention
The application provides a robot positioning correction method, device, and storage medium to improve positioning accuracy.
In a first aspect, the present application provides a positioning correction method for a robot, including:
acquiring first positioning information of a reference line according to the acquired environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot;
determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line;
and correcting the current positioning information of the robot according to the first positioning information and the second positioning information.
In a second aspect, the present application provides a robot comprising:
the camera comprises a first lens, a second lens, a processor and a memory;
wherein the memory is used for storing executable instructions of the processor;
the first lens is used for acquiring a first environment image;
the second lens assembly is used for collecting a second environment image;
the processor is configured to implement, via execution of the executable instructions:
acquiring first positioning information of a reference line according to the acquired first environment image and the acquired second environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot;
determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line;
and correcting the current positioning information of the robot according to the first positioning information and the second positioning information.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of the first aspects via execution of the executable instructions.
According to the positioning correction method, the positioning correction device and the storage medium of the robot, first positioning information of a reference line is obtained according to an acquired environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot; determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line; according to the first positioning information and the second positioning information, the current positioning information of the robot is corrected, so that accumulated errors in the robot positioning process can be eliminated, and the positioning accuracy is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is an application scenario diagram provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an embodiment of a positioning correction method for a robot according to the present disclosure;
FIG. 3 is a schematic diagram of an embodiment of a method provided herein;
FIG. 4 is a schematic diagram illustrating a lens position relationship according to an embodiment of the method provided by the present application;
FIG. 5 is a schematic diagram of a first edge pixel point extracted according to an embodiment of the method provided by the present application;
FIG. 6 is a schematic three-dimensional coordinate diagram of a first edge pixel point according to an embodiment of the method provided by the present application;
FIG. 7 is a schematic diagram of a second edge pixel point extracted according to an embodiment of the method provided by the present application;
FIG. 8 is a schematic diagram of a projected pixel according to an embodiment of the method provided in the present application;
fig. 9 is a schematic diagram of three-dimensional coordinates after optimization of a first edge pixel point according to an embodiment of the method provided by the present application;
FIG. 10 is a schematic illustration of an environment map according to an embodiment of the method provided herein;
FIG. 11 is a functional area partition diagram of an embodiment of the method provided by the present application;
FIG. 12 is a schematic structural diagram of an embodiment of a robot provided by the present application;
Fig. 13 is a schematic structural diagram of an embodiment of an electronic device provided in the present application.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "comprising" and "having," and any variations thereof, in the description and claims of this application and the drawings described herein are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Firstly, the application scenario related to the present application is introduced:
the method provided by the embodiment of the application is applied to intelligent robots, such as floor sweeping robots and other service robots, so that the positioning accuracy is improved.
The robot in the embodiments of the application may be equipped with an image acquisition assembly. In one implementation, the image acquisition assembly may include two image acquisition units: an RGB image acquisition unit and a depth image acquisition unit, where the depth image acquisition unit may be a structured-light lens or a time-of-flight (TOF) lens.
In one embodiment, the two image acquisition units form a binocular camera assembly, for example with the TOF lens disposed above and the RGB lens disposed below. The two lenses share a certain common viewing area.
In one embodiment, as shown in fig. 1, the robot 10 cleans the area to be cleaned back and forth along a zigzag path. If a collision or similar event occurs during zigzag cleaning, the positioning drift over a 4-meter back-and-forth pass can reach as much as one body length. During cleaning, the robot 10 turns around when it encounters a wall, and when it turns into the adjacent channel 11 there is a partial common viewing area, i.e., the same skirting line on the wall can be seen. Skirting lines on the same wall surface of a typical room are parallel to each other. Therefore, in the scheme of the application, when the robot sees the same skirting line again during cleaning, part of the accumulated positioning error can be eliminated based on the parallel relationship between the skirting lines.
In the method of the embodiments of the application, the execution subject may be a processor of the robot, where the processor may be integrated in the robot, separated from the robot and integrated in a controller, or located in an external device, such as a server, that communicates with the robot; the embodiments of the application are not limited in this respect.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of an embodiment of a method for correcting robot positioning according to the present application. As shown in fig. 2, the method provided by this embodiment includes:
step 101, acquiring first positioning information of a reference line according to an acquired environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot;
specifically, as shown in fig. 3, in the zigzag cleaning process, the robot cleans a room with the navigation channel as a guide (for example, the distance between the navigation channels is fixed), and when a kick line (a straight line in fig. 3, for example, is an upper edge of the kick line) is observed for the first time, the positioning information (direction information and coordinate information in the environment map) of the kick line is recorded in the environment map. When the robot enters the next channel and observes the skirting line again, the current first positioning information of the skirting line can be acquired through the acquired environment image. The current direction information and coordinate information are obtained, for example, from depth information and pixel coordinates of the environment image.
And 102, determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line.
Specifically, according to the first positioning information of the reference line, the previously marked positioning information of the same reference line (i.e., the second positioning information) is searched for in the environment map, that is, the positioning information of the reference line that was determined earlier, such as the positioning information of the left skirting line in fig. 3. The two skirting lines in fig. 3 are actually the same skirting line; their positioning information differs because of the positioning error.
And 103, correcting the current positioning information of the robot according to the first positioning information and the second positioning information.
Specifically, the direction information included in the current positioning information of the robot may be corrected according to the direction information included in the first positioning information of the reference line and the direction information included in the second positioning information of the reference line;
and correcting the coordinate information included in the current positioning information of the robot according to the coordinate information included in the first positioning information of the reference line and the coordinate information included in the second positioning information of the reference line.
As shown in fig. 3, the distance between the robot and the skirting line can be calculated from the depth information of the captured environment image. Since the coordinates and direction of the skirting line are known, the angle in the robot's current positioning information (e.g., the positioning information measured by the IMU) can be corrected by the difference between direction 1 (e.g., the angle with the X axis) and direction 2 (e.g., the angle with the X axis). Secondly, since the position coordinates of the skirting line in the environment map are fixed, the coordinates (X and Y values) in the current positioning information can be corrected using the first positioning information of the currently observed skirting line and the second positioning information obtained when the skirting line was observed earlier. In one implementation, the offset of the robot in the X direction during channel switching is generally considered small, and when the robot observes the skirting line again, the Y2 value obtained at this time should equal the previous Y1 minus the Y offset Dis1, so both the angle and the coordinates of the IMU information can be corrected.
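A minimal sketch of this correction step in Python, assuming the environment map stores each skirting line's direction as an angle to the X axis together with its coordinates in the map frame; the function and field names are illustrative, not taken from the patent:

```python
import numpy as np

def correct_pose(robot_pose, observed_line, mapped_line):
    """Correct the robot's (x, y, theta) pose using a re-observed skirting line.

    robot_pose:    dict with 'x', 'y', 'theta' (current IMU/odometry estimate)
    observed_line: dict with 'angle' and 'y' as measured now (direction 1)
    mapped_line:   dict with 'angle' and 'y' as stored in the map (direction 2)
    All angles are in radians, coordinates in the map frame.
    """
    # The same physical line must have the same direction in the map frame,
    # so any angular difference is attributed to accumulated heading drift.
    angle_error = observed_line['angle'] - mapped_line['angle']
    corrected_theta = robot_pose['theta'] - angle_error

    # The line's position in the map is fixed, so the discrepancy between the
    # measured and stored Y values is attributed to odometry drift. (The patent
    # notes that when channels are Dis1 apart, the expected Y2 equals Y1 - Dis1;
    # here the map is assumed to already store the expected value.)
    y_error = observed_line['y'] - mapped_line['y']
    corrected_y = robot_pose['y'] - y_error

    # The X offset during channel switching is assumed to be small and left unchanged.
    return {'x': robot_pose['x'], 'y': corrected_y, 'theta': corrected_theta}
```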
Further, if the second positioning information of the reference line does not exist in the environment map, the first positioning information of the reference line is added to the environment map according to the current positioning information of the robot.
Specifically, the robot may establish and refine an environment map in the cleaning process, and may add the first positioning information of the newly identified reference line to the environment map. I.e. the coordinates of the skirting line are marked in the map.
After the first positioning information of the reference line is determined, color information may be given to the currently identified skirting line according to RGB color information, that is, the color information corresponding to the reference line is determined according to the RGB color information in the second environment image and is added to the environment map.
The color information may be the skirting line color alone, the wall color, the floor color, or a combination of these colors. For example, the skirting line is black and the wall is white.
TABLE 1

| Skirting line height | Skirting line color | Wall color | Floor color | Skirting line angle | Location information |
| --- | --- | --- | --- | --- | --- |
| 80 mm | White | Yellow | Textured floor | | (x, y) |
| 80 | 0 0 0 | 238 220 130 | 139 69 19 | 0 | (x, y) |

The third row of Table 1 represents the marker stored in the environment map, with the colors represented by grayscale values.
Further, when the second positioning information matched with the reference line is determined, the second positioning information can be determined by combining the first positioning information and the color information corresponding to the reference line.
In some embodiments, in addition to extracting the positioning information of the skirting line, other straight line segments of the functional areas in the indoor environment may also be extracted, such as the positioning information of the straight line at the boundary between the door frame and the wall. For example, the first positioning information of the doorframe edge line can be extracted according to the aforementioned method, the following Table 2 can be built and marked in the environment map, and positioning correction can be performed using the positioning information of the doorframe edge line.
TABLE 2

| Doorframe color (left side) | Wall color (right side) | Edge position coordinates |
| --- | --- | --- |
| 139 71 38 | 255 250 250 | (x, y) |
Fig. 10 shows one way in which the positioning information of the reference lines marked in the environment map may be represented.
According to the method, first positioning information of a reference line is acquired according to an acquired environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot; determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line; according to the first positioning information and the second positioning information, the current positioning information of the robot is corrected, so that accumulated errors in the robot positioning process can be eliminated, and the positioning accuracy is improved.
On the basis of the above embodiment, further, the acquired environment images include a first environment image acquired by a first lens and a second environment image acquired by a second lens, and the determining the positioning information of the reference line in step 101 may be specifically implemented by:
extracting at least two first edge pixel points corresponding to the reference line in the first environment image;
determining the three-dimensional coordinates of each first edge pixel point under a first lens coordinate system according to the pixel coordinates of each first edge pixel point;
optimizing the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates of the projection pixel points of each first edge pixel point on the second environment image and the pixel coordinates of at least two second edge pixel points corresponding to the reference line extracted from the second environment image to obtain the optimized three-dimensional coordinates of each first edge pixel point;
and determining first positioning information of the reference line according to the optimized three-dimensional coordinates of the first edge pixel points.
In one implementation, as shown in fig. 4, the first lens may be a TOF lens and the second lens may be an RGB lens. In the following examples, the reference line is described by taking a skirting line as an example.
Because the skirting line is installed on the ground, its upper and lower edges are parallel to the horizontal floor. In the first environment image acquired by the TOF lens, the depth value of each pixel is known. The three-dimensional coordinates of the point cloud at the skirting-line edge in the TOF lens coordinate system are first converted into three-dimensional coordinates in the RGB lens coordinate system according to the camera's external parameters (namely, the relative pose relationship between the TOF lens and the RGB lens). The positions of the projection pixel points of the point cloud on the second environment image (such as the RGB image) are then calculated according to the internal parameters of the RGB lens. Because the RGB image contains rich color features, the edge pixel positions of the skirting line can easily be extracted from it. Further, the three-dimensional coordinates of the first edge pixel points in the first lens coordinate system are optimized according to the pixel coordinates of the projection pixel points and the pixel coordinates of the second edge pixel points extracted from the second environment image, and the first positioning information of the skirting line is then determined from the optimized three-dimensional coordinates of the first edge pixel points.
In an implementation manner, at least two first edge pixel points corresponding to the reference line in the first environment image may be extracted according to luminance information in the first environment image, and a pixel coordinate of each first edge pixel point may be obtained. As shown in fig. 5, the extracted edge pixel points of the skirting line are shown.
Specifically, edge information of a skirting line in the first environment image, namely first edge pixel points, can be extracted according to the brightness gradient of the first environment image, so that pixel coordinates of each first edge pixel point are obtained.
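A possible implementation of this brightness-gradient edge extraction, sketched with OpenCV; the Canny thresholds and the absence of any further line filtering are simplifying assumptions:

```python
import cv2
import numpy as np

def extract_first_edge_pixels(first_image, low_thresh=50, high_thresh=150):
    """Extract candidate edge pixels of the skirting line from the first
    environment image using its brightness gradient.

    first_image: single-channel brightness/IR image from the first lens
    Returns an (N, 2) integer array of (u, v) pixel coordinates.
    """
    # Smooth to suppress noise, then detect edges from the brightness gradient.
    blurred = cv2.GaussianBlur(first_image, (5, 5), 0)
    edges = cv2.Canny(blurred, low_thresh, high_thresh)

    # Collect edge pixel coordinates; in practice a further step would keep only
    # the pixels belonging to the (roughly horizontal) skirting-line edge.
    v, u = np.nonzero(edges)
    return np.stack([u, v], axis=1)
```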
In an implementation manner, the three-dimensional coordinates of each first edge pixel point in the first lens coordinate system are determined, which may specifically be implemented by the following steps:
and determining the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates and the depth information of each first edge pixel point and the internal parameters of the first lens.
Specifically, p_i = [x_i, y_i, z_i], i = 1, ..., N, where N is the number of extracted first edge pixel points and p_i is the three-dimensional coordinate of the i-th first edge pixel point.
x_i = depth_i * (u_i - c_x) / f_x
y_i = depth_i * (v_i - c_y) / f_y
z_i = depth_i
where depth_i denotes the depth corresponding to the pixel coordinates (u_i, v_i) of the first edge pixel point, and the internal reference matrix K_1 of the first lens is
K_1 = [ f_x  0    c_x ]
      [ 0    f_y  c_y ]
      [ 0    0    1   ]
where c_x and c_y respectively represent the offset of the optical axis in the image plane, i.e., the pixel coordinates of the camera's principal point, and f_x and f_y respectively represent the focal lengths in the horizontal and vertical directions. The resulting three-dimensional coordinates are shown in fig. 6.
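The back-projection above can be written compactly with NumPy; this sketch assumes the depth map is registered to the first lens and expressed in the same units as the desired coordinates:

```python
import numpy as np

def backproject_to_first_lens(edge_pixels, depth_map, K1):
    """Back-project first edge pixels into the first lens coordinate system.

    edge_pixels: (N, 2) integer array of (u_i, v_i) pixel coordinates
    depth_map:   depth image aligned with the first lens; depth_map[v, u] = depth_i
    K1:          3x3 internal reference matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    Returns an (N, 3) array of points p_i = [x_i, y_i, z_i].
    """
    fx, fy = K1[0, 0], K1[1, 1]
    cx, cy = K1[0, 2], K1[1, 2]
    u = edge_pixels[:, 0].astype(float)
    v = edge_pixels[:, 1].astype(float)
    depth = depth_map[edge_pixels[:, 1], edge_pixels[:, 0]].astype(float)

    x = depth * (u - cx) / fx   # x_i = depth_i * (u_i - c_x) / f_x
    y = depth * (v - cy) / fy   # y_i = depth_i * (v_i - c_y) / f_y
    z = depth                   # z_i = depth_i
    return np.stack([x, y, z], axis=1)
```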
In an implementation manner, the pixel coordinates of the projection pixel points of each first edge pixel point in the second environment image are determined, and the method specifically includes the following steps:
determining the position coordinates of the corresponding three-dimensional points of the first edge pixel points in the second lens coordinate system according to the relative pose relationship between the first lens and the second lens and the three-dimensional coordinates of the first edge pixel points in the first lens coordinate system;
and determining the pixel coordinates of the projection pixel points of the first edge pixel points in the second environment image according to the internal reference information of the second lens and the position coordinates of the corresponding three-dimensional points of the first edge pixel points in the second lens coordinate system.
Specifically, the position, on the second environment image, of the projection pixel point of the point cloud in the first lens coordinate system can be calculated by the following formula:
p_i' = R * p_i + t
where p_i' = [x_i', y_i', z_i'] represents the position coordinates, in the second lens coordinate system, of the three-dimensional point corresponding to the first edge pixel point, R represents the rotation matrix between the first lens and the second lens, and t represents the translation between the first lens and the second lens, as shown in fig. 4, which illustrates the relative pose relationship between the first lens (TOF lens) and the second lens (RGB lens).
Further, according to the position coordinates of the three-dimensional point corresponding to the first edge pixel point in the second lens coordinate system, the pixel coordinates of the projection pixel point of the first edge pixel point in the second environment image are determined by using the following formula:
[U_i, V_i, 1]^T = (1 / z_i') * K_2 * p_i'
where (U_i, V_i) are the pixel coordinates of the projection pixel point, and K_2 is the internal reference matrix of the second lens, whose entries have the same meaning as those of the internal reference matrix of the first lens. Note that U_i and V_i are functions of the depth information of the first edge pixel point.
The white dots shown in fig. 8 are projected pixel dots.
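A sketch of the two projection steps (rigid transform with R and t, then projection through K_2); it assumes undistorted images and points with positive depth in the second lens frame:

```python
import numpy as np

def project_to_second_image(points_first, R, t, K2):
    """Project points from the first lens coordinate system into the second image.

    points_first: (N, 3) points p_i in the first lens coordinate system
    R, t:         rotation (3x3) and translation (3,) from the first to the second lens
    K2:           3x3 internal reference matrix of the second lens
    Returns an (N, 2) array of projected pixel coordinates (U_i, V_i).
    """
    # Rigid transform into the second lens coordinate system: p_i' = R * p_i + t
    points_second = points_first @ R.T + t

    # Pinhole projection through K2, then divide by the depth z_i'.
    homogeneous = points_second @ K2.T
    U = homogeneous[:, 0] / points_second[:, 2]
    V = homogeneous[:, 1] / points_second[:, 2]
    return np.stack([U, V], axis=1)
```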
On the basis of the above embodiment, further, according to RGB color information in the second environment image, each second edge pixel point corresponding to a skirting line in the second environment image may be extracted, and a pixel coordinate of each second edge pixel point may be obtained;
and generating a line segment corresponding to each second edge pixel according to the pixel coordinate of each second edge pixel point.
Specifically, a color gradient may be used to extract second edge pixel points in a second environment image (such as an RGB image), or other edge detection algorithms may be used to extract second edge pixel points, and before extraction, the RGB image may be subjected to distortion removal processing. Assuming that the line segment equation of the extracted second edge pixel point is:
aU1+bV1+c=0
wherein (U1, V1) is the pixel coordinate of the second edge pixel point, and a, b, c are the parameters of the line segment.
And generating a line segment corresponding to each second edge pixel by using the line segment equation according to the pixel coordinate of each second edge pixel point, namely determining the values of the parameters a, b and c. As shown in fig. 7, the white line is a line segment formed by the second edge pixel points.
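One way to obtain the parameters a, b, c is a total-least-squares fit to the second edge pixel points; the SVD-based fit below is an illustrative choice, not the method prescribed by the patent:

```python
import numpy as np

def fit_edge_line(second_edge_pixels):
    """Fit the line a*U + b*V + c = 0 to the second edge pixel points.

    second_edge_pixels: (M, 2) array of (U1, V1) pixel coordinates
    Returns the line parameters (a, b, c) with (a, b) normalized.
    """
    pts = np.asarray(second_edge_pixels, dtype=float)
    mean = pts.mean(axis=0)
    centred = pts - mean

    # The right singular vector with the smallest singular value is the
    # normal of the best-fit line in the total-least-squares sense.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    a, b = vt[-1]
    c = -(a * mean[0] + b * mean[1])  # the fitted line passes through the centroid
    return a, b, c
```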
Further, in an implementation manner, the three-dimensional coordinates of the first edge pixel point may be optimized through the following steps:
optimizing the depth information of the first edge pixel points corresponding to the projection pixel points according to the distance between the projection pixel points and the line segments to obtain the optimized depth information of the first edge pixel points;
and determining the optimized three-dimensional coordinates of the first edge pixel points according to the optimized depth information of the first edge pixel points and the pixel coordinates of the first edge pixel points in the first environment image.
Specifically, the distance between each projection pixel point and the line segment is calculated, and projection pixel points whose distance exceeds a preset threshold are eliminated. The distance can be calculated by the following formula:
d_i = |a * U_i + b * V_i + c| / sqrt(a^2 + b^2)
where d_i is the distance between the i-th projection pixel point and the line segment, and (U_i, V_i) are the pixel coordinates of the i-th projection pixel point.
Since U_i and V_i are functions of the depth information of the first edge pixel point, optimizing the pixel coordinates of a projection pixel point amounts to optimizing the depth information of the corresponding first edge pixel point. The optimization objective is to make the projection pixel point fall on the generated line segment, i.e., to minimize the distance between the projection pixel point and the line segment. This finally yields the optimized depth information of the first edge pixel points, from which the optimized three-dimensional coordinates of the first edge pixel points are obtained according to the above formula for calculating the three-dimensional coordinates. Fig. 9 shows the three-dimensional coordinates of the first edge pixel points after optimization.
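A per-point sketch of this depth refinement: for each first edge pixel the candidate depth is adjusted so that the reprojected point's distance d_i to the fitted line is minimized. The rejection threshold and the +/-20% search window around the initial depth are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def refine_depth(u, v, depth_init, K1, R, t, K2, line_abc, max_dist=5.0):
    """Refine the depth of one first edge pixel so that its projection onto the
    second image falls on the fitted line a*U + b*V + c = 0.

    Returns the refined depth, or None if the point is rejected as an outlier.
    """
    a, b, c = line_abc
    fx, fy, cx, cy = K1[0, 0], K1[1, 1], K1[0, 2], K1[1, 2]

    def point_line_distance(depth):
        # Back-project (u, v) with the candidate depth, transform to the second
        # lens frame, project through K2, then compute d_i to the fitted line.
        p = np.array([depth * (u - cx) / fx, depth * (v - cy) / fy, depth])
        q = R @ p + t
        proj = K2 @ q
        U, V = proj[0] / q[2], proj[1] / q[2]
        return abs(a * U + b * V + c) / np.hypot(a, b)

    # Reject projection points that start too far from the line (threshold assumed).
    if point_line_distance(depth_init) > max_dist:
        return None

    # One-dimensional search over depth within an assumed +/-20% window.
    result = minimize_scalar(point_line_distance,
                             bounds=(0.8 * depth_init, 1.2 * depth_init),
                             method='bounded')
    return result.x
```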
On the basis of the foregoing embodiment, the method of this embodiment may further include:
and correcting the angle drift of the gyroscope of the robot according to the direction information included in the first positioning information and the direction information included in the second positioning information of the reference line.
Specifically, the direction information includes, for example, the angle between the reference line and the X axis. If the gyroscope has no drift, the two calculated angle values should be the same; if they differ, the angular offset of the gyroscope can be corrected according to the difference between the two calculated angle values.
Furthermore, the environment map can be divided into functional areas according to first positioning information and corresponding color information of a reference line included in the environment map;
and determining the functional area where the robot is located according to the first positioning information of the reference line and the corresponding color information.
Specifically, the skirting materials and colors of different functional areas (e.g., living room, bedroom, study) may differ, and the functional areas of the environment map can be divided according to the positions of the skirting lines marked in the environment map and the continuity of their colors (skirting line, wall surface, floor surface, and their color combinations). As shown in fig. 11, different functional areas (living room, bedroom) can be divided according to the color attributes of the skirting lines, thereby realizing the area-division function.
When the robot needs to be relocalized, the functional area it is in can first be inferred from the color of the currently identified skirting line; alternatively, after cleaning, the skirting line can be matched against the constructed environment map to determine which functional area the robot is currently in, providing a navigation basis for the subsequent steps after cleaning is completed (returning to the dock for charging or cleaning the next functional area).
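A simple way to make this guess is to compare the currently observed color combination against the markers stored per functional area; the nearest-color matching below is one plausible scheme, not specified in the patent:

```python
import numpy as np

def guess_functional_area(observed_colors, area_markers):
    """Guess the functional area from the currently observed color combination.

    observed_colors: (3, 3) array whose rows are the skirting, wall, and floor RGB values
    area_markers:    dict mapping an area name to its stored (3, 3) color marker
    Returns the name of the closest-matching area.
    """
    observed = np.asarray(observed_colors, dtype=float)
    best_area, best_dist = None, float('inf')
    for area, marker in area_markers.items():
        # Euclidean distance over the stacked (skirting, wall, floor) colors.
        dist = np.linalg.norm(observed - np.asarray(marker, dtype=float))
        if dist < best_dist:
            best_area, best_dist = area, dist
    return best_area
```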
Further, the method of this embodiment further includes:
and determining the cleaning direction of the robot according to the direction information included in the first positioning information of the reference line.
Specifically, the cleaning direction of a typical robot coincides with its heading when it leaves the dock. To improve cleaning efficiency, the robot should clean the room in a direction perpendicular to the wall surface. Using the direction information of the identified skirting line, the cleaning direction of the robot can be kept perpendicular to the wall surface (i.e., to the direction of the skirting line) during cleaning, thereby ensuring cleaning efficiency.
Further, the method of this embodiment further includes:
and correcting the depth value measured by the first lens according to the three-dimensional coordinate of each first edge pixel point corresponding to the reference line in the first lens coordinate system and the optimized three-dimensional coordinate of each first edge pixel point.
Specifically, the depth value measured by the first lens may be corrected using the optimized depth values described above (the z coordinates of the optimized three-dimensional coordinates corresponding to the reference line) together with the depth values obtained from the first environment image (the z coordinates of the initial three-dimensional coordinates).
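The patent does not specify the correction model; as one hedged example, a single multiplicative factor can be estimated from the measured and optimized z values of the edge points:

```python
import numpy as np

def estimate_depth_scale(z_measured, z_optimized):
    """Estimate a multiplicative correction for the first lens' depth readings.

    z_measured:  z coordinates of the edge points as measured by the first lens
    z_optimized: z coordinates of the same points after the optimization above
    Returns a scale factor so that corrected_depth = scale * raw_depth.
    """
    z_measured = np.asarray(z_measured, dtype=float)
    z_optimized = np.asarray(z_optimized, dtype=float)
    # The median ratio is robust to a few remaining outliers.
    return float(np.median(z_optimized / z_measured))
```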
Fig. 12 is a structural diagram of an embodiment of a robot provided in the present application, and as shown in fig. 12, the robot includes:
a first lens 121, a second lens 122, a processor 123, and a memory 124 for storing executable instructions for the processor 123.
Optionally, the method may further include: and the communication interface is used for realizing communication with other equipment.
The first lens is used for collecting a first environment image, and the second lens is used for collecting a second environment image.
The above components may communicate over one or more buses.
Wherein the processor 123 is configured to perform, via execution of the executable instructions:
acquiring first positioning information of a reference line according to the acquired first environment image and the acquired second environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot;
determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line;
and correcting the current positioning information of the robot according to the first positioning information and the second positioning information.
In one possible implementation, the acquired environmental image includes: a first ambient image captured by the first lens and a second ambient image captured by the second lens, the processor 123 is configured to:
extracting at least two first edge pixel points corresponding to the reference line in the first environment image;
determining the three-dimensional coordinates of each first edge pixel point under a first lens coordinate system according to the pixel coordinates of each first edge pixel point;
optimizing the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates of the projection pixel points of each first edge pixel point on the second environment image and the pixel coordinates of at least two second edge pixel points corresponding to the reference line extracted from the second environment image to obtain the optimized three-dimensional coordinates of each first edge pixel point;
and determining first positioning information of the reference line according to the optimized three-dimensional coordinates of the first edge pixel points.
In one possible implementation, the processor 123 is configured to:
extracting at least two first edge pixel points corresponding to the reference line in the first environment image according to the brightness information in the first environment image, and acquiring the pixel coordinates of each first edge pixel point;
correspondingly, the determining the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates of each first edge pixel point includes:
and determining the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates and the depth information of each first edge pixel point and the internal parameters of the first lens.
In one possible implementation, the processor 123 is configured to:
determining the position coordinates of the corresponding three-dimensional points of the first edge pixel points in the second lens coordinate system according to the relative pose relationship between the first lens and the second lens and the three-dimensional coordinates of the first edge pixel points in the first lens coordinate system;
and determining the pixel coordinates of the projection pixel points of the first edge pixel points in the second environment image according to the internal reference information of the second lens and the position coordinates of the corresponding three-dimensional points of the first edge pixel points in the second lens coordinate system.
In one possible implementation, the processor 123 is configured to:
extracting each second edge pixel point according to RGB color information in the second environment image, and acquiring pixel coordinates of each second edge pixel point;
generating a line segment corresponding to each second edge pixel according to the pixel coordinate of each second edge pixel point;
optimizing the depth information of the first edge pixel points corresponding to the projection pixel points according to the distance between the projection pixel points and the line segments to obtain the optimized depth information of the first edge pixel points;
and determining the optimized three-dimensional coordinates of the first edge pixel points according to the optimized depth information of the first edge pixel points and the pixel coordinates of the first edge pixel points in the first environment image.
In one possible implementation, the processor 123 is configured to:
and if the second positioning information of the reference line does not exist in the environment map, adding the first positioning information of the reference line into the environment map according to the current positioning information of the robot.
In one possible implementation, the processor 123 is configured to:
and determining color information corresponding to the reference line according to the RGB color information in the second environment image, and adding the color information to the environment map.
In one possible implementation, the processor 123 is configured to:
and determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line and the corresponding color information.
In one possible implementation, the processor 123 is configured to:
and correcting the angle drift of the gyroscope of the robot according to the direction information included in the first positioning information and the direction information included in the second positioning information of the reference line.
In one possible implementation, the processor 123 is configured to:
and determining the cleaning direction of the robot according to the direction information included in the first positioning information of the reference line.
In one possible implementation, the processor 123 is configured to:
and correcting the depth value measured by the first lens according to the three-dimensional coordinate of each first edge pixel point corresponding to the reference line in the first lens coordinate system and the optimized three-dimensional coordinate of each first edge pixel point.
In one possible implementation, the processor 123 is configured to:
dividing the environment map into functional areas according to first positioning information and corresponding color information of a reference line included in the environment map;
and determining the functional area where the robot is located according to the first positioning information of the reference line and the corresponding color information.
The device of this embodiment is configured to execute the corresponding method in the foregoing method embodiment by executing the executable instruction, and the specific implementation process thereof may refer to the foregoing method embodiment, which is not described herein again.
Fig. 13 is a block diagram of an embodiment of an electronic device provided in the present application, and as shown in fig. 13, the electronic device includes:
a processor 131, and a memory 132 for storing executable instructions for the processor 131.
Optionally, the method may further include: a communication interface 133 for enabling communication with other devices.
The above components may communicate over one or more buses.
The processor 131 is configured to execute the corresponding method in the foregoing method embodiment by executing the executable instruction, and the specific implementation process of the method may refer to the foregoing method embodiment, which is not described herein again.
The electronic device may be another control device, such as a server or the like, in communication with the robot.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method in the foregoing method embodiment is implemented.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. A positioning correction method for a robot, comprising:
acquiring first positioning information of a reference line according to the acquired environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot;
determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line;
and correcting the current positioning information of the robot according to the first positioning information and the second positioning information.
2. The method of claim 1, wherein the acquired environment image comprises: a first environment image acquired by a first lens and a second environment image acquired by a second lens, and the acquiring first positioning information of a reference line according to the acquired environment image comprises:
extracting at least two first edge pixel points corresponding to the reference line in the first environment image;
determining the three-dimensional coordinates of each first edge pixel point under a first lens coordinate system according to the pixel coordinates of each first edge pixel point;
optimizing the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates of the projection pixel points of each first edge pixel point on the second environment image and the pixel coordinates of at least two second edge pixel points corresponding to the reference line extracted from the second environment image to obtain the optimized three-dimensional coordinates of each first edge pixel point;
and determining first positioning information of the reference line according to the optimized three-dimensional coordinates of the first edge pixel points.
3. The method of claim 2, wherein the extracting at least two first edge pixel points corresponding to the reference line in the first environment image comprises:
extracting at least two first edge pixel points corresponding to the reference line in the first environment image according to the brightness information in the first environment image, and acquiring the pixel coordinates of each first edge pixel point;
correspondingly, the determining the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates of each first edge pixel point includes:
and determining the three-dimensional coordinates of each first edge pixel point under the first lens coordinate system according to the pixel coordinates and the depth information of each first edge pixel point and the internal parameters of the first lens.
4. The method of claim 2, wherein before optimizing the three-dimensional coordinates of each of the first edge pixels in the first lens coordinate system, the method further comprises:
determining the position coordinates of the corresponding three-dimensional points of the first edge pixel points in the second lens coordinate system according to the relative pose relationship between the first lens and the second lens and the three-dimensional coordinates of the first edge pixel points in the first lens coordinate system;
and determining the pixel coordinates of the projection pixel points of the first edge pixel points in the second environment image according to the internal reference information of the second lens and the position coordinates of the corresponding three-dimensional points of the first edge pixel points in the second lens coordinate system.
5. The method of claim 2, wherein before optimizing the three-dimensional coordinates of each of the first edge pixels in the first lens coordinate system, the method further comprises:
extracting each second edge pixel point according to RGB color information in the second environment image, and acquiring pixel coordinates of each second edge pixel point;
generating a line segment corresponding to each second edge pixel according to the pixel coordinate of each second edge pixel point;
correspondingly, the optimizing the three-dimensional coordinates of each first edge pixel point in the first lens coordinate system includes:
optimizing the depth information of the first edge pixel points corresponding to the projection pixel points according to the distance between the projection pixel points and the line segments to obtain the optimized depth information of the first edge pixel points;
and determining the optimized three-dimensional coordinates of the first edge pixel points according to the optimized depth information of the first edge pixel points and the pixel coordinates of the first edge pixel points in the first environment image.
6. The method of any one of claims 1-5, further comprising:
and if the second positioning information of the reference line does not exist in the environment map, adding the first positioning information of the reference line into the environment map according to the current positioning information of the robot.
7. The method of any of claims 2-5, further comprising:
and determining color information corresponding to the reference line according to the RGB color information in the second environment image, and adding the color information to the environment map.
8. The method of claim 7, wherein determining second positioning information matching the reference line in the environment map according to the first positioning information of the reference line comprises:
and determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line and the corresponding color information.
9. The method of any one of claims 1-5, further comprising:
and correcting the angle drift of the gyroscope of the robot according to the direction information included in the first positioning information and the direction information included in the second positioning information of the reference line.
10. The method of any one of claims 1-5, further comprising:
and determining the cleaning direction of the robot according to the direction information included in the first positioning information of the reference line.
11. The method of any of claims 2-5, further comprising:
and correcting the depth value measured by the first lens according to the three-dimensional coordinate of each first edge pixel point corresponding to the reference line in the first lens coordinate system and the optimized three-dimensional coordinate of each first edge pixel point.
12. The method of claim 7, further comprising:
dividing the environment map into functional areas according to first positioning information and corresponding color information of a reference line included in the environment map;
and determining the functional area where the robot is located according to the first positioning information of the reference line and the corresponding color information.
13. A robot, comprising:
the camera comprises a first lens, a second lens, a processor and a memory;
wherein the memory is used for storing executable instructions of the processor;
the first lens is used for acquiring a first environment image;
the second lens is used for acquiring a second environment image;
the processor is configured to implement, via execution of the executable instructions:
acquiring first positioning information of a reference line according to the acquired first environment image and the acquired second environment image; the first positioning information includes: coordinate information and direction information of the reference line in the created environment map of the robot;
determining second positioning information matched with the reference line in the environment map according to the first positioning information of the reference line;
and correcting the current positioning information of the robot according to the first positioning information and the second positioning information.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1-12.
15. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-12 via execution of the executable instructions.
CN201910963898.5A 2019-10-11 2019-10-11 Robot positioning correction method, apparatus, and storage medium Pending CN112650207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910963898.5A CN112650207A (en) 2019-10-11 2019-10-11 Robot positioning correction method, apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910963898.5A CN112650207A (en) 2019-10-11 2019-10-11 Robot positioning correction method, apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN112650207A true CN112650207A (en) 2021-04-13

Family

ID=75342743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910963898.5A Pending CN112650207A (en) 2019-10-11 2019-10-11 Robot positioning correction method, apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN112650207A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667303A (en) * 2009-09-29 2010-03-10 浙江工业大学 Three-dimensional reconstruction method based on coding structured light
CN102656532A (en) * 2009-10-30 2012-09-05 悠进机器人股份公司 Map generating and updating method for mobile robot position recognition
CN102026013A (en) * 2010-12-18 2011-04-20 浙江大学 Stereo video matching method based on affine transformation
EP2741161A2 (en) * 2012-12-10 2014-06-11 Miele & Cie. KG Autonomous apparatus for processing a surface and method for detecting the position of the autonomous apparatus
CN104346829A (en) * 2013-07-29 2015-02-11 中国农业机械化科学研究院 Three-dimensional color reconstruction system and method based on PMD (photonic mixer device) cameras and photographing head
CN103400392A (en) * 2013-08-19 2013-11-20 山东鲁能智能技术有限公司 Binocular vision navigation system and method based on inspection robot in transformer substation
US20170017238A1 (en) * 2013-12-27 2017-01-19 Komatsu Ltd. Mining machine management system and management method
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN104933718A (en) * 2015-06-23 2015-09-23 广东省自动化研究所 Physical coordinate positioning method based on binocular vision
CN106989746A (en) * 2017-03-27 2017-07-28 远形时空科技(北京)有限公司 Air navigation aid and guider
CN108481327A (en) * 2018-05-31 2018-09-04 珠海市微半导体有限公司 A kind of positioning device, localization method and the robot of enhancing vision
CN109556616A (en) * 2018-11-09 2019-04-02 同济大学 A kind of automatic Jian Tu robot of view-based access control model label builds figure dressing method
CN109643127A (en) * 2018-11-19 2019-04-16 珊口(深圳)智能科技有限公司 Construct map, positioning, navigation, control method and system, mobile robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘晓晴 (Liu Xiaoqing): "Research on Image Processing and Positioning Technology Based on Binocular Vision" (基于双目视觉的图像处理与定位技术研究), China Master's Theses Full-text Database, Information Science and Technology, vol. 2018, no. 06, pages 138-1825 *

Similar Documents

Publication Publication Date Title
CN110522359B (en) Cleaning robot and control method of cleaning robot
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
WO2020019221A1 (en) Method, apparatus and robot for autonomous positioning and map creation
EP3825903A1 (en) Method, apparatus and storage medium for detecting small obstacles
JP2018522348A (en) Method and system for estimating the three-dimensional posture of a sensor
RU2621480C1 (en) Device for evaluating moving body position and method for evaluating moving body position
WO2020228694A1 (en) Camera pose information detection method and apparatus, and corresponding intelligent driving device
KR101617078B1 (en) Apparatus and method for image matching unmanned aerial vehicle image with map image
CN108519102B (en) Binocular vision mileage calculation method based on secondary projection
CN110262487B (en) Obstacle detection method, terminal and computer readable storage medium
CN103150728A (en) Vision positioning method in dynamic environment
CN108398139A (en) A kind of dynamic environment visual odometry method of fusion fish eye images and depth image
CN112805766B (en) Apparatus and method for updating detailed map
CN106650701A (en) Binocular vision-based method and apparatus for detecting barrier in indoor shadow environment
CN112950696A (en) Navigation map generation method and generation device and electronic equipment
CN103729860A (en) Image target tracking method and device
CN108983769B (en) Instant positioning and map construction optimization method and device
KR101333496B1 (en) Apparatus and Method for controlling a mobile robot on the basis of past map data
CN110070577B (en) Visual SLAM key frame and feature point selection method based on feature point distribution
KR101319526B1 (en) Method for providing location information of target using mobile robot
CN107403448B (en) Cost function generation method and cost function generation device
CN111198563B (en) Terrain identification method and system for dynamic motion of foot type robot
CN112650207A (en) Robot positioning correction method, apparatus, and storage medium
CN105335959A (en) Quick focusing method and device for imaging apparatus
CN111724432A (en) Object three-dimensional detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination