CN109074085B - Autonomous positioning and map building method and device and robot - Google Patents


Info

Publication number
CN109074085B
CN109074085B
Authority
CN
China
Prior art keywords
robot
map
image
point
landmark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201880001385.XA
Other languages
Chinese (zh)
Other versions
CN109074085A (en)
Inventor
徐泽元
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd
Publication of CN109074085A
Application granted
Publication of CN109074085B
Legal status: Active


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means, using a video camera in combination with image processing means

Abstract

The embodiments of the invention relate to an autonomous positioning and map building method, an autonomous positioning and map building device, and a robot. The method comprises the following steps: obtaining distance observations from the robot to landmark points; acquiring the position of the robot in a map; and adding the new landmark points that belong to fixed objects into the map, obtaining the pose of each new landmark point from the robot's position and the distance observations. Because only landmarks belonging to fixed objects are added to the map, the landmarks the robot refers to are unlikely to change, which reduces the influence of the surrounding environment on the robot's positioning and map building and makes both the robot's position and the landmark poses in the map more accurate.

Description

Autonomous positioning and map building method and device and robot
Technical Field
The embodiments of the invention relate to the field of artificial intelligence, and in particular to an autonomous positioning and map building method, an autonomous positioning and map building device, and a robot.
Background
SLAM (Simultaneous Localization and Mapping) means that a robot carrying specific sensors builds an incremental map of an unknown environment during its own motion, without any prior information about the environment, while simultaneously estimating its own pose, thereby achieving autonomous localization and navigation. With the development of science and technology, SLAM-based applications are increasing.
In the course of studying the prior art, the inventors found at least the following problems in the related art: when a robot is positioned, the position of the robot in the map at the current moment is usually estimated from the position of each landmark in the existing map and the robot's observations of those landmarks at the current moment, and the pose calculation of a new landmark in the map depends on the robot's position in the map and the distance observation. If the landmarks the robot refers to change, the robot's positioning becomes inaccurate, and the computed poses of new landmarks in the map become inaccurate as well. The positioning and map building of the robot are therefore strongly affected by the environment.
Disclosure of Invention
An object of the embodiments of the present invention is to provide an autonomous positioning and map building method, an apparatus, and a robot, which can reduce the influence of the surrounding environment on the autonomous positioning and map building of the robot.
In a first aspect, an embodiment of the present invention provides an autonomous positioning and map building method, where the method is applied to a robot, and the method includes:
obtaining distance observations from the robot to landmark points;
acquiring the position of the robot in a map;
and adding new landmark points belonging to fixed objects, among the landmark points, into the map, and obtaining the poses of the new landmark points according to the position and the distance observations.
In a second aspect, an embodiment of the present invention further provides an autonomous positioning and map building apparatus, where the apparatus is applied to a robot, and the apparatus includes:
an observation distance acquisition module, used for obtaining distance observations from the robot to landmark points;
a positioning module, used for acquiring the position of the robot in a map;
and a map building module, used for adding new landmark points belonging to fixed objects, among the landmark points, into the map, and obtaining the poses of the new landmark points according to the position and the distance observations.
In a third aspect, an embodiment of the present invention further provides a robot, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to the autonomous positioning and map building method and device and the robot, only landmarks belonging to fixed objects are added to the map, so the landmarks the robot refers to are unlikely to change; this reduces the influence of the surrounding environment on positioning and map building, making the robot's position and the landmark poses in the map more accurate.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a schematic diagram of an application scenario of the autonomous positioning and map building method and apparatus of the present invention;
FIG. 2 is a schematic diagram of robot positioning and map building in one embodiment of the present invention;
FIG. 3 is a flow chart of one embodiment of an autonomous positioning and map building method of the present invention;
FIG. 4 is a flowchart of the steps for obtaining a distance observation of a robot for a landmark according to an embodiment of the autonomous localization and map building method of the present invention;
FIG. 5 is a flow chart of one embodiment of an autonomous positioning and map building method of the present invention;
FIG. 6 is a schematic diagram of an embodiment of the autonomous positioning and mapping apparatus of the present invention;
FIG. 7 is a schematic diagram of an embodiment of the autonomous positioning and mapping apparatus of the present invention;
fig. 8 is a schematic diagram of a hardware structure of a robot according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The autonomous positioning and map building method and device provided by the invention are suitable for the application scenario shown in fig. 1, which comprises a robot 10. The robot 10 is a mobile robot, i.e., a machine with some artificial intelligence, such as a sweeping robot, a humanoid robot, or a self-driving car. The robot 10 may need to move in an unknown environment to accomplish a user's task or for other purposes. To achieve autonomous positioning and navigation while moving, it needs to build an incremental map and estimate its own position in that map.
For convenience, referring to fig. 2, the continuous motion of the robot 10 is divided into discrete time instants t = 1, …, k. The position of the robot 10 at each instant is denoted x_t, so x1, x2, …, xk constitute the motion trajectory of the robot 10. Suppose the map consists of many landmarks, such as y1, y2, and y3 in the figure. At each instant the robot 10 observes a portion of the landmarks, yielding distance observations z (i.e., the distances between x and y). Positioning of the robot 10, i.e., estimating its position (x) in the map, can be done from the distance observations of the landmarks already in the map. Map building, i.e., estimating the location (y) of a landmark in the map, can be done from the position (x) of the robot 10 and the distance observation (z). Positioning and map building are a continuous process: as the position of the robot 10 changes, the robot 10 observes new landmarks and continuously adds them to the map.
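The relationship between the x, y, and z quantities above can be sketched in a toy 2D example: given the map positions of a few landmarks and the distance observations z to them, the robot position x is the point whose landmark distances best fit the observations. All coordinates, landmark names, and the brute-force grid search below are illustrative assumptions, not the patent's method.

```python
import itertools
import math

# Hypothetical landmark positions y_i already in the map (assumed coordinates).
landmarks = {"y1": (0.0, 0.0), "y2": (4.0, 0.0), "y3": (0.0, 3.0)}

def observe(x):
    """Distance observations z_i from robot position x to each landmark."""
    return {k: math.dist(x, p) for k, p in landmarks.items()}

def locate(z, step=0.05, extent=5.0):
    """Grid-search the map for the position whose landmark distances best fit z."""
    best, best_err = None, float("inf")
    n = int(extent / step)
    for i, j in itertools.product(range(n + 1), repeat=2):
        x = (i * step, j * step)
        err = sum((math.dist(x, landmarks[k]) - z[k]) ** 2 for k in z)
        if err < best_err:
            best, best_err = x, err
    return best

true_pose = (1.0, 1.0)
est = locate(observe(true_pose))  # recovers a position close to true_pose
```

In a real system the search would be far more efficient (and over poses, not just positions), but the estimate-from-distances structure is the same.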
The landmarks observed by the robot belong to objects such as walls, windows, pillars, trees, buildings, tables, cabinets, flowers, signs, people, pets, and vehicles. In some embodiments, the movement attribute of walls, windows, pillars, trees, buildings, and the like may be defined as "fixed object"; the movement attribute of tables, cabinets, flowers, signs, and the like as "movable object"; and the movement attribute of people, pets, vehicles, and the like as "moving object". The robot 10 adds only the landmarks belonging to fixed objects into the map. Because such landmarks are unlikely to change, the situation where a landmark changes while the robot 10 is positioning against it can be avoided, the influence of the surrounding environment on positioning and map building is reduced, and the robot's position and the landmark poses in the map become more accurate. It should be noted that the movement attributes may be defined in advance according to the application scenario of the robot 10 and are not absolute: the same object may be a "movable object" in some scenarios and a "fixed object" in others.
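The predefined category-to-movement-attribute mapping, with a per-scenario override, can be sketched as a simple lookup table. The category names and the default fallback below are illustrative assumptions:

```python
# Default mapping from object category to movement attribute, following the
# examples in the text; categories not listed fall back to "movable" here
# (an assumption, not from the patent).
DEFAULT_ATTRIBUTES = {
    "wall": "fixed", "window": "fixed", "pillar": "fixed",
    "tree": "fixed", "building": "fixed",
    "table": "movable", "cabinet": "movable", "flower": "movable", "sign": "movable",
    "person": "moving", "pet": "moving", "vehicle": "moving",
}

def movement_attribute(category, scenario_overrides=None):
    """Look up the movement attribute, allowing per-scenario redefinition."""
    overrides = scenario_overrides or {}
    return overrides.get(category, DEFAULT_ATTRIBUTES.get(category, "movable"))
```

For example, `movement_attribute("table")` gives `"movable"` by default, but a scenario override such as `{"table": "fixed"}` redefines it, matching the text's point that the definitions are scenario-dependent rather than absolute.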
Fig. 3 is a schematic flow chart of an autonomous positioning and map building method provided by an embodiment of the present invention; the method may be executed by the robot 10 in fig. 1. As shown in fig. 3, the method includes:
101: and acquiring a distance observation value of the robot to the landmark point.
The distance observations from the robot 10 to the landmarks may be obtained by a vision-based method, for example by capturing an image in front of the robot with a binocular (stereo) camera or a depth camera, and then obtaining the distance between the robot 10 and each landmark from the depth information of each pixel in the image. In other embodiments, the robot may measure the distances between the robot 10 and the landmarks by other methods.
Taking the vision-based acquisition of distance observations with a binocular camera as an example, and referring to fig. 4, the robot 10 obtains the distance observations for the landmarks as follows:
1011: and respectively acquiring a first image and a second image in a visual range through a binocular camera.
That is, the first image is obtained by the camera on the left side of the robot 10 and the second image by the camera on the right side, where the left and right cameras may be disposed at the left eye and right eye of the robot 10, respectively.
1012: and respectively carrying out image recognition on the first image and the second image, recognizing the category of each region in the image, determining the movement attribute according to the category, and marking the category and the movement attribute for each region, wherein the movement attribute comprises a moving object, a movable object and a fixed object.
Specifically, image recognition is performed on the first image and the second image, for example with a neural network model based on deep learning, to identify the category of each object in the images. The movement attribute of each object can then be determined from the predefined mapping between object category and movement attribute, and the pixels of the region corresponding to the object are marked with that category and movement attribute (moving object, movable object, or fixed object). For example, suppose image recognition finds that the object categories in the first image are a table, a person, and a wall, whose predefined movement attributes are movable object, moving object, and fixed object, respectively. Then the pixels of the table region are marked "table" and "movable object", the pixels of the person region are marked "person" and "moving object", and the pixels of the wall region are marked "wall" and "fixed object". Those skilled in the art will appreciate that, in practice, computer symbols representing the categories and movement attributes may be used rather than the literal names. The movement attribute of each object may be defined in advance according to the application scenario of the robot 10.
1013: feature points are extracted based on the first image and the second image, and feature points belonging to a moving object are removed.
Specifically, feature points are extracted from the pixels of the first image and the second image, for example with an algorithm such as SIFT or ORB. Feature points are generally "stable points" in the image that do not disappear under viewpoint changes, illumination changes, or noise interference, such as corner points, edge points, bright points in dark areas, and dark points in bright areas. After extraction, the feature points marked as moving objects can be removed: moving objects have a high probability of actually moving, and if they were used as a positioning reference, the robot 10 would be positioned inaccurately. In this step, therefore, the landmarks whose movement attribute is moving object are rejected.
In other embodiments, after each region in the image is identified, the regions whose movement attribute is moving object may be masked directly. Feature points are then not extracted from the masked regions, so the extracted feature points contain no points belonging to moving objects.
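The masking variant above can be sketched with a tiny per-pixel label grid: candidate feature points that fall inside regions labelled as moving objects are simply discarded. The grid, region, and point coordinates are made up for illustration:

```python
# Per-pixel semantic labels for a hypothetical 6x6 image: most pixels belong
# to a fixed object, and a 2x2 block (rows 2-3, cols 2-3) is a "moving" region
# such as a person.
MOVING = "moving"

labels = [["fixed"] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(2, 4):
        labels[r][c] = MOVING

candidates = [(0, 0), (2, 2), (3, 3), (5, 5)]  # candidate (row, col) feature points

def keep_static(points, label_grid):
    """Discard feature points whose pixel is labelled as a moving object."""
    return [(r, c) for r, c in points if label_grid[r][c] != MOVING]

static_points = keep_static(candidates, labels)  # drops (2, 2) and (3, 3)
```

In a real pipeline the labels would come from the segmentation step of 1012 and the candidates from a detector such as ORB, but the filtering logic is the same.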
1014: and performing feature point matching on the basis of the first image and the second image after the feature points are removed, wherein the feature point matching is performed among the feature points with the same category, so as to obtain a distance observation value of a landmark point with a moving attribute of a non-moving object in the image.
Specifically, feature point matching may be performed with, for example, a stereo matching algorithm, restricted to feature points of the same category: matching is performed among the feature points labelled "table", or among the feature points labelled "wall". This narrows the matching range and improves the validity of the matching result. After matching, the disparity of each matched point between the first image and the second image can be computed and, by the triangulation principle, converted into the depth of the feature point, i.e., the distance from the robot 10 to the feature point.
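The triangulation step can be sketched as follows: once a feature point is matched between the left and right images (only among points of the same category), its depth follows from depth = f · b / disparity. The focal length, baseline, feature names, and pixel coordinates below are assumed values for illustration:

```python
# Assumed stereo rig parameters (not from the patent).
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.12   # distance between the two cameras in metres

def match_and_depth(left_feats, right_feats):
    """left_feats/right_feats: {feature_id: (category, x_pixel)} -> {feature_id: depth_m}.

    A feature is matched only if it appears in both images with the same
    category label, mirroring the category-restricted matching in the text.
    """
    depths = {}
    for fid, (cat, xl) in left_feats.items():
        if fid in right_feats and right_feats[fid][0] == cat:
            disparity = xl - right_feats[fid][1]
            if disparity > 0:
                # Triangulation: depth is inversely proportional to disparity.
                depths[fid] = FOCAL_PX * BASELINE_M / disparity
    return depths

left = {"corner_a": ("wall", 362.0), "corner_b": ("table", 120.0)}
right = {"corner_a": ("wall", 320.0), "corner_b": ("person", 100.0)}
depths = match_and_depth(left, right)  # corner_b is rejected: categories differ
```

Here corner_a has a 42-pixel disparity, giving a depth of 700 × 0.12 / 42 = 2.0 m.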
102: and acquiring the position of the robot in a map.
103: and adding a new road mark point belonging to a fixed object in the road mark points into the map, and obtaining the pose of the new road mark point according to the position and the distance observation value.
Referring to fig. 2, when the robot 10 starts moving, the starting point (x1) of its motion may be set as the origin. Suppose that at this point the robot can observe the landmarks y1 and y2, and that the movement attributes of both y1 and y2 are fixed object, so both are added to the map. Since the robot 10 is at the origin, the poses of y1 and y2 in the map can be obtained from the robot's distance observations z1 and z2 of the two landmarks.
When the robot 10 moves to position x2, its distance observations of the landmarks y1 and y2 are z1′ and z2′. According to z1′ and z2′, a position search (i.e., positioning) is carried out on the map (the map acquired at position x1). For a candidate position in the map, the distance between that position and each landmark in the map is called the positioning distance, and the robot 10's actual distance observation of the corresponding landmark is called the observation distance. If the degree to which the positioning distances conform to the observation distances exceeds a preset threshold, the candidate is taken as the estimated location of x2.
Only the landmarks y1 and y2 are shown in fig. 2; in practice each landmark comprises a number of feature points (landmark points). The matching degree above is actually the degree to which the positioning distance of each landmark point at the candidate position conforms to the robot 10's actual observation distance of the corresponding landmark point. For example, with a preset threshold of 70, a landmark point whose positioning distance conforms to its observation distance is counted as 1, and one that does not as 0. If more than 70 landmark points at the candidate position match their actual observed distances, the position may be taken as the location of the robot 10 in the map; otherwise a new position must be searched.
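The unweighted position check above can be sketched as a simple count: each landmark point contributes 1 if its positioning distance conforms to its observation distance, and the candidate passes when the count exceeds the threshold of 70 from the text. The conformance tolerance and the synthetic distances are assumptions:

```python
THRESHOLD = 70    # from the example in the text
TOLERANCE = 0.1   # metres within which the two distances "conform" (assumed)

def matching_degree(positioning_dists, observed_dists):
    """Count landmark points whose positioning distance fits the observation."""
    return sum(
        1 for k in positioning_dists
        if k in observed_dists
        and abs(positioning_dists[k] - observed_dists[k]) <= TOLERANCE
    )

def is_position(positioning_dists, observed_dists):
    return matching_degree(positioning_dists, observed_dists) > THRESHOLD

# Synthetic candidate: 80 landmark points agree with the observations and 20
# do not, so the candidate is accepted (80 > 70).
pos = {i: 1.0 for i in range(100)}
obs = {i: (1.05 if i < 80 else 3.0) for i in range(100)}
```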
Determining whether the positioning distance of a landmark point conforms to the observation distance requires knowing which map landmark point corresponds to which observed landmark point; before computing the matching degree, therefore, the landmark points observed by the robot 10 must be matched to the landmark points in the map. Feature point matching may be performed between the observed landmark points and the map landmark points to determine, for each observed landmark point, its corresponding point in the map. In some embodiments, when each landmark point is added to the map its category is marked, and feature point matching is performed only among points of the same category, which narrows the matching range and improves the validity of the matching result.
In some embodiments, a movement weight value may also be marked for each landmark point when it is added to the map. For example, although trees and buildings both have the movement attribute fixed object as map landmarks, a building is even less likely to move than a tree, so the movement weight of a building may be set greater than that of a tree, e.g., 3 for a building and 2 for a tree. Correspondingly, the matching degree can be computed in combination with the movement weight of each landmark point in the map; in some embodiments the matching degree is positively correlated with the movement weight. Continuing the example above, suppose landmark y1 has a movement weight of 3, landmark y2 has a movement weight of 1, and the preset threshold is again 70. If y1 has 20 feature points whose positioning distance conforms to the observation distance and y2 has 15, the matching degree is 20 × 3 + 15 × 1 = 75 > 70, and the position is successfully located. If y1 has 8 conforming feature points and y2 has 45, the matching degree is 8 × 3 + 45 × 1 = 69 < 70, this position fails to locate, and a new position must be searched in the map.
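The weighted matching degree worked through above (20 × 3 + 15 × 1 = 75 passes the threshold of 70, while 8 × 3 + 45 × 1 = 69 fails) is a weighted sum of per-landmark conforming-point counts, and can be sketched directly:

```python
THRESHOLD = 70  # preset threshold from the example in the text

def weighted_degree(conforming_counts, weights):
    """Weighted matching degree: per-landmark counts of conforming feature
    points, each multiplied by that landmark's movement weight."""
    return sum(conforming_counts[k] * weights[k] for k in conforming_counts)

# Movement weights from the example: landmark y1 weighs 3, y2 weighs 1.
weights = {"y1": 3, "y2": 1}

case_ok = weighted_degree({"y1": 20, "y2": 15}, weights)   # 20*3 + 15*1 = 75
case_fail = weighted_degree({"y1": 8, "y2": 45}, weights)  # 8*3 + 45*1 = 69
```

`case_ok` exceeds the threshold and the position is accepted; `case_fail` does not, so a new candidate must be searched.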
In some embodiments, to narrow the search range and reduce the amount of computation, the displacement of the robot 10 may also be estimated with a detection device such as a sensor, the position of the robot 10 predicted from it, and the search restricted to a certain range around the predicted position on the map. Specifically, to estimate the displacement of the robot 10 between a first moment and a second moment, feature point matching may be performed between the landmark point depth maps obtained at the two moments, and the displacement derived from the depths of the same feature point in the different depth maps. The displacement can further be filtered and fused with the attitude information provided by an Inertial Measurement Unit (IMU).
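A minimal sketch of that displacement estimate: matched feature points seen at both moments give per-point depth changes, which are averaged into a forward displacement and then blended with an IMU-derived estimate. The simple averaging and the complementary-filter blend factor of 0.7 are assumptions; the patent only states that the displacement is filtered and fused with IMU data.

```python
def displacement_from_depths(depths_t1, depths_t2):
    """Average the depth change of feature points matched across two moments."""
    common = [k for k in depths_t1 if k in depths_t2]
    if not common:
        return 0.0
    return sum(depths_t1[k] - depths_t2[k] for k in common) / len(common)

def fuse_with_imu(visual_disp, imu_disp, alpha=0.7):
    """Blend the vision-based and IMU-based estimates (alpha is assumed)."""
    return alpha * visual_disp + (1.0 - alpha) * imu_disp

# Two feature points each appear 0.6 m closer at the second moment, so the
# robot is estimated to have moved about 0.6 m toward them.
d_vis = displacement_from_depths({"p1": 5.0, "p2": 4.0}, {"p1": 4.4, "p2": 3.4})
d = fuse_with_imu(d_vis, 0.5)
```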
When the robot 10 moves to position x4, suppose it observes a new landmark y3 at this position. If y3 also belongs to a fixed object, y3 is added to the map. The pose of y3 in the map is obtained from the position x4 found by positioning and the robot 10's distance observation of y3 at that position.
Because each landmark consists of a number of feature points, the feature points corresponding to the landmarks added to the map are pieced together to form the map.
According to the autonomous positioning and map building method provided by the embodiment of the invention, only landmark points belonging to fixed objects are added to the map, so the landmarks the robot refers to are unlikely to change; this reduces the influence of the surrounding environment on positioning and map building and makes the robot's position and the landmark poses in the map more accurate.
In other embodiments, referring to fig. 5, the autonomous positioning and map building method further includes, in addition to steps 101 to 103, step 104:
104: if the position coincides with a historical position, correcting the position, and the landmark point poses obtained between the historical position and the current position, according to the landmark point poses obtained at the historical position.
The purpose is to correct errors due to drift and obtain a map with consistent information. The robot 10 periodically checks whether its current position is a historical position visited before, i.e., performs loop-back detection. If the current position coincides with a historical position, the current position and the map obtained between the historical position and the current position are corrected based on the map obtained at the historical position. Since the influence of changing landmarks has been eliminated from the positioning, the robot 10 can more easily recognise the historical position during loop-back detection.
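The loop-back correction can be sketched as follows: once the current pose is recognised as a previously visited position, the accumulated drift (the gap between the current pose estimate and the historical pose) is distributed linearly over the poses recorded since that historical position. Linear distribution is a common simple scheme and an assumption here; the patent only states that the positions are corrected from the historical map.

```python
def correct_loop(poses, hist_index, hist_pose):
    """poses: list of (x, y) estimates; the last pose should coincide with
    hist_pose (the pose at poses[hist_index] on the earlier visit).
    The drift is spread linearly over the poses after hist_index."""
    drift_x = poses[-1][0] - hist_pose[0]
    drift_y = poses[-1][1] - hist_pose[1]
    n = len(poses) - 1 - hist_index
    corrected = list(poses)
    for i in range(hist_index + 1, len(poses)):
        f = (i - hist_index) / n  # fraction of the drift to remove here
        corrected[i] = (poses[i][0] - f * drift_x, poses[i][1] - f * drift_y)
    return corrected

# A square-ish loop that should return to the origin but drifted to (0.2, 0.2).
traj = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1), (0.2, 0.2)]
fixed = correct_loop(traj, 0, (0.0, 0.0))  # endpoint snapped back to the origin
```

A full system would correct the landmark poses along with the trajectory, typically by pose-graph optimization rather than this linear spread.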
Accordingly, an embodiment of the present invention further provides an autonomous positioning and map building apparatus, where the autonomous positioning and map building apparatus is used for the robot 10 shown in fig. 1, and as shown in fig. 6, the autonomous positioning and map building apparatus 600 includes:
an observation distance acquisition module 601, configured to obtain distance observations from the robot to landmark points;
a positioning module 602, configured to acquire the position of the robot in a map;
and a map building module 603, configured to add new landmark points belonging to fixed objects, among the landmark points, into the map and obtain the poses of the new landmark points according to the position and the distance observations.
The autonomous positioning and map building device provided by the embodiment of the invention adds only landmarks belonging to fixed objects into the map, so the landmarks the robot refers to are unlikely to change; this reduces the influence of the surrounding environment on positioning and map building, making the robot's position and the landmark poses in the map more accurate.
In some embodiments of the autonomous positioning and map building apparatus 600, the observation distance obtaining module 601 is specifically configured to:
acquiring a first image and a second image within the visual range through a binocular camera device, respectively;
performing image recognition on the first image and the second image respectively, recognizing the category of each region in the images, determining the movement attribute from the category, and marking each region with its category and movement attribute, where the movement attribute is one of moving object, movable object, and fixed object;
extracting feature points from the first image and the second image, and removing the feature points belonging to moving objects;
and performing feature point matching between the first image and the second image after the removal, where matching is performed only among feature points of the same category, to obtain the distance observations of the landmark points in the images whose movement attribute is not moving object.
Specifically, in some embodiments, the mapping module 603 is specifically configured to:
adding new landmark points belonging to fixed objects, among the landmark points, into the map, and marking the category and movement weight value of each new landmark point.
Specifically, in some embodiments, the positioning module 602 is specifically configured to:
performing feature point matching between the landmark points observed by the robot at a second moment and the landmark points in the map at a first moment, to determine which landmark points in the map at the first moment correspond to the landmark points observed at the second moment, where matching is performed only among feature points of the same category;
searching for the position in the map at the first moment according to the robot's distance observations of the landmark points at the second moment;
and obtaining, in combination with the movement weight values of the landmark points in the map, the degree to which the positioning distances at a candidate position in the map conform to the robot's observation distances, and if the degree exceeds a preset threshold, taking that candidate as the position of the robot in the map, where the positioning distance is the distance between the candidate position and each landmark point in the map and the observation distance is the robot's distance observation of each corresponding landmark point.
In other embodiments of the autonomous positioning and mapping apparatus 600, please refer to fig. 7, which further comprises:
a loop-back detection module 604, configured to, if the position coincides with a historical position, correct the position and the landmark point poses obtained between the historical position and the current position according to the landmark point poses obtained at the historical position.
It should be noted that the autonomous positioning and map building apparatus can execute the autonomous positioning and map building method provided by the embodiment of the present invention and has the corresponding functional modules and beneficial effects. For technical details not described in this apparatus embodiment, reference may be made to the autonomous positioning and map building method provided by the embodiment of the present invention.
Fig. 8 is a schematic diagram of a hardware structure of a robot 10 according to an embodiment of the present invention, and as shown in fig. 8, the robot 10 includes:
one or more processors 11 and a memory 12, with one processor 11 being an example in fig. 8.
The processor 11 and the memory 12 may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.
The memory 12, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the autonomous positioning and map building method in the embodiment of the present invention (for example, the observation distance acquisition module 601, the positioning module 602, and the map building module 603 shown in fig. 6). The processor 11 executes the various functional applications and data processing of the robot, i.e., the autonomous positioning and map building method of the above method embodiments, by running the non-volatile software programs, instructions, and modules stored in the memory 12.
The memory 12 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the autonomous positioning and map building apparatus, and the like. Further, the memory 12 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 12 optionally includes memory located remotely from the processor 11, which may be connected to the autonomous positioning and mapping apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, perform the autonomous positioning and map building method of any of the above method embodiments, e.g., performing method steps 101 to 103 in fig. 3, method steps 1011 to 1014 in fig. 4, and method steps 101 to 104 in fig. 5 described above, and realizing the functions of modules 601 to 603 in fig. 6 and modules 601 to 604 in fig. 7.
The above product can execute the method provided by the embodiments of the present invention, and has the functional modules and beneficial effects corresponding to executing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present invention.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, such as the processor 11 in fig. 8, to enable the one or more processors to perform the autonomous positioning and map building method in any of the above method embodiments, e.g., performing the above-described method steps 101 to 103 in fig. 3, method steps 1011 to 1014 in fig. 4, and method steps 101 to 104 in fig. 5, and realizing the functions of modules 601 to 603 in fig. 6 and modules 601 to 604 in fig. 7.
The above-described apparatus embodiments are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Those skilled in the art will also understand that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Within the idea of the invention, technical features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist which are not described in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
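As an informal illustration of the method described in the above embodiments, the following Python sketch keeps only feature points that do not belong to moving objects and converts the binocular disparity of the remaining points into distance observation values. The category-to-attribute table, the class names, and the pinhole-stereo depth formula (depth = focal length × baseline / disparity) are illustrative assumptions and are not part of the patent text:

```python
from dataclasses import dataclass

# Movement attributes as described in the embodiments.
MOVING, MOVABLE, FIXED = "moving", "movable", "fixed"

# Illustrative mapping from a recognized region category to a movement attribute.
CATEGORY_TO_ATTRIBUTE = {
    "person": MOVING, "car": MOVING,
    "chair": MOVABLE, "door": MOVABLE,
    "wall": FIXED, "pillar": FIXED,
}

@dataclass
class FeaturePoint:
    u: float          # column in the left image (pixels)
    disparity: float  # horizontal shift between left and right images (pixels)
    category: str     # semantic category of the region containing the point

def distance_observations(points, focal_px, baseline_m):
    """Drop feature points on moving objects, then convert the stereo
    disparity of each remaining point into a depth observation using the
    pinhole model: depth = focal_px * baseline_m / disparity."""
    observations = []
    for p in points:
        attribute = CATEGORY_TO_ATTRIBUTE.get(p.category, FIXED)
        if attribute == MOVING:
            continue  # feature points belonging to moving objects are removed
        if p.disparity > 0:
            depth = focal_px * baseline_m / p.disparity
            observations.append((p.category, attribute, depth))
    return observations

points = [
    FeaturePoint(320.0, 8.0, "person"),  # on a moving object -> removed
    FeaturePoint(100.0, 4.0, "wall"),    # fixed object -> kept
    FeaturePoint(250.0, 2.0, "chair"),   # movable object -> kept for observation
]
obs = distance_observations(points, focal_px=700.0, baseline_m=0.12)
# wall: 700 * 0.12 / 4 = 21.0 m ; chair: 700 * 0.12 / 2 = 42.0 m
```

In this sketch a movable object (the chair) still yields a distance observation for localization, but only landmark points belonging to fixed objects would be added to the map, as claim 1 requires.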

Claims (13)

1. An autonomous positioning and mapping method, applied to a robot, characterized in that it comprises:
acquiring an image in a visual range of the robot through a binocular camera device;
identifying the image, identifying the category of each region in the image, determining a movement attribute according to the category, and marking the category and the movement attribute for each region, wherein the movement attribute comprises a moving object, a movable object and a fixed object;
extracting feature points based on the image and removing the feature points belonging to the moving object;
determining, based on the image from which the feature points have been removed, a distance observation value of the robot for a landmark point in the image whose movement attribute is a non-moving object;
acquiring the position of the robot in a map;
and adding a new landmark point belonging to a fixed object among the landmark points into the map, and obtaining the pose of the new landmark point according to the position and the distance observation value.
2. The method of claim 1, wherein the determining, based on the image from which the feature points are removed, a distance observation value of the robot for a landmark point in the image whose movement attribute is a non-moving object comprises:
respectively acquiring a first image and a second image in a visual range through a binocular camera device;
respectively carrying out image recognition on the first image and the second image, recognizing the category of each region in the images, determining a movement attribute according to the category, and marking the category and the movement attribute for each region, wherein the movement attribute comprises a moving object, a movable object and a fixed object;
extracting feature points based on the first image and the second image, and removing the feature points belonging to a moving object;
and performing feature point matching based on the first image and the second image from which the feature points have been removed, wherein the feature point matching is performed among feature points of the same category, so as to obtain distance observation values of landmark points in the image whose movement attribute is a non-moving object.
3. The method of claim 2, wherein adding a new landmark point belonging to a fixed object among the landmark points to the map comprises:
adding a new landmark point belonging to a fixed object among the landmark points into the map, and marking the category and the moving weight value of the new landmark point.
4. The method of claim 3, wherein said obtaining the location of the robot in the map comprises:
performing feature point matching between the landmark points observed by the robot at a second moment and the landmark points in the map at a first moment, and determining the landmark points in the map at the first moment that correspond to the landmark points observed by the robot at the second moment, wherein the feature point matching is performed among feature points of the same category;
searching for the position in the map at the first moment according to the distance observation values of the robot for the landmark points at the second moment;
and combining the moving weight values of the landmark points in the map to obtain the coincidence degree between the positioning distance and the observation distance of the robot at a certain position in the map, and if the coincidence degree exceeds a preset threshold value, determining the position as the position of the robot in the map, wherein the positioning distance is the distance between the position and each landmark point in the map, and the observation distance is the distance observation value of the robot for each corresponding landmark point.
5. The method according to any one of claims 1-4, further comprising:
and if the position coincides with a historical position, correcting the position, and the landmark point poses obtained between the historical position and the position, according to the landmark point poses obtained at the historical position.
6. An autonomous positioning and mapping apparatus applied to a robot, the apparatus comprising:
the observation distance acquisition module is used for acquiring images in the visual range of the robot through a binocular camera device;
identifying the image, identifying the category of each region in the image, determining a movement attribute according to the category, and marking the category and the movement attribute for each region, wherein the movement attribute comprises a moving object, a movable object and a fixed object;
extracting feature points based on the image and removing the feature points belonging to the moving object;
determining, based on the image from which the feature points have been removed, a distance observation value of the robot for a landmark point in the image whose movement attribute is a non-moving object;
the positioning module is used for acquiring the position of the robot in a map;
and the map building module is used for adding a new landmark point belonging to a fixed object among the landmark points into the map and obtaining the pose of the new landmark point according to the position and the distance observation value.
7. The apparatus according to claim 6, wherein the observation distance obtaining module is specifically configured to:
respectively acquiring a first image and a second image in a visual range through a binocular camera device;
respectively carrying out image recognition on the first image and the second image, recognizing the category of each region in the images, determining a movement attribute according to the category, and marking the category and the movement attribute for each region, wherein the movement attribute comprises a moving object, a movable object and a fixed object;
extracting feature points based on the first image and the second image, and removing the feature points belonging to a moving object;
and performing feature point matching based on the first image and the second image from which the feature points have been removed, wherein the feature point matching is performed among feature points of the same category, so as to obtain distance observation values of landmark points in the image whose movement attribute is a non-moving object.
8. The apparatus of claim 7, wherein the mapping module is specifically configured to:
and adding a new landmark point belonging to a fixed object among the landmark points into the map, and marking the category and the moving weight value of the new landmark point.
9. The apparatus of claim 8, wherein the positioning module is specifically configured to:
performing feature point matching between the landmark points observed by the robot at a second moment and the landmark points in the map at a first moment, and determining the landmark points in the map at the first moment that correspond to the landmark points observed by the robot at the second moment, wherein the feature point matching is performed among feature points of the same category;
searching for the position in the map at the first moment according to the distance observation values of the robot for the landmark points at the second moment;
and combining the moving weight values of the landmark points in the map to obtain the coincidence degree between the positioning distance and the observation distance of the robot at a certain position in the map, and if the coincidence degree exceeds a preset threshold value, determining the position as the position of the robot in the map, wherein the positioning distance is the distance between the position and each landmark point in the map, and the observation distance is the distance observation value of the robot for each corresponding landmark point.
10. The apparatus of any one of claims 6-9, further comprising:
and the loop detection module is used for, if the position coincides with a historical position, correcting the position, and the landmark point poses obtained between the historical position and the position, according to the landmark point poses obtained at the historical position.
11. A robot, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a robot, cause the robot to perform the method of any of claims 1-5.
13. A computer program product, characterized in that the computer program product comprises a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a robot, cause the robot to perform the method of any of claims 1-5.
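Claims 4 and 9 score a candidate position by how well the positioning distances (from the candidate to each map landmark) coincide with the observation distances, weighted by each landmark's moving weight value. A minimal Python sketch of that scoring, assuming a two-dimensional Euclidean map and a simple agreement function, is given below; the function names and the particular agreement formula are illustrative assumptions, not taken from the patent:

```python
import math

def coincidence_degree(candidate, landmarks, observed, weights):
    """Weighted agreement between the positioning distance (candidate to each
    map landmark) and the observation distance (measured distance to the
    corresponding landmark). Landmarks with a low moving weight (more likely
    to have moved) contribute less to the score."""
    score, total = 0.0, 0.0
    for lid, (lx, ly) in landmarks.items():
        if lid not in observed:
            continue
        positioning_dist = math.hypot(lx - candidate[0], ly - candidate[1])
        w = weights.get(lid, 1.0)
        # agreement in (0, 1]: equals 1 when the two distances coincide exactly
        agreement = 1.0 / (1.0 + abs(positioning_dist - observed[lid]))
        score += w * agreement
        total += w
    return score / total if total else 0.0

def locate(candidates, landmarks, observed, weights, threshold=0.8):
    """Return the first candidate whose coincidence degree exceeds the preset
    threshold, as in claim 4, or None if no candidate qualifies."""
    for c in candidates:
        if coincidence_degree(c, landmarks, observed, weights) > threshold:
            return c
    return None

landmarks = {"pillar": (3.0, 0.0), "wall": (0.0, 4.0)}
weights = {"pillar": 1.0, "wall": 1.0}   # fixed objects: full weight
observed = {"pillar": 3.0, "wall": 4.0}  # measured from the true pose (0, 0)
pos = locate([(5.0, 5.0), (0.0, 0.0)], landmarks, observed, weights)
```

Here the candidate (0, 0) reproduces both observed distances exactly, so its coincidence degree is 1.0 and it is accepted against the threshold of 0.8.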
CN201880001385.XA 2018-07-26 2018-07-26 Autonomous positioning and map building method and device and robot Active CN109074085B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097134 WO2020019221A1 (en) 2018-07-26 2018-07-26 Method, apparatus and robot for autonomous positioning and map creation

Publications (2)

Publication Number Publication Date
CN109074085A CN109074085A (en) 2018-12-21
CN109074085B true CN109074085B (en) 2021-11-09

Family

ID=64789340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880001385.XA Active CN109074085B (en) 2018-07-26 2018-07-26 Autonomous positioning and map building method and device and robot

Country Status (2)

Country Link
CN (1) CN109074085B (en)
WO (1) WO2020019221A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11255982B2 (en) 2018-11-30 2022-02-22 Saint-Gobain Ceramics & Plastics, Inc. Radiation detection apparatus having a reflector
CN110068824B (en) * 2019-04-17 2021-07-23 北京地平线机器人技术研发有限公司 Sensor pose determining method and device
CN110046677B (en) * 2019-04-26 2021-07-06 山东大学 Data preprocessing method, map construction method, loop detection method and system
CN110175540A (en) * 2019-05-11 2019-08-27 深圳市普渡科技有限公司 Road sign map structuring system and robot
CN112629546B (en) * 2019-10-08 2023-09-19 宁波吉利汽车研究开发有限公司 Position adjustment parameter determining method and device, electronic equipment and storage medium
CN110579215B (en) * 2019-10-22 2021-05-18 上海智蕙林医疗科技有限公司 Positioning method based on environmental feature description, mobile robot and storage medium
CN111553945B (en) * 2020-04-13 2023-08-11 东风柳州汽车有限公司 Vehicle positioning method
CN112464989B (en) * 2020-11-02 2024-02-20 北京科技大学 Closed loop detection method based on target detection network
CN112683273A (en) * 2020-12-21 2021-04-20 广州慧扬健康科技有限公司 Adaptive incremental mapping method, system, computer equipment and storage medium
CN112325873B (en) * 2021-01-04 2021-04-06 炬星科技(深圳)有限公司 Environment map autonomous updating method, equipment and computer readable storage medium
CN112801193B (en) * 2021-02-03 2023-04-07 拉扎斯网络科技(上海)有限公司 Positioning data processing method and device, electronic equipment and medium
CN113238550B (en) * 2021-04-12 2023-10-27 大连海事大学 Mobile robot vision homing method based on road sign self-adaptive correction
CN113108798A (en) * 2021-04-21 2021-07-13 浙江中烟工业有限责任公司 Multi-storage robot indoor map positioning system based on laser radar
CN114536326B (en) * 2022-01-19 2024-03-22 深圳市灵星雨科技开发有限公司 Road sign data processing method, device and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000507A (en) * 2006-09-29 2007-07-18 浙江大学 Method for moving robot simultanously positioning and map structuring at unknown environment
KR20090043460A (en) * 2007-10-29 2009-05-06 재단법인서울대학교산학협력재단 System and method for stabilization contol using an inertial sensor
CN102656532A (en) * 2009-10-30 2012-09-05 悠进机器人股份公司 Map generating and updating method for mobile robot position recognition
EP2527943A1 (en) * 2011-05-24 2012-11-28 BAE Systems Plc. Vehicle navigation
CN104062973A (en) * 2014-06-23 2014-09-24 西北工业大学 Mobile robot SLAM method based on image marker identification
CN105334858A (en) * 2015-11-26 2016-02-17 江苏美的清洁电器股份有限公司 Floor sweeping robot and indoor map establishing method and device thereof
WO2016077703A1 (en) * 2014-11-13 2016-05-19 Worcester Polytechnic Institute Gyroscope assisted scalable visual simultaneous localization and mapping
CN106056643A (en) * 2016-04-27 2016-10-26 武汉大学 Point cloud based indoor dynamic scene SLAM (Simultaneous Location and Mapping) method and system
CN106908040A (en) * 2017-03-06 2017-06-30 哈尔滨工程大学 A kind of binocular panorama visual robot autonomous localization method based on SURF algorithm
CN107223244A (en) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 Localization method and device
CN107832661A (en) * 2017-09-27 2018-03-23 南通大学 A kind of Localization Approach for Indoor Mobile of view-based access control model road sign
CN107991680A (en) * 2017-11-21 2018-05-04 南京航空航天大学 SLAM methods based on laser radar under dynamic environment
CN108225327A (en) * 2017-12-31 2018-06-29 芜湖哈特机器人产业技术研究院有限公司 A kind of structure and localization method of top mark map
CN111108457A (en) * 2017-09-29 2020-05-05 罗伯特·博世有限公司 Method, device and computer program for operating a robot control system
CN111164648A (en) * 2017-10-11 2020-05-15 日立汽车系统株式会社 Position estimation device and position estimation method for moving body
CN111837083A (en) * 2018-01-12 2020-10-27 佳能株式会社 Information processing apparatus, information processing system, information processing method, and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103105852B (en) * 2011-11-14 2016-03-30 联想(北京)有限公司 Displacement calculates method and apparatus and immediately locates and map constructing method and equipment
KR20130096539A (en) * 2012-02-22 2013-08-30 한국전자통신연구원 Autonomous moving appartus and method for controlling thereof
CN104916216A (en) * 2015-06-26 2015-09-16 深圳乐行天下科技有限公司 Map construction method and system thereof
US10884417B2 (en) * 2016-11-07 2021-01-05 Boston Incubator Center, LLC Navigation of mobile robots based on passenger following

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Data association in stochastic mapping using the joint compatibility test; J. Neira; IEEE Transactions on Robotics and Automation; 2001-12-31; pp. 890-897 *
Feature Based Landmark Extraction for Real Time Visual SLAM; Natesh Srinivasan; 2010 International Conference on Advances in Recent Technologies in Communication and Computing; 2010-10-31; pp. 16-17 *
Research on a bionics-based indoor map construction method for robots; Li Wei; Journal of Northeast Normal University (Natural Science Edition); 2018-06-30; pp. 84-87 *
Panoramic vision simultaneous localization and map creation for autonomously navigating agricultural vehicles; Li Shenghui; Jiangsu Journal of Agricultural Sciences; 2017-06-30; pp. 598-609 *

Also Published As

Publication number Publication date
CN109074085A (en) 2018-12-21
WO2020019221A1 (en) 2020-01-30

Similar Documents

Publication Publication Date Title
CN109074085B (en) Autonomous positioning and map building method and device and robot
CN109506658B (en) Robot autonomous positioning method and system
US11204247B2 (en) Method for updating a map and mobile robot
CN107967457B (en) Site identification and relative positioning method and system adapting to visual characteristic change
CN111199564B (en) Indoor positioning method and device of intelligent mobile terminal and electronic equipment
US11086016B2 (en) Method and apparatus for tracking obstacle
CN109324337B (en) Unmanned aerial vehicle route generation and positioning method and device and unmanned aerial vehicle
CN112734852B (en) Robot mapping method and device and computing equipment
CN104819726B (en) navigation data processing method, device and navigation terminal
CN109425348B (en) Method and device for simultaneously positioning and establishing image
CN111830953B (en) Vehicle self-positioning method, device and system
CN111912416B (en) Method, device and equipment for positioning equipment
CN109141444B (en) positioning method, positioning device, storage medium and mobile equipment
EP2738517B1 (en) System and methods for feature selection and matching
CN108038139B (en) Map construction method and device, robot positioning method and device, computer equipment and storage medium
CN111274847B (en) Positioning method
CN104517275A (en) Object detection method and system
CN110827353B (en) Robot positioning method based on monocular camera assistance
CN111652929A (en) Visual feature identification and positioning method and system
CN111998862A (en) Dense binocular SLAM method based on BNN
CN111723724B (en) Road surface obstacle recognition method and related device
KR100998709B1 (en) A method of robot localization using spatial semantics of objects
CN113325415B (en) Fusion method and system of vehicle radar data and camera data
CN109901589B (en) Mobile robot control method and device
CN116762094A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210125

Address after: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.
