CN115016507A - Mapping method, positioning method, device, robot and storage medium - Google Patents

Mapping method, positioning method, device, robot and storage medium

Info

Publication number
CN115016507A
CN115016507A (application CN202210888954.5A)
Authority
CN
China
Prior art keywords
obstacle
contour
information
point cloud
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210888954.5A
Other languages
Chinese (zh)
Inventor
何科君
杨文超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202210888954.5A priority Critical patent/CN115016507A/en
Publication of CN115016507A publication Critical patent/CN115016507A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The application relates to a mapping method, a positioning method, a device, a robot and a storage medium. The mapping method comprises the following steps: acquiring a laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determining a target contour of an obstacle included in the local environment area based on the laser point cloud image; obtaining the obstacle type of the obstacle, and determining confidence information of each position point included in the target contour based on the obstacle type and the target contour, wherein the obstacle type comprises a static obstacle type and a dynamic obstacle type; and establishing a confidence map of the local environment area based on the position information and the confidence information of each position point. The method can improve the positioning accuracy of the robot.

Description

Mapping method, positioning method, device, robot and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a mapping method, a positioning method, a device, a robot, and a storage medium.
Background
With the continuous development of science and technology, robots have gradually entered people's lives. As the application scenarios of robots increase, the requirements on robot mapping and positioning technology grow accordingly. Laser-based SLAM is a commonly used technique for robot mapping and positioning.
Currently, laser-based SLAM technology involves two steps: mapping and positioning. During mapping, the pose information of the laser device corresponding to each frame is obtained mainly from the laser data of consecutive frames, and a probability map of the environment where the robot is located is then generated. During positioning, the pose information of the robot is determined mainly by matching the laser data of each frame against the probability map.
However, when the scene layout changes or a large number of dynamic obstacles are present, the pose information of the robot obtained in this way is unreliable and the positioning accuracy is low.
Disclosure of Invention
In view of the above, it is necessary to provide a mapping method, a positioning method, an apparatus, a robot, and a storage medium that can improve the positioning accuracy of the robot.
In a first aspect, the present application provides a mapping method, including:
acquiring a laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determining a target contour of an obstacle included in the local environment area based on the laser point cloud image;
obtaining the obstacle type of the obstacle, and determining confidence information of each position point included in the target contour based on the obstacle type and the target contour, wherein the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and establishing a confidence map of the local environment area based on the position information and the confidence information of each position point.
In one embodiment, the determining confidence information of each position point included in the target contour based on the obstacle type of the obstacle and the target contour includes:
if the type of the obstacle is a static obstacle type, setting confidence information of a position point corresponding to the obstacle in the target contour as a first numerical value;
if the type of the obstacle is a dynamic obstacle type, setting confidence information of a position point corresponding to the obstacle in the target contour as a second numerical value;
wherein the first value is greater than the second value.
In one embodiment, the laser point cloud image comprises a plurality of frames of temporally successive images; determining a target contour of an obstacle included in the local environment area based on the laser point cloud image includes:
determining the initial contour of each obstacle according to the first frame of image in the multi-frame images;
and expanding the initial contour based on the initial contour and the laser point cloud image to obtain the target contour of the obstacle.
In one embodiment, the expanding the initial contour based on the initial contour and the laser point cloud image to obtain a target contour of the obstacle includes:
for each contour pixel point in the initial contour, judging whether a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in a candidate image, wherein the candidate image is an image in the laser point cloud image other than the first frame image;
and if a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in the candidate image, expanding the initial contour by using the connection pixel point to obtain the target contour.
In one embodiment, the expanding the initial contour based on the initial contour and the laser point cloud image to obtain a target contour of the obstacle includes:
acquiring a first outline of an obstacle corresponding to a reference image, wherein the reference image is a certain frame image except a first frame image in a laser point cloud image;
acquiring a fitting curve corresponding to a second contour, wherein the second contour is the contour of an obstacle obtained on the basis of an image which is positioned in front of the reference image in time sequence in the laser point cloud image;
and judging whether an intersecting pixel point intersecting with the fitting curve exists in the first contour, if so, expanding the initial contour by using the intersecting pixel point to obtain a target contour.
In one embodiment, the method further comprises:
based on the laser point cloud image, a probability grid map corresponding to the local environment area is established, and the probability grid map comprises the occupation probability information of each position point in the local environment area.
In a second aspect, the present application further provides a positioning method. The method comprises the following steps:
acquiring a current laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determining an obstacle included in the current laser point cloud image based on the current laser point cloud image;
determining confidence information and occupation probability information of each position point in a target contour of the obstacle based on a confidence map and a probability grid map of a local environment area, wherein the confidence information is determined based on the obstacle type of the obstacle, and the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and determining the current pose information of the robot according to the confidence information, the occupation probability information and the current laser point cloud image.
In one embodiment, the determining the current pose information of the robot according to the confidence information, the occupation probability information and the current laser point cloud image includes:
for each position point in the target contour, taking the product of the occupation probability information of the position point and the confidence information of the position point as the matching score of the position point;
and determining the current pose information of the robot based on the matching score and the current laser point cloud image.
In a third aspect, the application also provides a map building device. The device includes:
the acquisition module is used for acquiring a laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determining a target contour of an obstacle in the local environment area based on the laser point cloud image;
the determining module is used for acquiring the obstacle type of the obstacle, and determining the confidence information of each position point included in the target contour based on the obstacle type of the obstacle and the target contour, wherein the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and the mapping module is used for establishing a confidence map of the local environment area based on the position information and the confidence information of each position point.
In a fourth aspect, the present application further provides a positioning apparatus. The device includes:
the acquisition module is used for acquiring a current laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determining an obstacle included in the current laser point cloud image based on the current laser point cloud image;
the determining module is used for determining confidence information and occupation probability information of each position point in a target contour of the obstacle based on a confidence map and a probability grid map of a local environment area, wherein the confidence information is determined based on the obstacle type of the obstacle, and the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and the positioning module is used for determining the current pose information of the robot according to the confidence information, the occupation probability information and the current laser point cloud image.
In a fifth aspect, the present application further provides a robot. The robot comprises a laser sensor, a memory and a processor. The laser sensor is used for scanning a local area of the place where the robot is located to obtain a laser point cloud image; the memory stores a computer program executable on the processor; and the processor, when executing the computer program, implements the steps of the mapping method according to the first aspect or the steps of the positioning method according to the second aspect.
In a sixth aspect, the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program carries out the steps of the mapping method according to the first aspect or the steps of the positioning method according to the second aspect.
With the mapping method, positioning method, device, robot and storage medium above, the target contour and obstacle type of an obstacle in the local environment area are determined based on the laser point cloud image obtained by the laser sensor scanning the local environment area of the space where the robot is located, the obstacle type including a static obstacle type and a dynamic obstacle type; based on the obstacle type and the target contour, confidence information of each position point included in the target contour is determined and used to establish a confidence map of the local environment area. Because different confidence information is set for the contours of different obstacles according to their obstacle types, dynamic and static obstacles in the local environment area are distinguished; when positioning, the robot can learn the scene distribution of obstacles in the local environment area from the established confidence map, improving the accuracy of the positioning result and the robustness in dynamic environments.
Drawings
FIG. 1 is a schematic diagram of a robot in one embodiment;
FIG. 2 is a flow diagram illustrating a method of creating a map in one embodiment;
FIG. 3 is a schematic view of a spatial scene in which a robot is located in one embodiment;
FIG. 4 is a schematic flow chart of step 101 in one embodiment;
FIG. 5 is a flow chart illustrating step 202 in one embodiment;
FIG. 6 is a diagram illustrating an example of the 8-neighborhood of a pixel;
FIG. 7 is a schematic diagram of a spatial scenario in which a robot is located in another embodiment;
FIG. 8 is a schematic flow chart of step 202 in another embodiment;
FIG. 9 is a diagram showing the result of contour expansion in another embodiment;
FIG. 10 is a schematic flow chart diagram of a mapping method in another embodiment;
FIG. 11 is a flow diagram illustrating a positioning method in one embodiment;
FIG. 12 is a flowchart illustrating step 603 according to an embodiment;
FIG. 13 is a schematic flow chart diagram of a positioning method in another embodiment;
FIG. 14 is a block diagram showing the construction of a drawing apparatus according to an embodiment;
FIG. 15 is a block diagram of the positioning device in one embodiment;
FIG. 16 is a diagram of an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the mapping method provided by the embodiments of the application, the execution body may be a mapping apparatus. The mapping apparatus is arranged on the robot shown in fig. 1, and part or all of it can be implemented, in software, hardware, or a combination of software and hardware, as part of the robot's terminal. The terminal can be a personal computer, a notebook computer, a media player, a smart television, a smart phone, a tablet computer, a portable wearable device, and the like.
As shown in fig. 1, the robot is provided with a laser sensor 1. Optionally, the number of laser sensors 1 may be one or more. A laser sensor can be arranged on the upper part, the middle part, the lower part, the left side, or the right side of the robot; when there are several laser sensors, their installation positions differ. Fig. 1 only shows one exemplary implementation. In practical applications, the number and installation positions of the laser sensors can be determined according to the specific use scenario and the shape and structure of the robot, and are not limited herein.
Please refer to fig. 2, which illustrates a flowchart of a mapping method according to an embodiment of the present application. The embodiment is illustrated by applying the method to a terminal; it can be understood that the method can also be applied to a system comprising a terminal and a server, implemented through interaction between the terminal and the server. As shown in fig. 2, the mapping method may include the following steps:
Step 101, acquiring a laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determining a target contour of an obstacle included in the local environment area based on the laser point cloud image.
Optionally, the space where the robot is located is a closed space, such as a room. The robot moves in the closed space and images the space using the laser sensor 1. Each time a certain number of images has been captured, a submap (a small local map) is constructed from the captured images.
The number of obstacles included in the local environment area may be 1 or more.
Optionally, the laser point cloud image is processed by an appropriate image processing algorithm to obtain the target contour of each obstacle. Specifically, the image processing algorithm may be an image segmentation algorithm, an image edge extraction algorithm, a machine learning algorithm, or the like.
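As an illustration of this step, the sketch below assumes the laser point cloud has already been projected into an 8-bit grayscale image and extracts obstacle contours with OpenCV; cv2.threshold and cv2.findContours are real OpenCV calls, while the threshold value and the bright-occupied image layout are assumptions for the example.

```python
import cv2
import numpy as np

def extract_obstacle_contours(point_cloud_image: np.ndarray):
    """Extract obstacle contours from a rasterized laser point cloud image.

    point_cloud_image: 8-bit grayscale image in which occupied cells
    (laser returns) are bright and free space is dark (assumed layout).
    """
    # Binarize: treat any sufficiently strong return as an obstacle cell.
    _, binary = cv2.threshold(point_cloud_image, 127, 255, cv2.THRESH_BINARY)
    # RETR_EXTERNAL keeps only outer contours, one per obstacle cluster.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```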
Step 102, obtaining the obstacle type of the obstacle, and determining the confidence information of each position point included in the target contour based on the obstacle type of the obstacle and the target contour.
Wherein the obstacle type includes a static obstacle type and a dynamic obstacle type.
A static obstacle is an obstacle that cannot move autonomously or is difficult to move; for example, corridors, walls, load-bearing columns, tables, and the like can be determined to be static obstacles. A dynamic obstacle may be an autonomously movable object (e.g., a robot), a portable object (e.g., a wheeled chair, luggage, etc.), or a pedestrian.
Taking the spatial scene shown in fig. 3 as an example, when the robot (reference numeral 7 in fig. 3) captures laser point cloud images with its laser sensor, the obstacles that can be collected are the objects corresponding to reference numerals 1 to 6 in the drawing, wherein 1 and 5 are tables, 2 is a load-bearing column, 3 and 4 are chairs, and 6 is a wall. After the obstacles are classified by type, the static obstacles are those corresponding to reference numerals 1, 2, 5 and 6 in fig. 3 (i.e., the objects marked by multiple lines in the drawing), and the dynamic obstacles are those corresponding to reference numerals 3 and 4.
Optionally, a classification algorithm is used to classify the obstacles in the laser point cloud image. The classification algorithm may be a Support Vector Machine (SVM) algorithm, a random forest algorithm, a deep learning algorithm, or the like. In order to improve the accuracy of the classification result, the result obtained by the classification algorithm can be sent to an operator, who checks and confirms it.
Optionally, the confidence information is a confidence value. Different confidence values are set for different obstacles in combination with each obstacle's type and volume.
Step 103, establishing a confidence map of the local environment area based on the position information and the confidence information of each position point.
Optionally, a map coordinate system is established, and based on the map coordinate system, coordinate conversion is performed on the position information of each position point to obtain a map coordinate point corresponding to each position point; and establishing a confidence map of the local environment area according to the map coordinate point and the corresponding confidence information.
Optionally, the place where the robot is located is divided into grid cells, and for each grid cell corresponding to an obstacle, the obstacle's confidence information is set as the value of that cell.
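A minimal sketch of this grid construction, assuming a fixed grid size, a metres-per-cell resolution, a map origin at the grid corner, and an Obstacle container introduced only for illustration:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Obstacle:
    # World coordinates (metres) of the position points on the target contour.
    contour_points: list  # e.g. [(x0, y0), (x1, y1), ...]
    confidence: float     # per-type confidence value from step 102

def build_confidence_map(obstacles, size=(200, 200), resolution=0.05):
    """Rasterize per-obstacle confidence values into a grid map.

    resolution: metres per cell (assumed); unobserved cells stay at 0.
    """
    conf_map = np.zeros(size, dtype=np.float32)
    for obs in obstacles:
        for wx, wy in obs.contour_points:
            # World -> map coordinate conversion (origin at grid corner, assumed).
            gx, gy = int(wx / resolution), int(wy / resolution)
            if 0 <= gx < size[0] and 0 <= gy < size[1]:
                conf_map[gx, gy] = obs.confidence
    return conf_map
```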
In this embodiment, the target contour and the obstacle type of an obstacle included in the local environment area are determined based on the laser point cloud image obtained by the laser sensor scanning the local environment area of the space where the robot is located, the obstacle type including a static obstacle type and a dynamic obstacle type; based on the obstacle type and the target contour, confidence information of each position point included in the target contour is determined and used to establish a confidence map of the local environment area. Because different confidence information is set for the target contours of different obstacles according to their obstacle types, dynamic and static obstacles in the local environment area are distinguished. When positioning, the robot can learn the scene distribution of obstacles in the local environment area from the established confidence map, which improves the accuracy of the positioning result and the robustness in dynamic environments.
Based on the embodiment shown in fig. 2, this embodiment relates to an implementation of determining, in step 102, the confidence information of each position point included in the target contour based on the obstacle type and the target contour. The implementation includes:
if the type of the obstacle is a static obstacle type, setting confidence information of a position point corresponding to the obstacle in the target contour as a first numerical value; and if the type of the obstacle is a dynamic obstacle type, setting the confidence information of the position point corresponding to the obstacle in the target contour as a second numerical value, wherein the first numerical value is larger than the second numerical value.
Alternatively, the first and second values may be manually determined based on a plurality of experimental results.
Optionally, the terminal stores a mapping relationship table of the types of obstacles and the confidence level information. And for each obstacle, after obtaining the obstacle type of the obstacle, the processor calls the mapping relation table to obtain corresponding confidence degree information.
Optionally, a difference between the first value and the second value is not less than a preset threshold. For example, the preset threshold is not less than 1.
In this embodiment, a higher confidence value is set for static obstacles and lower confidence information is set for dynamic obstacles, which increases the probability that static-obstacle information is used when the robot performs positioning, improving the robustness of the method and the reliability of the robot's positioning result.
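A minimal sketch of the per-type assignment via the mapping relationship table described above; the concrete values 2.0 and 0.5 are assumptions (the application only requires the first value to exceed the second by at least the preset threshold):

```python
# Assumed confidence values; the application only requires that the first
# value exceed the second by at least the preset threshold (e.g. >= 1).
FIRST_VALUE = 2.0    # static obstacle type
SECOND_VALUE = 0.5   # dynamic obstacle type

# Mapping relationship table of obstacle types to confidence information.
CONFIDENCE_TABLE = {
    "static": FIRST_VALUE,
    "dynamic": SECOND_VALUE,
}

def confidence_for(obstacle_type: str) -> float:
    """Look up the confidence value for an obstacle type (step 102)."""
    return CONFIDENCE_TABLE[obstacle_type]
```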
In one embodiment, the laser point cloud image comprises a plurality of frames of temporally successive images. As shown in fig. 4, based on any of the above embodiments, this embodiment relates to an implementation of determining, in step 101, the target contour of an obstacle included in the local environment area based on the laser point cloud image. The implementation includes the following steps:
step 201, determining an initial contour of each obstacle according to a first frame image in a plurality of frame images.
Optionally, the number of image frames included in the laser point cloud image is 40-60 frames.
Optionally, in order to reduce the computational load on the processor, after the first frame image is obtained, a deep learning algorithm is used to perform contour extraction and obstacle classification on the obstacles in the first frame image, and the result is sent to an operator for checking and confirmation. Subsequent frames then only require contour expansion with an image processing algorithm on the basis of the first frame's result.
And 202, expanding the initial contour of each obstacle based on the initial contour and the laser point cloud image to obtain a target contour of each obstacle.
Optionally, a region growing algorithm or a clustering algorithm is used to expand the initial contour to obtain a target contour of the obstacle.
In this embodiment, the initial contour is expanded based on the initial contour and the laser point cloud image to obtain the target contour of the obstacle, so that the complete contour area of the obstacle is obtained, further improving the robustness of the method.
Further, in one implementation, as shown in fig. 5, step 202 expands the initial contour of each obstacle based on the initial contour and the laser point cloud image to obtain the obstacle's target contour. The implementation includes steps 301 and 302:
step 301, judging whether a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in the candidate image or not according to each contour pixel point in the initial contour.
And the candidate image is an image except the first frame image in the laser point cloud image.
Optionally, the neighborhood pixel points are the pixel points in the 4-neighborhood of the contour pixel point; that is, when the contour pixel point is (x, y), its 4-neighborhood pixel points are: (x-1, y), (x+1, y), (x, y-1) and (x, y+1).
Optionally, the neighborhood pixel points are the pixel points in the 8-neighborhood of the contour pixel point; that is, when the contour pixel point is (x, y), its 8-neighborhood pixel points are: (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1). As shown in fig. 6, for a pixel P, the corresponding neighborhood pixels are labeled Pi (i = 1, 2, …, 8).
Optionally, for each contour pixel point included in the initial contour, whether a connection pixel point exists among its neighborhood pixel points is judged in the subsequent frames: if the distance between the pixel point corresponding to a certain position point in a subsequent frame and a neighborhood pixel point is smaller than a preset threshold, that pixel point is taken as a connection pixel point.
Further, for each contour pixel point corresponding to the obstacle in a candidate image, it is judged whether its neighborhood pixel points have connection pixel points in the images subsequent to that candidate image, where a subsequent image is an image chronologically after the candidate image.
Optionally, a connection pixel point set is obtained from all connection pixel points obtained in the above process.
Step 302, if a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in the candidate image, the initial contour is expanded by using the connection pixel point to obtain the target contour.
Optionally, the target contour is obtained by applying a region growing algorithm to the obtained connection pixel point set and the contour pixel points of the obstacle in the multi-frame images.
Optionally, the initial contour is connected with the connection pixel points corresponding to each frame of the laser point cloud image, thereby expanding the initial contour to obtain the target contour.
In this embodiment, for each contour pixel point in the initial contour, it is judged whether a corresponding neighborhood pixel point has a connection pixel point in a candidate image; if so, the initial contour is expanded by using the connection pixel point to obtain the target contour. By traversing every contour pixel point to obtain its connection pixel points, the completeness of the obstacle's contour expansion is improved, and thus the reliability of the confidence map.
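The sketch below illustrates the neighborhood test and expansion of steps 301-302, assuming contours and candidate frames are represented as sets of pixel coordinates and picking an illustrative value for the preset distance threshold:

```python
import math

# 8-neighborhood offsets around a contour pixel (x, y), matching fig. 6.
NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                    (0, 1), (1, -1), (1, 0), (1, 1)]

def expand_contour(initial_contour, candidate_frames, dist_threshold=1.5):
    """Expand an initial contour with connection pixels from candidate frames.

    initial_contour: set of (x, y) contour pixels from the first frame.
    candidate_frames: list of sets of (x, y) obstacle pixels, one per later frame.
    dist_threshold: assumed value for the preset distance threshold.
    """
    target = set(initial_contour)
    frontier = list(initial_contour)
    while frontier:
        x, y = frontier.pop()
        for dx, dy in NEIGHBOR_OFFSETS:
            nx, ny = x + dx, y + dy
            for frame in candidate_frames:
                for px, py in frame:
                    # A candidate pixel close to a neighborhood pixel is a
                    # connection pixel; use it to grow the contour.
                    if math.hypot(px - nx, py - ny) < dist_threshold and \
                            (px, py) not in target:
                        target.add((px, py))
                        frontier.append((px, py))
    return target
```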
Further, some obstacles in the laser point cloud image may be blocked by other objects. Taking the scene of the robot shown in fig. 7 as an example: for obstacle b, during the robot's driving (shown by the triangle line in the figure), object c may block part of the area where obstacle b is located, so that the contour of obstacle b appears broken.
Therefore, for the above situation, in one implementation based on any of the above embodiments, as shown in fig. 8, step 202 expands the initial contour based on the initial contour and the laser point cloud image to obtain the target contour of the obstacle through steps 401, 402 and 403:
step 401, a first contour of an obstacle corresponding to a reference image is obtained.
The reference image is a certain frame image except the first frame image in the laser point cloud image.
Optionally, based on a time sequence order of each frame of image, each frame of image except the first frame of image in the laser point cloud image is sequentially used as a reference image, and the time sequence order is a shooting order corresponding to each frame of image.
Optionally, for each obstacle in the reference image, a point cloud clustering algorithm is used to obtain a first contour of each obstacle.
Step 402, obtaining a fitted curve corresponding to the second contour.
And the second contour is the contour of an obstacle obtained on the basis of an image which is positioned in front of the reference image in time sequence in the laser point cloud image.
Optionally, the second contour is a contour obtained based on steps 301 to 302.
Optionally, a curve fitting algorithm is used to fit the contour pixel points included in the second contour to obtain the fitting curve, where the curve fitting algorithm includes straight-line fitting, univariate polynomial fitting, and Bezier curve fitting.
And 403, judging whether an intersecting pixel point intersecting the fitting curve exists in the first contour, and if the intersecting pixel point intersecting the fitting curve exists in the first contour, expanding the initial contour by using the intersecting pixel point to obtain a target contour.
Optionally, if the first contour intersects the fitting curve, the pixel point at the intersection position is taken as an intersecting pixel point; alternatively, pixel points within a preset distance of the intersection position are taken as intersecting pixel points.
Optionally, based on the fitting curve, the first contour is connected with the intersecting pixel points to expand the initial contour, so as to obtain the target contour.
As shown in fig. 9, for the scene shown in fig. 7, the contour of obstacle b is expanded by the method provided in this embodiment using straight-line fitting, so that the broken left and right portions are connected to each other, finally yielding the target contour of obstacle b.
In this embodiment, for the case where an obstacle's contour is broken because of occlusion, curve fitting is used to expand and complete the contour information, improving the reliability of the confidence map.
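As an illustration of steps 401-403 for the straight-line fitting case shown in fig. 9, the sketch below fits a line to the second contour with NumPy's polyfit and collects first-contour pixels near that line as intersecting pixels; the tolerance value and the line model are assumptions (the application equally allows polynomial and Bezier fits):

```python
import numpy as np

def bridge_broken_contour(first_contour, second_contour, tol=1.0):
    """Find intersecting pixels of the first contour with a line fitted
    to the second contour (straight-line fitting case).

    first_contour, second_contour: arrays of (x, y) pixel coordinates.
    tol: assumed maximum point-to-line distance to count as intersecting.
    """
    pts = np.asarray(second_contour, dtype=float)
    # Fit y = k*x + b to the second contour's pixel points.
    k, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    intersecting = []
    for x, y in first_contour:
        # Perpendicular distance from (x, y) to the line k*x - y + b = 0.
        dist = abs(k * x - y + b) / np.hypot(k, 1.0)
        if dist <= tol:
            intersecting.append((x, y))
    return intersecting
```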
In one embodiment, based on the embodiment shown in fig. 2, the mapping method further includes the following steps:
based on the laser point cloud image, a probability grid map corresponding to the local environment area is established, and the probability grid map comprises the occupation probability information of each position point in the local environment area.
The occupation probability information is obtained from the probability grid map corresponding to the local environment area. The probability grid map is a submap established based on the laser point cloud images captured by the laser sensor. It is a widely used map format: the place where the robot is located is divided into grid cells, and each cell has only two states, occupied or free. The value in a cell is an occupancy probability value (i.e., the occupation probability information), which represents the probability that the cell is occupied.
In this embodiment, the probability grid map is established so that the robot can conveniently be positioned based on it.
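A minimal occupancy-grid sketch in the same spirit, using the standard log-odds formulation; the application does not specify how the occupancy probabilities are computed, so the update constants here are assumptions:

```python
import numpy as np

class ProbabilityGridMap:
    """Grid of occupancy probabilities; each cell is occupied or free."""

    def __init__(self, size=(200, 200)):
        self.log_odds = np.zeros(size, dtype=np.float32)  # p = 0.5 everywhere

    def update(self, cell, hit, l_hit=0.9, l_miss=-0.4):
        # Standard log-odds update; l_hit / l_miss are assumed constants.
        self.log_odds[cell] += l_hit if hit else l_miss

    def occupancy(self, cell) -> float:
        # Convert log-odds back to the occupancy probability p(x, y).
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds[cell]))
```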
In one embodiment, as shown in fig. 10, there is provided a mapping method, including:
step 501, obtaining a laser point cloud image obtained by shooting a local environment area of a space where the robot is located by a laser sensor, wherein the laser point cloud image comprises a plurality of frames of continuous images in time sequence.
Step 502, determining an initial contour of each obstacle according to a first frame image in the multi-frame images.
Step 503, for each contour pixel point in the initial contour, determining whether a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in a candidate image, where the candidate image is an image in the laser point cloud image other than the first frame image.
Step 504, if a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in the candidate image, expanding the initial contour by using the connection pixel point to obtain a target contour.
And 505, acquiring a first outline of the obstacle corresponding to a reference image, wherein the reference image is a certain frame image except the first frame image in the laser point cloud image.
Step 506, a fitting curve corresponding to a second contour is obtained, wherein the second contour is a contour of the obstacle obtained on the basis of an image which is located in front of the reference image in time sequence in the laser point cloud image.
And 507, judging whether an intersecting pixel point intersecting with the fitting curve exists in the first contour, and if the intersecting pixel point intersecting with the fitting curve exists in the first contour, expanding the initial contour by using the intersecting pixel point to obtain a target contour.
And step 508, acquiring the obstacle types of the obstacles, wherein the obstacle types comprise a static obstacle type and a dynamic obstacle type.
Step 509, if the type of the obstacle is a static obstacle type, setting confidence information of a position point corresponding to the obstacle in the target contour as a first numerical value; and if the type of the obstacle is the dynamic obstacle type, setting the confidence information of the position point corresponding to the obstacle in the target contour as a second numerical value.
Wherein the first value is greater than the second value.
Step 510, establishing a confidence map of the local environment area based on the position information and the confidence information of each position point.
And 511, establishing a probability grid map corresponding to the local environment area based on the laser point cloud image.
The probability grid map comprises occupation probability information of each position point in the local environment area.
In this embodiment, different confidence information is set for the contours of different obstacles according to their obstacle types, so that dynamic and static obstacles in the local environment area are distinguished; a higher confidence is given to static obstacles and a lower confidence to dynamic obstacles, so the robot can learn the scene distribution of the local environment area when positioning, improving the accuracy of the positioning result and the robustness in dynamic environments. In addition, expanding the obstacle contours further improves the reliability of the method.
The laser-based SLAM technology further includes a positioning process, in which the laser data of each frame is matched against the constructed map to determine the robot's pose information. For this positioning process, please refer to fig. 11. As shown in fig. 11, an embodiment of the present application provides a positioning method, including the following steps:
step 601, obtaining a current laser point cloud image obtained by shooting a local environment area of a space where the robot is located by a laser sensor, and determining an obstacle included in the current laser point cloud image based on the current laser point cloud image.
Step 602, determining confidence information and occupancy probability information of each position point in the target contour of the obstacle based on the confidence map and the probability grid map of the local environment area.
Wherein the confidence information is determined based on obstacle types of the obstacles, the obstacle types including static obstacle types and dynamic obstacle types.
Specifically, based on a confidence map of a local environment region, determining confidence information of each position point in a target contour of an obstacle included in the local environment region; and determining the occupation probability information of each position point in the target contour of the obstacle included in the local environment area based on the probability grid map of the local environment area.
And 603, determining the current pose information of the robot according to the confidence information, the occupation probability information and the current laser point cloud image.
The pose information includes position information and orientation information of the robot.
Optionally, the target contour regions of obstacles in areas with higher confidence information are selected as reference objects, and then the current laser point cloud image is matched with the probability grid map based on the occupation probability information and the current laser point cloud image to obtain the current pose information of the robot.
In this embodiment, the confidence information and occupation probability information of each position point in the obstacle's target contour are determined from the confidence map and probability grid map of the local environment area, and the current pose information of the robot takes the confidence information, the occupation probability information and the current laser point cloud image fully into account. The scene distribution of obstacles in the local environment area is thus determined from the confidence information, improving the accuracy of the positioning result and the robustness in dynamic environments.
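To make this matching step concrete, here is a brute-force sketch that scores candidate poses against the two maps; the application does not specify the matcher, so the exhaustive pose search, the candidate pose set, and the resolution are all assumptions for illustration:

```python
import numpy as np

def scan_match(scan_points, prob_map, conf_map, candidate_poses, resolution=0.05):
    """Pick the candidate pose whose transformed scan scores highest.

    scan_points: (N, 2) array of scan points in the robot frame (metres);
    candidate_poses: iterable of (x, y, theta) hypotheses (assumed given);
    the score of a pose is the sum of p * w over the cells hit by the scan.
    """
    best_pose, best_score = None, -np.inf
    for x, y, theta in candidate_poses:
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        # Transform scan points into the map frame for this hypothesis.
        world = scan_points @ rot.T + np.array([x, y])
        cells = (world / resolution).astype(int)
        score = 0.0
        for gx, gy in cells:
            if 0 <= gx < prob_map.shape[0] and 0 <= gy < prob_map.shape[1]:
                score += prob_map[gx, gy] * conf_map[gx, gy]
        if score > best_score:
            best_pose, best_score = (x, y, theta), score
    return best_pose, best_score
```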
In one embodiment, the confidence information corresponding to the static obstacle type is greater than the confidence information corresponding to the dynamic obstacle type. As shown in fig. 12, based on the embodiment shown in fig. 11, this embodiment relates to determining, in step 603, the current pose information of the robot according to the confidence information, the occupation probability information and the current laser point cloud image, and includes steps 701 and 702:
step 701, regarding each position point in the target contour, taking the product of the occupation probability information of the position point and the confidence information of the position point as the matching score of the position point.
Specifically, the expression corresponding to the matching score is as follows:
score(x,y)=p(x,y)*w(x,y),
wherein score (x, y) is represented as a matching score corresponding to the position point (x, y), p (x, y) is represented as occupation probability information corresponding to the position point (x, y), and w (x, y) is represented as confidence information corresponding to the position point (x, y).
And step 702, determining the current pose information of the robot based on the matching score and the current laser point cloud image.
Optionally, for each obstacle, the matching score corresponding to the obstacle is calculated by using the following formula:
Score = score(x1, y1) + score(x2, y2) + … + score(xj, yj),
where Score denotes the matching score corresponding to the obstacle, j denotes the number of position points included in the obstacle, and score(x, y) denotes the matching score corresponding to a position point (x, y) included in the obstacle.
Optionally, the obstacles are sorted by their matching scores from high to low, the top-ranked obstacles are selected as reference objects, and the pose of the robot is determined based on the matching scores and the current laser point cloud image.
In this embodiment, the product of a position point's occupation probability information and its confidence information is used as the position point's matching score, which gives static obstacles a higher matching score and dynamic obstacles a lower one, improving the probability that the robot is positioned based on static obstacles; the method is simple and computationally light.
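A sketch of this scoring and ranking, reusing the grid containers from the earlier sketches; the number k of reference obstacles kept is an assumption:

```python
def matching_score(points, prob_map, conf_map):
    """Sum of score(x, y) = p(x, y) * w(x, y) over an obstacle's points."""
    return sum(prob_map[x, y] * conf_map[x, y] for x, y in points)

def select_reference_obstacles(obstacles, prob_map, conf_map, k=3):
    """Rank obstacles by matching score and keep the top k as references.

    obstacles: list of lists of (x, y) grid cells per obstacle contour;
    prob_map / conf_map: 2-D arrays as in the earlier sketches; k is assumed.
    """
    ranked = sorted(obstacles,
                    key=lambda pts: matching_score(pts, prob_map, conf_map),
                    reverse=True)
    return ranked[:k]
```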
In one embodiment, as shown in fig. 13, there is provided a positioning method including:
step 801, acquiring a current laser point cloud image obtained by shooting a local environment area of a space where the robot is located by a laser sensor, and determining an obstacle included in the current laser point cloud image based on the current laser point cloud image.
Step 802, determining confidence information and occupation probability information of each position point in the target contour of the obstacle based on the confidence map and the probability grid map of the local environment area.
Wherein the confidence information is determined based on the obstacle type of the obstacle, the obstacle type including a static obstacle type and a dynamic obstacle type; the confidence information corresponding to the static obstacle type is greater than the confidence information of the dynamic obstacle type.
And 803, regarding each position point in the target contour, taking the product of the occupation probability information of the position point and the confidence coefficient information of the position point as the matching score of the position point.
And step 804, determining the current pose information of the robot based on the matching score and the current laser point cloud image.
In this embodiment, static obstacles are given a higher matching score and dynamic obstacles a lower one, which improves the probability that the robot is positioned based on static obstacles and thereby improves the accuracy of the positioning result and the robustness in dynamic environments.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated otherwise, the steps need not be performed in the exact order shown and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a mapping apparatus for implementing the above mapping method. The solution to the problem provided by this apparatus is similar to that described in the method above, so for specific limitations in one or more embodiments of the mapping apparatus provided below, reference may be made to the limitations on the mapping method above; details are not repeated here.
In one embodiment, as shown in fig. 14, there is provided a map building apparatus including: an obtaining module 100, a determining module 200 and a mapping module 300, wherein:
the acquisition module 100 is configured to acquire a laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determine a target contour of an obstacle included in the local environment area based on the laser point cloud image;
the determining module 200 is configured to obtain an obstacle type of an obstacle, and determine confidence information of each position point included in a target contour based on the obstacle type of the obstacle and the target contour, where the obstacle type includes a static obstacle type and a dynamic obstacle type;
the mapping module 300 is configured to establish a confidence map of the local environment region based on the location information and the confidence information of each location point.
In an embodiment, the determining module 200 is specifically configured to:
if the type of the obstacle is a static obstacle type, setting confidence information of a position point corresponding to the obstacle in the target contour as a first numerical value; if the type of the obstacle is a dynamic obstacle type, setting confidence information of a position point corresponding to the obstacle in the target contour as a second numerical value; wherein the first value is greater than the second value.
In one embodiment, the laser point cloud image comprises a plurality of frames of temporally successive images; the obtaining module 100 is specifically configured to:
determining the initial contour of each obstacle according to the first frame of image in the multi-frame images;
and expanding the initial contour based on the initial contour and the laser point cloud image to obtain the target contour of the obstacle.
In an embodiment, the obtaining module 100 is further specifically configured to:
for each contour pixel point in the initial contour, judging whether a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in a candidate image, wherein the candidate image is an image in the laser point cloud image other than the first frame image;
and if a neighborhood pixel point corresponding to the contour pixel point has a connection pixel point in the candidate image, expanding the initial contour by using the connection pixel point to obtain the target contour.
In an embodiment, the obtaining module 100 is further specifically configured to:
acquiring a first outline of an obstacle corresponding to a reference image, wherein the reference image is a certain frame image except a first frame image in a laser point cloud image;
acquiring a fitting curve corresponding to a second contour, wherein the second contour is the contour of an obstacle obtained on the basis of an image which is positioned in front of the reference image in time sequence in the laser point cloud image;
and judging whether an intersecting pixel point intersecting with the fitting curve exists in the first contour, if so, expanding the initial contour by using the intersecting pixel point to obtain a target contour.
In one embodiment, the mapping apparatus is further configured to:
based on the laser point cloud image, a probability grid map corresponding to the local environment area is established, and the probability grid map comprises the occupation probability information of each position point in the local environment area.
The modules in the mapping device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Based on the same inventive concept, the embodiment of the present application further provides a positioning apparatus for implementing the above-mentioned positioning method. The solution of the problem provided by the device is similar to the solution described in the above method, so the specific limitations in one or more embodiments of the positioning device provided below can refer to the limitations on the positioning method in the above, and are not described herein again.
In one embodiment, as shown in fig. 15, there is provided a positioning device including: an obtaining module 400, a determining module 500 and a positioning module 600, wherein:
the acquisition module 400 is configured to acquire a current laser point cloud image obtained by a laser sensor scanning a local environment area of the space where the robot is located, and determine an obstacle included in the current laser point cloud image based on the current laser point cloud image;
the determining module 500 is configured to determine confidence information and occupancy probability information of each position point in a target contour of an obstacle based on a confidence map and a probability grid map of a local environment region, where the confidence information is determined based on an obstacle type of the obstacle, and the obstacle type includes a static obstacle type and a dynamic obstacle type;
and the positioning module 600 is configured to determine the current pose information of the robot according to the confidence information, the occupation probability information, and the current laser point cloud image.
In an embodiment, the confidence information corresponding to the static obstacle type is greater than the confidence information corresponding to the dynamic obstacle type, and the positioning module 600 is specifically configured to:
for each position point in the target contour, taking the product of the occupation probability information of the position point and the confidence information of the position point as the matching score of the position point;
and determining the current pose information of the robot based on the matching score and the current laser point cloud image.
The modules in the positioning device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 16. The computer device includes a processor, a memory, and a communication interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a positioning method.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a robot is provided. The robot comprises a laser sensor, a memory and a processor. The laser sensor is used for scanning a local area of the place where the robot is located to obtain a laser point cloud image; the memory stores a computer program executable on the processor; and the processor, when executing the computer program, implements the following steps:
acquiring a laser point cloud image obtained by the laser sensor scanning a local environment area of the space where the robot is located, and determining a target contour of an obstacle included in the local environment area based on the laser point cloud image;
obtaining the obstacle type of the obstacle, and determining confidence information of each position point included in the target contour based on the obstacle type and the target contour, wherein the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and establishing a confidence map of the local environment area based on the position information and the confidence information of each position point.
In one embodiment, the determining confidence information of each position point included in the target contour based on the type of the obstacle and the target contour includes:
if the obstacle type of the obstacle is the static obstacle type, setting the confidence information of the position points corresponding to the obstacle in the target contour to a first value; if the obstacle type of the obstacle is the dynamic obstacle type, setting the confidence information of the position points corresponding to the obstacle in the target contour to a second value; wherein the first value is greater than the second value.
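By way of illustration and not limitation, a minimal Python sketch of this confidence-map construction follows; the concrete values 0.9 and 0.3 and all identifiers are assumptions (the disclosure only requires that the first value exceed the second):

    import numpy as np

    STATIC_CONFIDENCE = 0.9   # assumed "first value"
    DYNAMIC_CONFIDENCE = 0.3  # assumed "second value"; must be < STATIC_CONFIDENCE

    def build_confidence_map(grid_shape, obstacles):
        """obstacles: list of (contour_points, obstacle_type) tuples, where
        contour_points holds (row, col) grid indices of the target contour
        and obstacle_type is "static" or "dynamic"."""
        confidence_map = np.zeros(grid_shape, dtype=np.float32)
        for contour_points, obstacle_type in obstacles:
            value = STATIC_CONFIDENCE if obstacle_type == "static" else DYNAMIC_CONFIDENCE
            for row, col in contour_points:
                confidence_map[row, col] = value
        return confidence_map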
In one embodiment, the laser point cloud image comprises a plurality of frames of temporally successive images, and determining a target contour of an obstacle included in the local environment area based on the laser point cloud image includes:
determining an initial contour of each obstacle from the first frame image in the multiple frames of images;
and expanding the initial contour based on the initial contour and the laser point cloud image to obtain the target contour of the obstacle.
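One plausible realization of the initial-contour step, assuming the first frame has been rasterized to a binary occupancy image, uses OpenCV's contour extraction; the library choice and identifiers are assumptions, since the disclosure names neither:

    import cv2
    import numpy as np

    def initial_contours(first_frame):
        """Return one (N, 2) array of (col, row) contour points per obstacle
        detected in the binarized first frame (a 2-D NumPy array)."""
        binary = (first_frame > 0).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        return [c.reshape(-1, 2) for c in contours]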
In one embodiment, the expanding the initial contour based on the initial contour and the laser point cloud image to obtain a target contour of the obstacle includes:
for each contour pixel point in the initial contour, judging whether a neighborhood pixel point corresponding to the contour pixel point has a connected pixel point in a candidate image, wherein the candidate image is an image other than the first frame image in the laser point cloud image; and if a neighborhood pixel point corresponding to the contour pixel point has a connected pixel point in the candidate image, expanding the initial contour with the connected pixel point to obtain the target contour.
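A sketch of this neighborhood-connectivity expansion, assuming an 8-neighborhood and a 2-D NumPy-array candidate image (both assumptions), might read:

    def expand_contour_by_connectivity(initial_contour, candidate_image):
        """Merge into the contour every occupied pixel of the candidate image
        (a frame after the first) that lies in the 8-neighborhood of a contour
        pixel; such pixels play the role of the connected pixel points."""
        height, width = candidate_image.shape
        expanded = set(map(tuple, initial_contour))
        for row, col in initial_contour:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    r, c = row + dr, col + dc
                    if (dr or dc) and 0 <= r < height and 0 <= c < width \
                            and candidate_image[r, c] > 0:
                        expanded.add((r, c))
        return expanded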
In one embodiment, the expanding the initial contour based on the initial contour and the laser point cloud image to obtain a target contour of the obstacle includes:
acquiring a first contour of the obstacle corresponding to a reference image, wherein the reference image is a frame image other than the first frame image in the laser point cloud image; acquiring a fitted curve corresponding to a second contour, wherein the second contour is a contour of the obstacle obtained from an image that temporally precedes the reference image in the laser point cloud image; and judging whether the first contour contains an intersecting pixel point that intersects the fitted curve, and if so, expanding the initial contour with the intersecting pixel point to obtain the target contour.
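The curve-fitting variant could be sketched as below; the choice of a second-order polynomial fit and the half-cell intersection tolerance are assumptions, since the disclosure specifies neither:

    import numpy as np

    def expand_by_curve_intersection(initial_contour, first_contour, second_contour):
        """Fit a curve to the second contour (from images preceding the
        reference image), then merge into the initial contour those pixels of
        the first contour (from the reference image) that intersect the curve."""
        pts = np.asarray(second_contour, dtype=float)
        coeffs = np.polyfit(pts[:, 1], pts[:, 0], deg=2)  # model row as f(col)
        expanded = set(map(tuple, initial_contour))
        for row, col in first_contour:
            if abs(np.polyval(coeffs, col) - row) <= 0.5:  # "intersecting" pixel
                expanded.add((row, col))
        return expanded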
In one embodiment, the processor, when executing the computer program, further performs the steps of:
establishing, based on the laser point cloud image, a probability grid map corresponding to the local environment area, wherein the probability grid map comprises the occupancy probability information of each position point in the local environment area.
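The disclosure does not fix an update rule for the probability grid map; a common choice, shown here purely as an assumed sketch, is the standard log-odds occupancy update:

    import numpy as np

    L_OCCUPIED, L_FREE = 0.85, -0.4  # assumed log-odds increments per scan

    def update_probability_grid(log_odds, hit_cells, free_cells):
        """One scan's update on a 2-D NumPy log-odds grid: cells struck by a
        laser return become more likely occupied, cells traversed by the beam
        become more likely free; the return value converts log-odds back to
        occupancy probabilities."""
        for r, c in hit_cells:
            log_odds[r, c] += L_OCCUPIED
        for r, c in free_cells:
            log_odds[r, c] += L_FREE
        return 1.0 - 1.0 / (1.0 + np.exp(log_odds))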
In one embodiment, a robot is provided. The robot comprises a laser sensor, a memory and a processor, wherein the laser sensor is used for scanning a local area in the place where the robot is located to obtain a laser point cloud image, the memory stores a computer program executable on the processor, and the processor implements the following steps when executing the computer program:
acquiring a current laser point cloud image obtained by the laser sensor scanning a local environment area of the space where the robot is located, and determining, based on the current laser point cloud image, an obstacle included in the current laser point cloud image;
determining confidence information and occupancy probability information of each position point in a target contour of the obstacle based on a confidence map and a probability grid map of the local environment area, wherein the confidence information is determined based on the obstacle type of the obstacle, and the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and determining the current pose information of the robot according to the confidence information, the occupancy probability information and the current laser point cloud image.
In one embodiment, the confidence information corresponding to the static obstacle type is greater than the confidence information corresponding to the dynamic obstacle type, and the determining the current pose information of the robot according to the confidence information, the occupancy probability information and the current laser point cloud image includes:
for each position point in the target contour, taking the product of the occupancy probability information of the position point and the confidence information of the position point as the matching score of the position point; and determining the current pose information of the robot based on the matching scores and the current laser point cloud image.
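By way of illustration and not limitation, the pose determination can be sketched as a search over candidate poses scored with the products above; transform(pose, points), which projects scan points into grid indices under a pose hypothesis, is a hypothetical helper introduced here only for illustration:

    def best_pose(candidate_poses, scan_points, probability_grid, confidence_map,
                  transform):
        """Return the candidate pose whose transformed scan points accumulate
        the highest confidence-weighted occupancy score."""
        def total_score(pose):
            return sum(probability_grid[r, c] * confidence_map[r, c]
                       for r, c in transform(pose, scan_points))
        return max(candidate_poses, key=total_score)

In practice the candidate poses might come from an odometry prior or a coarse-to-fine search window; the disclosure does not specify the search strategy.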
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the mapping method embodiments described above, or the steps of the positioning method embodiments described above, are implemented.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided in this application may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided in this application may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A mapping method, the method comprising:
acquiring a laser point cloud image obtained by a laser sensor scanning a local environment area of a space where a robot is located, and determining a target contour of an obstacle included in the local environment area based on the laser point cloud image;
obtaining the obstacle type of the obstacle, and determining confidence information of each position point included by the target contour based on the obstacle type of the obstacle and the target contour, wherein the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and establishing a confidence map of the local environment area based on the position information of each position point and the confidence information.
2. The method of claim 1, wherein determining confidence information for each location point included in the target contour based on the type of obstacle and the target contour comprises:
if the obstacle type of the obstacle is the static obstacle type, setting the confidence information of a position point corresponding to the obstacle in the target contour to a first value;
if the obstacle type of the obstacle is the dynamic obstacle type, setting the confidence information of a position point corresponding to the obstacle in the target contour to a second value;
wherein the first value is greater than the second value.
3. The method according to claim 1 or 2, wherein the laser point cloud image comprises a plurality of frames of temporally successive images, and the determining a target contour of an obstacle included in the local environment area based on the laser point cloud image includes:
determining an initial contour of each obstacle from the first frame image in the multiple frames of images;
and for each obstacle, expanding the initial contour based on the initial contour and the laser point cloud image to obtain a target contour of the obstacle.
4. The method of claim 3, wherein expanding the initial contour based on the initial contour and the laser point cloud image to obtain a target contour of the obstacle comprises:
for each contour pixel point in the initial contour, judging whether a neighborhood pixel point corresponding to the contour pixel point has a connected pixel point in a candidate image, wherein the candidate image is an image other than the first frame image in the laser point cloud image;
and if the neighborhood pixel point corresponding to the contour pixel point has a connected pixel point in the candidate image, expanding the initial contour with the connected pixel point to obtain the target contour.
5. The method of claim 3, wherein expanding the initial contour based on the initial contour and the laser point cloud image to obtain a target contour of the obstacle comprises:
acquiring a first contour of the obstacle corresponding to a reference image, wherein the reference image is a frame image other than the first frame image in the laser point cloud image;
acquiring a fitted curve corresponding to a second contour, wherein the second contour is a contour of the obstacle obtained from an image that temporally precedes the reference image in the laser point cloud image;
and judging whether the first contour contains an intersecting pixel point that intersects the fitted curve, and if so, expanding the initial contour with the intersecting pixel point to obtain the target contour.
6. The method of claim 1, further comprising:
and establishing a probability grid map corresponding to the local environment area based on the laser point cloud image, wherein the probability grid map comprises the occupancy probability information of each position point in the local environment area.
7. A positioning method, the method comprising:
acquiring a current laser point cloud image obtained by a laser sensor scanning a local environment area of a space where a robot is located, and determining an obstacle included in the current laser point cloud image based on the current laser point cloud image;
determining confidence information and occupancy probability information of each position point in a target contour of the obstacle based on a confidence map and a probability grid map of the local environment area, wherein the confidence information is determined based on obstacle types of the obstacle, and the obstacle types comprise a static obstacle type and a dynamic obstacle type;
and determining current pose information of the robot according to the confidence information, the occupancy probability information and the current laser point cloud image.
8. The method of claim 7, wherein the confidence information corresponding to the static obstacle type is greater than the confidence information corresponding to the dynamic obstacle type, and the determining the current pose information of the robot according to the confidence information, the occupancy probability information and the current laser point cloud image comprises:
for each position point in the target contour, taking the product of the occupancy probability information of the position point and the confidence information of the position point as a matching score of the position point;
and determining the current pose information of the robot based on the matching score and the current laser point cloud image.
9. An apparatus for creating a map, the apparatus comprising:
the acquisition module is used for acquiring a laser point cloud image obtained by a laser sensor scanning a local environment area of a space where the robot is located, and determining a target contour of an obstacle in the local environment area based on the laser point cloud image;
the determining module is used for acquiring the obstacle type of the obstacle, and determining confidence information of each position point included by the target contour based on the obstacle type of the obstacle and the target contour, wherein the obstacle type comprises a static obstacle type and a dynamic obstacle type;
and the mapping module is used for establishing a confidence map of the local environment area based on the position information of each position point and the confidence information.
10. A positioning device, characterized in that the device comprises:
the acquisition module is used for acquiring a current laser point cloud image obtained by a laser sensor scanning a local environment area of a space where the robot is located, and determining an obstacle included in the current laser point cloud image based on the current laser point cloud image;
a determining module, configured to determine confidence information and occupancy probability information of each position point in a target contour of the obstacle based on a confidence map and a probability grid map of the local environment region, where the confidence information is determined based on an obstacle type of the obstacle, and the obstacle type includes a static obstacle type and a dynamic obstacle type;
and the positioning module is used for determining the current pose information of the robot according to the confidence information, the occupancy probability information and the current laser point cloud image.
11. A robot, characterized in that the robot comprises a laser sensor, a memory and a processor, wherein the laser sensor is used for scanning a local area in a place where the robot is located to obtain a laser point cloud image, the memory stores a computer program executable on the processor, and the processor implements the steps of the mapping method according to any one of claims 1 to 6, or the steps of the positioning method according to any one of claims 7 to 8, when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the mapping method according to any one of claims 1 to 6, or the steps of the positioning method according to any one of claims 7 to 8.
CN202210888954.5A 2022-07-27 2022-07-27 Mapping method, positioning method, device, robot and storage medium Pending CN115016507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210888954.5A CN115016507A (en) 2022-07-27 2022-07-27 Mapping method, positioning method, device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210888954.5A CN115016507A (en) 2022-07-27 2022-07-27 Mapping method, positioning method, device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN115016507A 2022-09-06

Family

ID=83065796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210888954.5A Pending CN115016507A (en) 2022-07-27 2022-07-27 Mapping method, positioning method, device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN115016507A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330969A (en) * 2022-10-12 2022-11-11 之江实验室 Local static environment vectorization description method for ground unmanned vehicle
CN115972217A (en) * 2023-03-20 2023-04-18 深圳鹏行智能研究有限公司 Monocular camera-based map building method and robot
CN116088503A (en) * 2022-12-16 2023-05-09 深圳市普渡科技有限公司 Dynamic obstacle detection method and robot

Similar Documents

Publication Publication Date Title
CN115016507A (en) Mapping method, positioning method, device, robot and storage medium
CN109978756B (en) Target detection method, system, device, storage medium and computer equipment
US9298990B2 (en) Object tracking method and device
CN112288770A (en) Video real-time multi-target detection and tracking method and device based on deep learning
Boniardi et al. Robot localization in floor plans using a room layout edge extraction network
JP6547744B2 (en) Image processing system, image processing method and program
CN112419368A (en) Method, device and equipment for tracking track of moving target and storage medium
CN113808253A (en) Dynamic object processing method, system, device and medium for scene three-dimensional reconstruction
CN111652181B (en) Target tracking method and device and electronic equipment
US11335025B2 (en) Method and device for joint point detection
EP3942462B1 (en) Convolution neural network based landmark tracker
CN114663502A (en) Object posture estimation and image processing method and related equipment
US11790661B2 (en) Image prediction system
CN111968134A (en) Object segmentation method and device, computer readable storage medium and computer equipment
CA3136990A1 (en) A human body key point detection method, apparatus, computer device and storage medium
CN112241646A (en) Lane line recognition method and device, computer equipment and storage medium
US12033352B2 (en) Methods and systems for generating end-to-end model to estimate 3-dimensional(3-D) pose of object
CN111507219A (en) Action recognition method and device, electronic equipment and storage medium
CN115393538A (en) Visual SLAM method and system for indoor dynamic scene based on deep learning
CN116481517B (en) Extended mapping method, device, computer equipment and storage medium
CN112884804A (en) Action object tracking method and related equipment
CN112132025A (en) Emergency lane image processing method and device, computer equipment and storage medium
CN113610864B (en) Image processing method, device, electronic equipment and computer readable storage medium
Zieliński et al. Keyframe-based dense mapping with the graph of view-dependent local maps
Bianco et al. Sensor placement optimization in buildings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination