CN116088503A - Dynamic obstacle detection method and robot

Dynamic obstacle detection method and robot

Info

Publication number
CN116088503A
CN116088503A (application number CN202211624831.7A)
Authority
CN
China
Prior art keywords
map
target
obstacle
grid
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211624831.7A
Other languages
Chinese (zh)
Inventor
Zhou Xiaofan (周晓帆)
Tang Yukun (唐玉坤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202211624831.7A
Publication of CN116088503A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision

Abstract

The application relates to a dynamic obstacle detection method and a robot. The method is applied to a robot on which a perception sensor for detecting environmental information is mounted. The method comprises: obtaining an initial probability map corresponding to a target area, and obtaining point cloud data produced by the perception sensor detecting the target area; converting the point cloud data into a map mask, in which the pixel value of each pixel point indicates whether a point cloud projection exists; updating the initial probability map based on the map mask to obtain a target probability map, which characterizes the probability that each grid in the target area is occupied by an obstacle; clustering the pixel points in the map mask to obtain candidate obstacles; and filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles. The method can improve the detection accuracy of dynamic obstacles.

Description

Dynamic obstacle detection method and robot
Technical Field
The application relates to the technical field of robots, in particular to a dynamic obstacle detection method and a robot.
Background
With the quickening pace of modern life and advances in technology, robots now serve homes, municipal facilities, airports, hotels, and other scenarios. While moving, a robot must avoid obstacles, which can be classified as static or dynamic. Compared with static obstacles, dynamic obstacles have changing positions, complex motion states, and pose a greater risk; if a dynamic obstacle is not recognized, the robot may easily collide with it.
In conventional technology, dynamic obstacles are detected with an adjacent-frame comparison method, which distinguishes static from dynamic obstacles by analyzing the absolute position coordinates of the same obstacle across adjacent sampling periods.
However, detection results obtained with the adjacent-frame comparison method are easily affected by noisy data, so this conventional detection method is not accurate enough.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a dynamic obstacle detection method, apparatus, robot, computer-readable storage medium, and computer program product that can improve detection accuracy.
In a first aspect, the present application provides a dynamic obstacle detection method applied to a robot on which a perception sensor for detecting environmental information is mounted. The method comprises the following steps:
acquiring an initial probability map corresponding to a target area, and acquiring point cloud data obtained by detecting the target area by a perception sensor;
converting the point cloud data into a map mask, wherein the pixel value of each pixel point in the map mask represents whether point cloud projection exists or not;
updating the initial probability map based on the map mask to obtain a target probability map, the target probability map characterizing the probability that each grid in the target area is occupied by an obstacle;
clustering the pixel points in the map mask to obtain candidate obstacles;
and filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles.
In one embodiment, converting the point cloud data into a map mask includes: obtaining a global static map corresponding to the target area; dividing a preset horizontal area centered on the robot into grids to construct a preset grid map; converting the point cloud data from the robot coordinate system to the map coordinate system of the global static map, and projecting the converted point cloud data onto the preset grid map to obtain a two-dimensional point cloud grid map; and setting the pixel value of each first grid (one with a point cloud projection) in the two-dimensional point cloud grid map to a first value and the pixel value of each second grid (one without a point cloud projection) to a second value, thereby obtaining the map mask.
In one embodiment, updating the initial probability map based on the map mask to obtain the target probability map includes: determining the third grid corresponding to each first grid in the initial probability map and increasing its probability value by a preset value; determining the fourth grid corresponding to each second grid in the initial probability map and decreasing its probability value by the preset value; and obtaining the target probability map from the adjusted probability values.
In one embodiment, filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles includes: performing obstacle-shape filtering on the candidate obstacles to obtain a first filtering result; performing probability filtering on the first filtering result based on the target probability map to obtain a second filtering result; performing occlusion filtering on the second filtering result to obtain a third filtering result; and performing static-obstacle filtering on the third filtering result based on the global static map corresponding to the target area to obtain the dynamic obstacles.
In one embodiment, performing probability filtering on the first filtering result based on the target probability map to obtain a second filtering result includes: for a current candidate obstacle in the first filtering result, determining the probability value, in the target probability map, of the grid corresponding to each of its pixel points; determining a comprehensive probability value of the current candidate obstacle from the determined probability values; and screening the candidate obstacles by their comprehensive probability values to obtain the second filtering result.
In one embodiment, performing static-obstacle filtering on the third filtering result based on the global static map corresponding to the target area to obtain the dynamic obstacles includes: obtaining the global static map corresponding to the target area; calculating, for each candidate obstacle in the third filtering result, the proportion of its pixel points that appear in the global static map; and removing the candidate obstacles whose proportion exceeds a preset proportion, thereby obtaining the dynamic obstacles.
In one embodiment, after filtering the candidate obstacles to obtain the dynamic obstacles, the method further includes: obtaining the actual position of each dynamic obstacle at the current moment, and obtaining the estimated position at the current moment of each historical target in a historical target list, the historical target list being the set of dynamic obstacles detected over a historical time period; updating the historical target list based on the actual positions and the estimated positions; and screening out, from the updated historical target list, a target dynamic obstacle satisfying a stable-motion condition for tracking.
In one embodiment, updating the historical target list based on the actual positions and the estimated positions includes: for each current dynamic obstacle, matching the current dynamic obstacle against the historical targets according to its actual position and their estimated positions; if the matching succeeds, taking the actual position of the current dynamic obstacle as the actual position of the matched historical target at the current moment; if the matching fails, adding the current dynamic obstacle to the historical target list as a new target; and obtaining the updated historical target list based on at least one of the matched historical targets' actual positions at the current moment or the newly added targets.
In one embodiment, screening out a target dynamic obstacle satisfying a stable-motion condition from the updated historical target list for tracking includes: calculating the actual velocity and actual acceleration of each intermediate target at the current moment from its actual positions in the updated historical target list; and selecting the target dynamic obstacle for tracking when at least one of the number of successful matches, the number of failed matches, the actual velocity, or the actual acceleration of the intermediate target satisfies a preset condition.
In a second aspect, the present application further provides a dynamic obstacle detection device applied to a robot on which a perception sensor for detecting environmental information is mounted. The device comprises:
an acquisition module, used for acquiring an initial probability map corresponding to the target area and acquiring point cloud data obtained by the perception sensor detecting the target area;
a conversion module, used for converting the point cloud data into a map mask, the pixel value of each pixel point in the map mask indicating whether a point cloud projection exists;
an updating module, used for updating the initial probability map based on the map mask to obtain a target probability map, the target probability map characterizing the probability that each grid in the target area is occupied by an obstacle;
a clustering module, used for clustering the pixel points in the map mask to obtain candidate obstacles;
and a filtering module, used for filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles.
In a third aspect, the present application further provides a robot on which a perception sensor for detecting environmental information is mounted. The robot comprises a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
acquiring an initial probability map corresponding to a target area, and acquiring point cloud data obtained by detecting the target area by a perception sensor;
converting the point cloud data into a map mask, wherein the pixel value of each pixel point in the map mask represents whether point cloud projection exists or not;
updating the initial probability map based on the map mask to obtain a target probability map; the target probability map characterizes probability values of each grid occupied by obstacles in the target area;
clustering the pixel points in the map mask to obtain candidate obstacles;
and filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps:
acquiring an initial probability map corresponding to a target area, and acquiring point cloud data obtained by detecting the target area by a perception sensor;
converting the point cloud data into a map mask, wherein the pixel value of each pixel point in the map mask represents whether point cloud projection exists or not;
updating the initial probability map based on the map mask to obtain a target probability map; the target probability map characterizes probability values of each grid occupied by obstacles in the target area;
clustering the pixel points in the map mask to obtain candidate obstacles;
and filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an initial probability map corresponding to a target area, and acquiring point cloud data obtained by detecting the target area by a perception sensor;
converting the point cloud data into a map mask, wherein the pixel value of each pixel point in the map mask represents whether point cloud projection exists or not;
updating the initial probability map based on the map mask to obtain a target probability map; the target probability map characterizes probability values of each grid occupied by obstacles in the target area;
clustering the pixel points in the map mask to obtain candidate obstacles;
and filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles.
With the above dynamic obstacle detection method, device, robot, storage medium, and computer program product, the initial probability map corresponding to the target area is obtained together with the point cloud data produced by the perception sensor detecting the target area; the point cloud data accurately represents the position of obstacles in the target area. The point cloud data is converted into a map mask in which the pixel value of each pixel point indicates whether a point cloud projection exists, the initial probability map is updated based on the map mask to obtain a target probability map, and the pixel points in the map mask are clustered to obtain candidate obstacles. Because the target probability map characterizes the probability that each grid in the target area is occupied by an obstacle, filtering the candidate obstacles based on the target probability map separates dynamic obstacles from static ones, yielding the dynamic obstacles. By combining the point cloud data, the initial probability map, and the target probability map, the resulting detection has a higher accuracy rate, achieving the aim of improving the detection accuracy of dynamic obstacles.
Drawings
FIG. 1 is a flow chart of a dynamic obstacle detection method according to an embodiment;
FIG. 2A is a diagram of a map mask corresponding to a previous time in one embodiment;
FIG. 2B is a schematic diagram of a target probability map corresponding to a previous time in one embodiment;
FIG. 2C is a diagram of a map mask corresponding to a current time in one embodiment;
FIG. 2D is a schematic diagram of a target probability map corresponding to a current time in one embodiment;
FIG. 3 is a flow chart of a filtering process for candidate obstacles in one embodiment;
FIG. 4 is a flow chart of a dynamic obstacle detection method according to another embodiment;
FIG. 5 is a block diagram of a dynamic obstacle detecting device according to one embodiment;
FIG. 6 is a diagram of the internal structure of the robot in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be understood that the terms "first," "second," "third," "fourth," and the like as used in the embodiments of the present application do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The singular forms "a," "an," or "the" and similar terms do not denote a limitation of quantity, but rather denote the presence of at least one, unless the context clearly dictates otherwise.
In one embodiment, as shown in FIG. 1, a dynamic obstacle detection method is provided. This embodiment is illustrated by applying the method to a robot on which a perception sensor for detecting environmental information is mounted; it will be understood that the method may also be applied to a server, or to a system comprising the robot and a server and implemented through interaction between the robot and the server. In this embodiment, the method includes the following steps:
step 102, acquiring an initial probability map corresponding to the target area, and acquiring point cloud data obtained by detecting the target area by a perception sensor.
The target area is the robot's surroundings, which may contain static obstacles such as walls, tables, and doors.
The initial probability map corresponding to the target area is a pre-constructed grid map (also referred to as a two-dimensional occupancy grid map) in which the probability value of each grid being occupied by an obstacle is a fixed initial value, for example 128.
The point cloud data characterizes the position of obstacles in the target area and may be represented by three-dimensional coordinates (x, y, z) or two-dimensional coordinates (x, y), where x denotes the lateral direction, y the longitudinal direction, and z the vertical direction.
In one embodiment, the point cloud data is a three-dimensional point cloud, that is, a point set represented by three-dimensional coordinates (x, y, z), obtained by detecting the environmental information of the target area with the perception sensor mounted on the robot. The perception sensor may be a depth camera, a single-line lidar, or a multi-line lidar. For example, three-dimensional point cloud data may be obtained by acquiring a depth map of the target area's environmental information with a depth camera (for example, an RGBD camera) mounted on the robot chassis and converting the depth map into a three-dimensional point cloud, or by acquiring the environmental information of the target area with a multi-line lidar (also called a three-dimensional lidar) mounted on the robot chassis.
A depth camera measures the physical distance between objects in the scene and the camera, and generally consists of various lenses and optical sensors. An RGBD camera adds depth measurement to the function of an ordinary RGB (Red-Green-Blue) camera. A multi-line lidar emits laser beams in fixed directions; the emitted beams are reflected when they encounter an obstacle, so the obstacle can be detected and both the distance between the robot and the obstacle and the obstacle's height can be determined.
In another embodiment, the point cloud data is a two-dimensional point cloud, that is, a point set represented by two-dimensional coordinates (x, y), obtained by detecting the environmental information of the target area with the perception sensor mounted on the robot, for example a single-line lidar installed on the robot chassis. A single-line lidar, also called a two-dimensional lidar, emits laser beams in a fixed direction; the emitted beams are reflected when they encounter an obstacle, so the obstacle can be detected and the distance between the robot and the obstacle determined.
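For illustration, a minimal Python sketch of the depth-map-to-point-cloud conversion mentioned above, assuming a standard pinhole camera model; the function name and the intrinsic parameters fx, fy, cx, cy are illustrative and not specified by the disclosure:

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Back-project a depth image (in meters) into a 3D point cloud in the
        camera frame, assuming a pinhole camera model (fx, fy, cx, cy)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx  # image column -> lateral coordinate
        y = (v - cy) * z / fy  # image row -> vertical image coordinate
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no valid depth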
Illustratively, the robot obtains the initial probability map corresponding to the target area. As the robot moves, it begins detecting obstacles in the surrounding environment, acquiring in real time, through the perception sensor installed on its chassis, point cloud data from the environmental information of its field-of-view area; after a preset time period, multiple groups of point cloud data have been accumulated, yielding the point cloud data from detecting the target area. The Field of View (FOV) area is the area that can be scanned in front of the robot and is part of the target area.
Step 104, converting the point cloud data into a map mask, wherein a pixel value of each pixel point in the map mask represents whether a point cloud projection exists.
The map mask is a binary image corresponding to the point cloud data; the pixel value of each pixel point is either a first value (e.g., 1), representing that a point cloud projection exists, or a second value (e.g., 0), representing that no point cloud projection exists.
The point cloud projection is the projection of the point cloud data onto a preset grid map. The preset grid map is constructed by dividing into grids a preset horizontal area centered on the robot at the time the point cloud data is acquired. Each grid in the preset grid map corresponds one-to-one with a grid in the initial probability map, both representing the same position in the preset horizontal area. Preferably, each grid in the preset grid map has the same size as the corresponding grid in the initial probability map.
The preset horizontal area is a planar area centered on the robot in a coordinate system of the lateral and longitudinal directions and is part of the target area; its length, width, and grid size are preset according to detection requirements, which the embodiments of the present application do not limit. For example, the preset horizontal area may be a rectangular area 1 m long and 0.4 m wide constructed around the robot, extending 0.2 m in front of the robot, 0.2 m behind it, 0.5 m to its left, and 0.5 m to its right, with a grid size of 0.05 m × 0.05 m.
Illustratively, the robot obtains a global static map corresponding to the target area; divides a preset horizontal area centered on the robot into grids to construct a preset grid map; converts the point cloud data from the robot coordinate system to the map coordinate system of the global static map, and projects the converted point cloud data onto the preset grid map to obtain a two-dimensional point cloud grid map; and sets the pixel value of each first grid (with a point cloud projection) in the two-dimensional point cloud grid map to a first value and the pixel value of each second grid (without a point cloud projection) to a second value, thereby obtaining the map mask.
The global static map corresponding to the target area is a pre-constructed map with a real physical scale, for example a grid map. It contains the planar position information of the static obstacles in the target area; the position of each static obstacle can be represented by coordinates in the map coordinate system of the global static map, which is a two-dimensional coordinate system.
In one embodiment, pre-constructing the global static map includes continuously scanning the target area with the single-line lidar over a specific period of time, so that the resulting global static map contains no dynamic obstacles.
Each grid in the initial probability map corresponding to the target area corresponds one-to-one with a position in the global static map. For example, when the global static map is a grid map whose grids have the same size as those of the initial probability map, each grid in the initial probability map corresponds one-to-one with a grid in the global static map, both representing the same position in the target area. For instance, a table in the target area occupying the grid in the first row and first column of the initial probability map also occupies the grid in the first row and first column of the global static map, and its position may be represented by the coordinates of that grid's center point in the global static map.
The two-dimensional point cloud grid map is a grid map obtained by projecting converted point cloud data to a preset grid map and comprises a first grid and a second grid, namely each grid in the two-dimensional point cloud grid map is the first grid or the second grid. The first grid is a grid in which point cloud projections exist in the two-dimensional point cloud grid map, and the second grid is a grid in which point cloud projections do not exist in the two-dimensional point cloud grid map.
In one embodiment, the point cloud data is a three-dimensional point cloud; converting it from the robot coordinate system to the map coordinate system of the global static map and projecting the converted data onto the preset grid map to obtain the two-dimensional point cloud grid map includes: discarding the coordinate of the third direction (z, the vertical direction, perpendicular to the first direction x and the second direction y) from the point cloud represented by (x, y, z) to obtain a two-dimensional point cloud represented by (x, y); converting the two-dimensional point cloud from the robot coordinate system to the map coordinate system of the global static map according to a conversion matrix; and projecting the converted two-dimensional point cloud onto the preset grid map to obtain the two-dimensional point cloud grid map.
In one embodiment, the point cloud data is a two-dimensional point cloud; converting it from the robot coordinate system to the map coordinate system of the global static map and projecting the converted data onto the preset grid map includes: converting the point cloud data according to the conversion matrix and projecting the converted data onto the preset grid map to obtain the two-dimensional point cloud grid map.
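A compact sketch of the projection step, assuming the point cloud has already been transformed into the map coordinate system; the grid origin, resolution, and names are illustrative:

    import numpy as np

    def point_cloud_to_mask(points_xy, origin_xy, resolution, rows, cols):
        """Rasterize 2D map-frame points into a binary map mask: a grid gets the
        first value (1) if at least one point projects into it, else the second
        value (0)."""
        mask = np.zeros((rows, cols), dtype=np.uint8)
        # Metric coordinates -> grid indices relative to the preset grid map origin.
        idx = np.floor((points_xy - origin_xy) / resolution).astype(int)
        # Keep only points that fall inside the preset grid map.
        valid = ((idx[:, 0] >= 0) & (idx[:, 0] < cols) &
                 (idx[:, 1] >= 0) & (idx[:, 1] < rows))
        idx = idx[valid]
        mask[idx[:, 1], idx[:, 0]] = 1  # row = y index, column = x index
        return mask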
Because the point cloud data is acquired by a depth camera, multi-line lidar, or single-line lidar installed on the robot chassis, the position information it represents is in the robot coordinate system. To obtain a map mask suitable for updating the initial probability map corresponding to the target area, the point cloud data must therefore be converted from the robot coordinate system to the map coordinate system of the global static map.
In one embodiment, converting the point cloud data from the robot coordinate system to the map coordinate system of the global static map includes: the robot acquires its positioning information in the global static map at the moment the point cloud data is generated, the positioning information comprising the robot's position and heading angle; determines a conversion matrix between the map coordinate system of the global static map and the robot coordinate system according to the positioning information; and converts the point cloud data from the robot coordinate system to the map coordinate system according to the conversion matrix.
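A minimal sketch of building the conversion matrix from the positioning information (position and heading angle), assuming a 2D homogeneous transform; the names are illustrative:

    import numpy as np

    def robot_to_map_transform(x, y, heading):
        """Homogeneous 2D transform taking points from the robot frame to the
        map frame, given the robot pose (x, y, heading) in the global static map."""
        c, s = np.cos(heading), np.sin(heading)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0.0, 0.0, 1.0]])

    # Usage: append 1 to a robot-frame point and multiply.
    # p_map = robot_to_map_transform(px, py, yaw) @ np.array([qx, qy, 1.0])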
In one embodiment, acquiring positioning information of a robot in a global static map at a time point of generating point cloud data includes: the robot acquires positioning information of the robot in the global static map at the moment of generating the point cloud data through a positioning module arranged in the robot; the positioning module comprises at least one of a first positioning unit, a second positioning unit, a third positioning unit or a fourth positioning unit, and the positioning information is obtained by weighting and summing at least one of the first positioning information, the second positioning information, the third positioning information or the fourth positioning information.
The first positioning unit is a wheel odometer, the second is an inertial measurement unit (IMU), the third is a global positioning system (GPS) unit, and the fourth is a real-time kinematic (RTK) positioning unit. The first to fourth positioning information are, respectively, the positioning information of the robot in the global static map acquired through the first to fourth positioning units installed inside the robot.
In this embodiment, determining the robot's positioning information in the global static map with at least one of a wheel odometer, an IMU, a GPS unit, or an RTK unit improves the accuracy of the detected positioning information, which in turn helps improve the detection accuracy of dynamic obstacles.
Step 106, updating the initial probability map based on the map mask to obtain a target probability map; the target probability map characterizes probability values of each grid occupied by an obstacle in the target area.
The target probability map is the updated initial probability map, in which the probability value of each grid being occupied by an obstacle depends on whether the corresponding pixel value in the map mask is the first value or the second value.
Illustratively, the robot updates the initial probability map based on the map mask, resulting in the target probability map.
And step 108, clustering the pixel points in the map mask to obtain candidate barriers.
The candidate obstacles comprise at least one obstacle, each of which may be either static or dynamic; at this stage the candidates are not yet labeled as static or dynamic.
Illustratively, the robot connects image regions composed of adjacent pixel points with the same pixel value in the map mask to obtain at least one connected domain, each connected domain corresponding to one candidate obstacle.
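The disclosure does not prescribe a specific clustering algorithm; a sketch using connected-component labeling, one possible implementation of the connected-domain step, follows:

    import numpy as np
    from scipy import ndimage

    def cluster_mask(mask):
        """Group adjacent mask pixels with value 1 into candidate obstacles,
        one connected domain per candidate."""
        labels, num = ndimage.label(mask)  # 4-connectivity by default
        candidates = []
        for i in range(1, num + 1):
            ys, xs = np.nonzero(labels == i)
            candidates.append(np.column_stack([ys, xs]))  # (row, col) pixels
        return candidates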
Step 110, filtering the candidate obstacle based on the target probability map to obtain a dynamic obstacle.
Since the point cloud data includes position information of the static obstacle and the dynamic obstacle in the target area, filtering processing is required for candidate obstacles determined based on the point cloud data, and the static obstacle and the dynamic obstacle are distinguished to obtain the dynamic obstacle.
Illustratively, the robot performs filtering processing on the candidate obstacle based on the target probability map to obtain a dynamic obstacle.
In the above dynamic obstacle detection method, the robot obtains the initial probability map corresponding to the target area and the point cloud data produced by the perception sensor detecting the target area; the point cloud data accurately represents the position of obstacles in the target area. The point cloud data is converted into a map mask in which the pixel value of each pixel point indicates whether a point cloud projection exists, the initial probability map is updated based on the map mask to obtain a target probability map, and the pixel points in the map mask are clustered to obtain candidate obstacles. Because the target probability map characterizes the probability that each grid in the target area is occupied by an obstacle, filtering the candidate obstacles based on the target probability map separates dynamic obstacles from static ones, yielding the dynamic obstacles.
In one embodiment, updating the initial probability map based on the map mask to obtain the target probability map includes: determining the third grid corresponding to each first grid in the initial probability map and increasing its probability value by a preset value; determining the fourth grid corresponding to each second grid in the initial probability map and decreasing its probability value by the preset value; and obtaining the target probability map from the adjusted probability values.
The initial probability map comprises a third grid and a fourth grid, namely each grid in the initial probability map is the third grid or the fourth grid. The third grid is a grid corresponding to a grid in which the point cloud projection exists in the two-dimensional point cloud grid map in the initial probability map, and the fourth grid is a grid corresponding to a grid in which the point cloud projection does not exist in the two-dimensional point cloud grid map in the initial probability map. The preset value is preset according to an empirical value, which is not limited in the embodiment of the present application.
The result of the probability value adjustment is a probability value obtained by adding a preset value to the probability value of the third grid and a probability value obtained by subtracting a preset value from the probability value of the fourth grid. The target probability map is a grid map after the initial probability map is updated.
For example, for the map mask corresponding to the current moment, the robot determines the third grid corresponding to each first grid in the initial probability map and increases its probability value by a preset value; determines the fourth grid corresponding to each second grid and decreases its probability value by the preset value; and obtains the target probability map from the increased probability values of the third grids and the decreased probability values of the fourth grids.
For example, with a preset value of 1, as shown in FIGS. 2A-2D: FIG. 2A is the map mask corresponding to the moment before the current moment, whose pixel values (one per grid of the two-dimensional point cloud grid map) are 1, 1, 0; FIG. 2B is the target probability map at the previous moment, with grid probability values 128, 129, 130, 128, 129, 130. FIG. 2C is the map mask corresponding to the current moment, with pixel values 1, 1, 1, 0, 0, 0; FIG. 2D is the target probability map at the current moment, with grid probability values 129, 130, 131, 127, 128, 129.
In this embodiment, by determining the third grids corresponding to the first grids and the fourth grids corresponding to the second grids in the initial probability map, increasing the probability values of the third grids by a preset value and decreasing those of the fourth grids by the preset value, the initial probability map is updated into the target probability map whenever at least one pixel value of the map mask corresponding to the two-dimensional point cloud grid map changes.
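A sketch of the update rule under the conditions of the example above (preset value 1, initial value 128); clamping to the 8-bit range [0, 255] is an assumption not stated in the disclosure:

    import numpy as np

    def update_probability_map(prob_map, mask, step=1, lo=0, hi=255):
        """Increase the probability of third grids (mask pixel = 1) and decay the
        probability of fourth grids (mask pixel = 0) by the preset value."""
        prob = prob_map.astype(np.int32)
        prob[mask == 1] += step  # point cloud projection present: raise
        prob[mask == 0] -= step  # no projection: decay
        return np.clip(prob, lo, hi).astype(prob_map.dtype)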
In one embodiment, as shown in FIG. 3, filtering the candidate obstacles based on the target probability map to obtain dynamic obstacles includes: performing obstacle-shape filtering on the candidate obstacles to obtain a first filtering result; performing probability filtering on the first filtering result based on the target probability map to obtain a second filtering result; performing occlusion filtering on the second filtering result to obtain a third filtering result; and performing static-obstacle filtering on the third filtering result based on the global static map corresponding to the target area to obtain the dynamic obstacles.
The first filtering result is the set of candidate obstacles after obstacle-shape filtering; the second, after obstacle-shape filtering and probability filtering in sequence; the third, after obstacle-shape filtering, probability filtering, and occlusion filtering in sequence; and the fourth, after obstacle-shape filtering, probability filtering, occlusion filtering, and static-obstacle filtering in sequence.
Illustratively, the robot performs obstacle-shape filtering on the candidate obstacles to obtain a first filtering result; performs probability filtering on the first filtering result based on the target probability map to obtain a second filtering result; performs occlusion filtering on the second filtering result to obtain a third filtering result; and performs static-obstacle filtering on the third filtering result based on the global static map to obtain the dynamic obstacles.
Since the shape of a dynamic obstacle generally differs somewhat from that of a static obstacle, the candidate obstacles can be filtered from the viewpoint of obstacle shape. In one embodiment, performing shape filtering on the candidate obstacles to obtain a first filtering result includes filtering by at least one of the aspect ratio of the candidate obstacle, the area of the candidate obstacle, or the number of pixel points of the candidate obstacle.
Wherein the aspect ratio of the candidate obstacle is the ratio of the length and the width of the candidate obstacle. The number of pixels of the candidate obstacle is the number of pixels included in the candidate obstacle.
Illustratively, for a current candidate obstacle among the candidate obstacles, the robot retains it in the first filtering result if it satisfies the shape filtering condition: its aspect ratio lies within a first preset range, its area within a second preset range, and its number of pixel points within a third preset range. The first, second, and third preset ranges are preset according to detection requirements, which the embodiments of the present application do not limit.
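A sketch of the shape filter; the three preset ranges below are illustrative placeholders, as the disclosure leaves them to the detection requirements:

    import numpy as np

    def shape_filter(candidates, ar_range=(1.0, 4.0),
                     area_range=(2, 400), n_range=(3, 300)):
        """Keep candidates whose aspect ratio, bounding-box area (in cells), and
        pixel count all fall within their preset ranges. Each candidate is an
        (N, 2) array of (row, col) pixels."""
        kept = []
        for cand in candidates:
            h = cand[:, 0].max() - cand[:, 0].min() + 1
            w = cand[:, 1].max() - cand[:, 1].min() + 1
            aspect = max(h, w) / min(h, w)  # orientation-independent ratio
            if (ar_range[0] <= aspect <= ar_range[1]
                    and area_range[0] <= h * w <= area_range[1]
                    and n_range[0] <= len(cand) <= n_range[1]):
                kept.append(cand)
        return kept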
In any angular direction within its field of view, the robot can only see the nearest obstacle and not those behind it. Therefore, when multiple obstacles lie in front of the robot and at least one of them is occluded by another, the robot only needs to avoid the foremost obstacle to also avoid the obstacles occluded behind it.
In one embodiment, performing occlusion filtering on the second filtering result to obtain a third filtering result includes, when occluded candidate obstacles exist in the second filtering result, removing them from the second filtering result to obtain the third filtering result.
Illustratively, when occluded candidate obstacles exist in the second filtering result, the robot removes them from the second filtering result to obtain the third filtering result.
In this embodiment, sequentially performing obstacle-shape filtering, probability filtering, occlusion filtering, and static-obstacle filtering on the candidate obstacles achieves the aim of filtering the candidate obstacles based on the target probability map to obtain the dynamic obstacles.
In one embodiment, performing probability filtering on the first filtering result based on the target probability map to obtain a second filtering result includes: for a current candidate obstacle in the first filtering result, determining the probability value, in the target probability map, of the grid corresponding to each of its pixel points; determining a comprehensive probability value of the current candidate obstacle from the determined probability values; and screening the candidate obstacles by their comprehensive probability values to obtain the second filtering result.
The current candidate obstacle is any candidate obstacle in the first filtering result. Because a dynamic obstacle moves through the target area, the point cloud projections corresponding to it that are acquired at successive moments within the preset time period tend to fall on first grids at different positions of the two-dimensional point cloud grid map, and rarely on the first grid at the same position; the preset-value increments are therefore spread across third grids at different positions in the target probability map. Consequently, after the preset time period has elapsed, the probability value of each third grid corresponding to the dynamic obstacle in the target probability map remains small.
In contrast, a static obstacle stays at the same position in the target area throughout the preset time period, so the point cloud projections corresponding to it usually fall on the first grid at the same position of the two-dimensional point cloud grid map, and the probability value of the corresponding third grid keeps increasing; after the preset time period it is the superposition of many preset-value increments. The probability value of a third grid corresponding to a static obstacle is therefore larger than that of one corresponding to a dynamic obstacle, so dynamic obstacles can be distinguished from static ones by the grid probability values in the target probability map.
Illustratively, for the current candidate obstacle in the first filtering result, the robot determines the probability values, in the target probability map, of the grids corresponding to its pixel points; computes the mean of the determined probability values as the comprehensive probability value of the current candidate obstacle; retains the current candidate obstacle if its comprehensive probability value is smaller than a preset probability value; and removes it otherwise, thereby obtaining the second filtering result. The preset probability value is preset according to detection requirements, which the embodiments of the present application do not limit.
In this embodiment, the probability value of the grid corresponding to each pixel point of each candidate obstacle in the target probability map is used to determine the comprehensive probability value of each candidate obstacle, and the second filtering result is obtained by screening according to the magnitude of the comprehensive probability value, so that the purpose of performing probability filtering processing on the first filtering result based on the target probability map to obtain the second filtering result can be achieved.
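A sketch of this probability filter; the preset probability value of 160 is an illustrative placeholder:

    import numpy as np

    def probability_filter(candidates, prob_map, preset_prob=160):
        """Keep candidates whose mean grid probability (their comprehensive
        probability value) is below the preset value; static obstacles accumulate
        high probabilities and are rejected."""
        kept = []
        for cand in candidates:  # cand: (N, 2) array of (row, col) pixels
            comprehensive = prob_map[cand[:, 0], cand[:, 1]].mean()
            if comprehensive < preset_prob:
                kept.append(cand)
        return kept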
Because the global static map contains the position information of the static obstacles, each candidate obstacle in the third filtering result can be matched against the global static map: when the proportion of a candidate obstacle's pixel points that overlap static-obstacle positions in the global static map exceeds a preset proportion, the candidate is very likely a static obstacle. Dynamic and static obstacles can therefore be distinguished by this overlap proportion.
In one embodiment, performing static-obstacle filtering on the third filtering result based on the global static map corresponding to the target area to obtain the dynamic obstacles includes: calculating the proportion of each candidate obstacle's pixel points that appear in the global static map; and removing the candidate obstacles in the third filtering result whose proportion exceeds the preset proportion, thereby obtaining the dynamic obstacles.
The preset proportion is preset according to the detection requirement, and the embodiment of the application is not limited to this.
Illustratively, for the current candidate obstacle in the third filtering result, the robot determines the grid position, in the target probability map, of the grid corresponding to each of its pixel points; determines the corresponding map positions in the global static map based on the correspondence between grids in the target probability map and positions in the global static map; judges whether each map position is occupied by a static obstacle and computes the proportion of map positions that are, which is the proportion of the current candidate obstacle's pixel points appearing in the global static map; and retains the current candidate obstacle if this proportion is smaller than the preset proportion, removing it otherwise, thereby obtaining the dynamic obstacles.
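A sketch of the static-obstacle filter, assuming the global static map has been rasterized to a boolean occupancy array aligned with the target probability map; the preset proportion of 0.5 is illustrative:

    import numpy as np

    def static_obstacle_filter(candidates, static_map, preset_ratio=0.5):
        """Remove candidates whose pixels overlap static-map obstacles by more
        than the preset proportion."""
        kept = []
        for cand in candidates:  # cand: (N, 2) array of (row, col) pixels
            overlap = static_map[cand[:, 0], cand[:, 1]].mean()
            if overlap < preset_ratio:  # mostly off the static map: keep
                kept.append(cand)
        return kept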
In one embodiment, the robot may perform the filtering process directly based on the obtained global static map, or may perform the filtering process based on the updated global static map, which is not limited in the embodiment of the present application.
In some embodiments, the robot may update the pre-acquired global static map according to the target probability map; the target probability map may be updated in real time, while the global static map is updated less frequently. Each grid of the global static map stores a value representing the confidence that the grid corresponds to a static obstacle; a confidence above a certain threshold indicates a static obstacle. Each grid of the current target probability map is traversed, and a grid whose probability value exceeds a set threshold is considered to belong to a static obstacle. When the grids belonging to static obstacles in the target probability map differ from those in the global static map by more than a certain percentage, an update of the global static map is triggered; one specific way to update is to refresh the values at the corresponding grids of the global static map from the probability values of the target probability map.
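A sketch of this refresh logic; the thresholds and blending factor are illustrative placeholders, as the disclosure leaves the specific update rule open:

    import numpy as np

    def maybe_update_static_map(static_conf, prob_map, threshold=200,
                                drift_trigger=0.05, blend=0.1):
        """Refresh static-obstacle confidences from the target probability map
        when the set of grids classified as static has drifted enough."""
        static_now = prob_map > threshold       # grids the probability map calls static
        static_before = static_conf > threshold
        drift = np.mean(static_now != static_before)
        if drift > drift_trigger:
            # Blend probability values into the stored confidences.
            updated = (1 - blend) * static_conf + blend * prob_map
            return updated.astype(static_conf.dtype)
        return static_conf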
In this embodiment, because the global static map contains the position information of static obstacles in the target area, each grid in the target probability map corresponds one-to-one with a position in the global static map, and the target probability map characterizes the probability that each grid in the target area is occupied by an obstacle, calculating the proportion of each candidate obstacle's pixel points appearing in the global static map and removing the candidates whose proportion exceeds the preset proportion achieves static-obstacle filtering of the third filtering result based on the global static map, yielding the dynamic obstacles.
In one embodiment, the application may further update the global static map according to the detected dynamic obstacles, specifically including: determining the grid positions, in the target probability map, of the grids corresponding to the pixel points of each dynamic obstacle; determining the corresponding map positions in the global static map based on the correspondence between grids in the target probability map and positions in the global static map; and judging whether each such map position is the position of a dynamic obstacle, deleting it from the global static map if so and retaining it if not, thereby obtaining the updated global static map.
In this embodiment, the detected dynamic obstacle is matched with the global static map, so that the purpose of updating the global static map according to the detected dynamic obstacle can be achieved, and the updated global static map can more accurately reflect the position information of the static obstacle in the target area, so that the purpose of improving the accuracy of dynamic obstacle detection is achieved.
With the dynamic obstacle detection method provided by this embodiment, the map mask is obtained from the point cloud data, the map mask is clustered to obtain candidate obstacles, and the dynamic obstacles are obtained by sequentially performing obstacle-shape filtering, probability filtering, occlusion filtering, and static-obstacle filtering on the candidates. Point cloud data is noisy: when a depth camera, single-line lidar, or multi-line lidar measures the position of an obstacle, an exact value is difficult to obtain. If the true distance between an obstacle and the robot is 1.30 meters, the measured distance fluctuates slightly from reading to reading, for example 1.41 meters on the previous reading and perhaps 1.35 meters on the current one. Therefore, to further improve the detection accuracy of dynamic obstacles and detect those that move stably, the application can also detect dynamic obstacles more reliably based on their trajectories.
In one embodiment, after filtering the candidate obstacles to obtain the dynamic obstacles, the method further includes: obtaining the actual position of each dynamic obstacle at the current moment, and obtaining the estimated position at the current moment of each historical target in a historical target list, the historical target list being the set of dynamic obstacles detected over a historical time period; updating the historical target list based on the actual positions and the estimated positions; and screening out, from the updated historical target list, a target dynamic obstacle satisfying a stable-motion condition for tracking.
Wherein the actual position of a dynamic obstacle is its position in the global static map. The historical target list is the set of dynamic obstacles detected over a historical time period, each dynamic obstacle detected at a historical moment being treated as one historical target. The historical target list records the actual position and the actual speed of each historical target.
The target dynamic obstacle is the dynamic obstacle to be tracked, determined from the detected dynamic obstacles and the historical target list. Because the historical targets serve as an additional screening condition on top of the per-frame detection, dynamic obstacles with an unstable motion state, such as an animal that appears only briefly in the target area, can be filtered out, while dynamic obstacles moving stably in the target area are detected and tracked for obstacle avoidance. The target dynamic obstacle therefore has a higher detection accuracy than the raw dynamic obstacle.
Illustratively, the robot determines, according to the correspondence between the pixel points of the map mask and the positions of the global static map, the position in the global static map of each pixel point of each dynamic obstacle at the current moment, thereby obtaining the actual position of each dynamic obstacle at the current moment; obtains the actual position of each historical target in the target list at the moment immediately preceding the current moment; performs filtering processing on these actual positions to obtain the estimated position of each historical target at the current moment; updates the historical target list based on the actual positions of the dynamic obstacles at the current moment and the estimated positions of the historical targets; and screens out the target dynamic obstacles meeting the stable motion condition from the updated historical target list for tracking.
In one embodiment, filtering the actual position of each historical target at the previous moment to obtain its estimated position at the current moment includes: obtaining the actual speed of each historical target in the target list at the moment preceding the current moment, and the time difference between the current moment and that previous moment; and calculating the estimated position of each historical target at the current moment from its actual position and actual speed at the previous moment and the time difference.
Illustratively, the robot obtains the actual speed of each historical target in the target list at the moment preceding the current moment, and the time difference between the current moment and that previous moment; and calculates the estimated position of each historical target at the current moment from its actual position and actual speed at the previous moment and the time difference. The specific calculation formula is as follows:
X1e = X0 + Vx0*Δt, Y1e = Y0 + Vy0*Δt, Δt = t1 - t0.
In the above formula, t1 is the current moment, t0 is the previous moment, Δt is the time difference between the current moment and the previous moment, X0 and Y0 are the actual positions of the historical target at the previous moment in the horizontal and vertical directions, Vx0 and Vy0 are its actual speeds at the previous moment in the horizontal and vertical directions, and X1e and Y1e are the estimated positions of the historical target at the current moment in the horizontal and vertical directions.
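A minimal sketch of this constant-velocity estimation (illustrative; the function and variable names are not from the patent):

```python
def predict_position(x0: float, y0: float,
                     vx0: float, vy0: float,
                     t0: float, t1: float) -> tuple[float, float]:
    """Estimate a historical target's position at the current moment t1
    from its actual position (x0, y0) and actual speed (vx0, vy0) at the
    previous moment t0, assuming constant velocity over dt = t1 - t0."""
    dt = t1 - t0
    return x0 + vx0 * dt, y0 + vy0 * dt
```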
In this embodiment, the actual position of each dynamic obstacle at the current moment and the estimated position of each historical target in the historical target list at the current moment are obtained, and the historical target list is updated based on these two kinds of positions, so that target dynamic obstacles meeting the stable motion condition can be screened out of the updated list for tracking. Screening dynamic obstacles by their motion tracks in this way yields stably moving target dynamic obstacles, further improving detection accuracy and enabling the tracking of stably moving targets.
In one embodiment, updating the historical target list based on the actual positions and the estimated positions includes: for a current dynamic obstacle among the dynamic obstacles, matching the current dynamic obstacle with each historical target according to the actual position of the current dynamic obstacle and the estimated position of each historical target; if the matching succeeds, taking the actual position of the current dynamic obstacle as the actual position, at the current moment, of the matched historical target; if the matching fails, adding the current dynamic obstacle to the historical target list as a newly added target; and obtaining the updated historical target list based on at least one of the actual position of the matched historical target at the current moment or the newly added target.
Wherein a newly added target is a target different from every historical target already in the historical target list; the updated historical target list includes the historical targets and the newly added targets.
For example, for a current dynamic obstacle among the dynamic obstacles, the robot calculates the Euclidean distance between the actual position of the current dynamic obstacle and the estimated position of each historical target, and determines the minimum of these distances. If the minimum Euclidean distance is smaller than the preset distance, the current dynamic obstacle is determined to match the historical target corresponding to that minimum distance; otherwise the current dynamic obstacle fails to match any historical target. On a successful match, the estimated position of the matched historical target at the current moment is replaced with the actual position of the current dynamic obstacle, which becomes the matched target's actual position at the current moment. On a failed match, the current dynamic obstacle is added to the historical target list as a newly added target, its actual position becoming the newly added target's actual position at the current moment. The updated historical target list is then obtained based on at least one of the actual position of the matched historical target at the current moment or the newly added target.
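A minimal nearest-neighbour association sketch following the description above (the distance threshold and all names are assumptions):

```python
import math

def match_obstacle(actual_pos: tuple[float, float],
                   estimated_positions: list[tuple[float, float]],
                   max_dist: float):
    """Return the index of the matched historical target, or None if even
    the closest estimated position is farther than the preset distance.

    actual_pos: (x, y) of the current dynamic obstacle.
    estimated_positions: one (x, y) estimate per historical target.
    """
    if not estimated_positions:
        return None
    dists = [math.dist(actual_pos, p) for p in estimated_positions]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best if dists[best] < max_dist else None
```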
In this embodiment, the dynamic obstacles are matched with the historical targets according to the actual positions of the dynamic obstacles at the current moment and the estimated positions of the historical targets at the current moment, and the historical target list is updated with the matching results, thereby achieving the purpose of updating the historical target list from the actual and estimated positions.
In one embodiment, screening out the target dynamic obstacles meeting the stable motion condition from the updated historical target list for tracking includes: calculating the actual speed and the actual acceleration of each intermediate target at the current moment according to the actual position, at the current moment, of each intermediate target in the updated historical target list; and screening the target dynamic obstacles from the updated historical target list for tracking when at least one of the matching success count, the matching failure count, the actual speed and the actual acceleration of each intermediate target satisfies a preset condition.
Wherein an intermediate target is any target in the updated historical target list. The stable motion condition includes at least one of conditions on the matching success count, the matching failure count, the actual speed and the actual acceleration.
The robot calculates the actual speed and the actual acceleration of each intermediate target at the current moment from the actual position of each intermediate target in the updated historical target list at the current moment and the time difference between the current moment and the previous moment, and records the number of successful and failed matches for each intermediate target. After the dynamic obstacles have been tracked for a target time period, the robot has, for each target moment in that period, the actual positions of the dynamic obstacles and the estimated positions of the historical targets, as well as the matching success count, matching failure count, actual speed and actual acceleration of each intermediate target at that moment; when at least one of these quantities satisfies the preset condition, the target dynamic obstacles are screened from the updated historical target list for tracking.
In one embodiment, screening the target dynamic obstacles from the updated historical target list for tracking, when at least one of the matching success count, the matching failure count, the actual speed at the target moment and the actual acceleration of each intermediate target satisfies the preset condition, includes: determining, as target dynamic obstacles to be tracked, at least one of the intermediate targets whose matching success count is greater than a first count or the intermediate targets whose matching failure count is smaller than a second count.
The first count and the second count are preset according to the detection requirements, which is not limited in the embodiments of the present application. Determining intermediate targets whose matching success count is greater than the first count as target dynamic obstacles serves to avoid false detection. For example, if the first count is 2, an intermediate target is considered a target dynamic obstacle only after at least 3 successful matches; if a target that has matched successfully only once or twice were already treated as a target dynamic obstacle, false detections would easily occur.
Determining intermediate targets whose matching failure count is smaller than the second count as target dynamic obstacles serves to avoid missed detection. For example, if the second count is 3, an intermediate target with at most 2 failed matches is still considered a target dynamic obstacle.
In one embodiment, screening the target dynamic obstacles from the updated historical target list for tracking, when at least one of the matching success count, the matching failure count, the actual speed at the target moment and the actual acceleration of each intermediate target satisfies the preset condition, includes: determining, as target dynamic obstacles to be tracked, at least one of the intermediate targets whose actual speed at the target moment lies within a preset speed range or the intermediate targets whose actual acceleration at the target moment lies within a preset acceleration range.
The preset speed range and the preset acceleration range are preset according to the detection requirements, which is not limited in the embodiments of the present application. By taking as target dynamic obstacles the intermediate targets whose actual speed at the target moment lies within the preset speed range, and those whose actual acceleration at the target moment lies within the preset acceleration range, stably moving target dynamic obstacles can be screened out of the intermediate targets. A combined screening sketch follows.
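This sketch combines all four conditions (the embodiments also allow screening on any subset); every threshold value here is illustrative, not taken from the patent:

```python
def is_stable(n_success: int, n_fail: int, speed: float, accel: float,
              t1: int = 2, t2: int = 3,
              speed_range: tuple[float, float] = (0.1, 3.0),
              accel_range: tuple[float, float] = (0.0, 2.0)) -> bool:
    """Stable-motion screening: an intermediate target qualifies as a
    target dynamic obstacle when its match statistics and latest
    kinematics all fall within the preset ranges (assumed values)."""
    return (n_success > t1 and n_fail < t2
            and speed_range[0] <= speed <= speed_range[1]
            and accel_range[0] <= accel <= accel_range[1])
```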
In one embodiment, the method further includes determining the motion track of the target dynamic obstacle in a subsequent time period according to its actual speed and actual acceleration at the current moment, so that the robot can avoid that track while running in the subsequent period, achieving the purpose of obstacle avoidance. A minimal rollout sketch follows.
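A minimal constant-acceleration rollout sketch for this trajectory prediction (the step size and horizon are assumed values):

```python
def predict_trajectory(x: float, y: float, vx: float, vy: float,
                       ax: float, ay: float,
                       horizon: float = 1.0, step: float = 0.1):
    """Predict the target dynamic obstacle's track over the next `horizon`
    seconds with a constant-acceleration motion model."""
    n = int(round(horizon / step))
    return [(x + vx * t + 0.5 * ax * t * t,
             y + vy * t + 0.5 * ay * t * t)
            for t in (i * step for i in range(1, n + 1))]
```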
In one embodiment, as illustrated in fig. 4, a dynamic obstacle detection method is provided, including the steps of:
Step 1, acquiring input data, including a global static map of the robot work area (i.e., the target area in the above embodiments), point cloud data, and positioning information (i.e., the pose of the robot, including its position and heading angle).
Wherein the global static map is pre-built and stored on the robot, and the positions of static obstacles (e.g., walls, tables, doors) are identified in it. The point cloud data can be acquired by an RGBD camera, a lidar or another perception sensor mounted on the robot chassis. The pose of the robot is obtained through a positioning module arranged in the robot; indoor robots generally use a wheel odometer and an IMU, while outdoor robots commonly use GPS, RTK and the like.
Step 2, updating the probability map.
In the area of the robot's front FOV (field of view): for a position pos(x, y) on the map mask that carries a point cloud, the probability value of the corresponding grid in the probability map is increased, giving p + Δp (where p is the historical probability value of that grid in the probability map and Δp is the probability increment, typically taken as an empirical value); for a position pos(x, y) without a point cloud on the map mask, the probability value of the corresponding grid is decreased, giving p - Δp. It will be appreciated that the probability value of the grid corresponding to each pixel of the map mask is thus an accumulation over the historical frames and the current frame.
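A sketch of this per-frame probability update, assuming a boolean FOV mask and a clamped probability range (both assumptions; dp is the empirical increment Δp mentioned above):

```python
import numpy as np

def update_probability_map(prob_map: np.ndarray,
                           mask: np.ndarray,
                           fov: np.ndarray,
                           dp: float = 0.1,
                           p_min: float = 0.0,
                           p_max: float = 1.0) -> np.ndarray:
    """Accumulate occupancy evidence: +dp where the map mask has a
    point-cloud projection, -dp where it has none, restricted to the
    robot's front FOV (boolean array). dp and the clamping range are
    assumed empirical values."""
    prob_map = prob_map.copy()
    hit = fov & (mask > 0)
    miss = fov & (mask == 0)
    prob_map[hit] += dp
    prob_map[miss] -= dp
    return np.clip(prob_map, p_min, p_max)
```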
Step 3, clustering and filtering.
3.1, clustering: performing connected-domain clustering on the pixel points in the map mask to obtain a plurality of candidate obstacles.
3.2, connected-domain shape filtering: since the aspect ratio, the area and the number of pixel points corresponding to the point cloud data of a dynamic obstacle should fall within preset ranges, the candidate obstacles are shape-filtered according to their aspect ratio, area and pixel count, and candidates outside the preset ranges are removed.
3.3, probability filtering: on the probability map, the mean of the probability values of the grids corresponding to all pixel points of a dynamic obstacle (i.e., the integrated probability value in the above embodiments) should be smaller than the preset probability value; otherwise the candidate is likely a static obstacle. Candidates whose integrated probability value is smaller than the preset probability value are kept, and the others are removed.
3.4, shielding filtering: in any angular direction of the robot's view, the robot can only see the nearest obstacle and cannot see what lies behind it. Therefore, in each angular direction of the robot's view, only the obstacle nearest to the robot is kept, and objects behind that nearest obstacle are removed (a sketch follows step 3.5).
3.5, static map filtering: combining the global static map, if a certain proportion of a candidate obstacle's points fall on static obstacles of the global static map, the candidate is considered a false detection and is removed.
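The shielding filtering of step 3.4 might be sketched as follows; the angular bin count and the use of candidate centre points are assumptions:

```python
import math

def shielding_filter(centers: list[tuple[float, float]],
                     robot_pos: tuple[float, float],
                     angle_bins: int = 360) -> list[int]:
    """Keep, in each angular direction around the robot, only the nearest
    candidate obstacle; anything behind it is occluded and removed.

    centers: candidate-obstacle centre points (x, y) in the map frame.
    Returns the indices of the candidates that survive the filter."""
    nearest: dict[int, tuple[float, int]] = {}
    for i, (x, y) in enumerate(centers):
        dx, dy = x - robot_pos[0], y - robot_pos[1]
        b = int((math.atan2(dy, dx) + math.pi) / (2 * math.pi) * angle_bins) % angle_bins
        d = math.hypot(dx, dy)
        if b not in nearest or d < nearest[b][0]:
            nearest[b] = (d, i)          # keep only the closest per bin
    return sorted(i for _, i in nearest.values())
```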
And 4, tracking.
4.1, associated targets
1) Determining an object to be tracked (current object) at the current moment, and acquiring the actual position of the current object at the current moment;
2) Matching the current object with each historical object in the historical object list, and specifically comprising the following steps:
21 Acquiring the actual position, the actual speed, the time difference between the two moments and other state quantities of each historical target at the previous moment; 22 Filtering the state quantity of each historical target at the previous moment to obtain the estimated position of each historical target at the current moment; 23 Respectively calculating Euclidean distance between the actual position of the current object at the current moment and the estimated position of each historical target; 24 Determining a minimum Euclidean distance according to the calculated multiple Euclidean distances corresponding to the current object; and under the condition that the minimum Euclidean distance is smaller than the preset distance, determining that the current object is successfully matched with the historical target corresponding to the minimum Euclidean distance.
3) If matching, updating the historical target list:
taking the actual position of the current object at the current moment as the actual position of the current moment of the historical target corresponding to the minimum Euclidean distance; if the current object is not matched, the current object is taken as a new target to be inserted into a historical target list, namely: the current object is detected only by the current frame, and is therefore added to the history object list.
4) And recording the attribute of each target (tracking target) in the updated historical target list, wherein the attribute comprises the matching success times, the matching failure times and the track.
4.2 State estimation
In step 22), the filtering processing may use a Kalman filter (KF), an extended Kalman filter (EKF), an unscented Kalman filter (UKF) or a particle filter to estimate the motion state of each historical target at the current moment, including its estimated position, estimated speed and estimated acceleration.
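A minimal constant-velocity Kalman filter cycle as one possible realization of this state estimation (the state layout and noise values are assumptions, not from the patent):

```python
import numpy as np

def kf_predict_update(x: np.ndarray, P: np.ndarray, z,
                      dt: float, q: float = 0.01, r: float = 0.05):
    """One Kalman-filter cycle for a constant-velocity state
    x = [px, py, vx, vy]; z = measured (px, py) of the matched obstacle."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)                    # process noise (assumed)
    R = r * np.eye(2)                    # measurement noise (assumed)
    # predict: propagate state and covariance one step forward
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the associated measurement
    y = np.asarray(z, dtype=float) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```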
4.3 Condition constraint
The tracking targets satisfying the following constraint conditions are output, giving a stable dynamic obstacle detection result (the target dynamic obstacles):
a. the matching success count is greater than T1 and the matching failure count is less than T2 (where T1 and T2 are positive integers);
b. the movement speed and direction of the tracking target are calculated from its track; the movement speed and acceleration over the last few frames must lie within certain ranges, and their variations must also lie within certain ranges.
The dynamic obstacle detection result includes the coordinates of the static obstacles and a dynamic obstacle list, where each dynamic obstacle is represented by a structure containing its point cloud data, the probability values of its corresponding grids in the probability map, the current timestamp, and the pose, movement speed and acceleration of the dynamic obstacle.
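A sketch of the per-obstacle structure described above (the field names are illustrative, not defined by the patent):

```python
from dataclasses import dataclass, field

@dataclass
class DynamicObstacle:
    """One entry of the dynamic obstacle list in the detection result."""
    points: list            # point cloud data of the obstacle
    grid_probs: list        # probability values of its grids in the probability map
    timestamp: float        # current timestamp
    pose: tuple             # (x, y, heading)
    velocity: tuple         # (vx, vy) movement speed
    acceleration: tuple     # (ax, ay)
    track: list = field(default_factory=list)  # historical positions
```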
4.4, predicting the motion track of the target over a future period of time (for example, the next second) according to its speed and acceleration.
In this embodiment, dynamic obstacles in front of the robot can be detected and a highly robust, highly accurate dynamic obstacle result is output, including the pose, movement speed, acceleration and historical track of each dynamic obstacle; the future track of each dynamic obstacle is also predicted. The robot can therefore avoid dynamic obstacles in advance, plan its route better and adjust its running state, so that it works safely and efficiently.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include several sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily executed sequentially but may be executed in turn or alternately with at least part of the other steps or sub-steps.
Based on the same inventive concept, an embodiment of the application further provides a dynamic obstacle detection device for implementing the dynamic obstacle detection method mentioned above. The solution provided by the device is implemented similarly to that described in the method above, so for the specific limitations of the one or more dynamic obstacle detection device embodiments provided below, reference may be made to the limitations of the dynamic obstacle detection method above; details are not repeated here.
In one embodiment, as shown in fig. 5, a dynamic obstacle detection device 500 is provided, and is applied to a robot, where a sensor for detecting environmental information is mounted on the robot, and the dynamic obstacle detection device 500 includes: an acquisition module 501, a conversion module 502, an update module 503, a clustering module 504, and a filtering module 505, wherein:
the obtaining module 501 is configured to obtain an initial probability map corresponding to a target area, and obtain point cloud data obtained by detecting the target area by using a sensor.
The conversion module 502 is configured to convert the point cloud data into a map mask, and a pixel value of each pixel in the map mask characterizes whether a point cloud projection exists.
An updating module 503, configured to update the initial probability map based on the map mask to obtain a target probability map; the target probability map characterizes probability values of each grid occupied by an obstacle in the target area.
And the clustering module 504 is used for clustering the pixel points in the map mask to obtain candidate barriers.
The filtering module 505 is configured to perform filtering processing on the candidate obstacle based on the target probability map, so as to obtain a dynamic obstacle.
In one embodiment, the conversion module 502 is further configured to acquire a global static map corresponding to the target area; perform grid division on a preset horizontal area centered on the robot to construct a preset grid map; convert the point cloud data from the robot coordinate system to the map coordinate system of the global static map, and project the converted point cloud data onto the preset grid map to obtain a two-dimensional point cloud grid map; and set the pixel value of each first grid with a point cloud projection in the two-dimensional point cloud grid map to a first value, and the pixel value of each second grid without a point cloud projection to a second value, thereby obtaining the map mask.
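A sketch combining the map-mask construction described here with the connected-domain clustering performed by the clustering module; the grid centring, the value conventions and the scipy usage are assumptions:

```python
import numpy as np
from scipy import ndimage

def build_map_mask(points_robot: np.ndarray, T_map_robot: np.ndarray,
                   grid_size: int, resolution: float):
    """Project point clouds into a robot-centred grid to build the map mask,
    then cluster its pixels into candidate obstacles.

    points_robot: (N, 3) points in the robot coordinate system.
    T_map_robot:  4x4 homogeneous transform from robot to map coordinates.
    Returns the binary mask (1 = point cloud projected, 0 = none) and the
    connected-component labels used as candidate obstacles."""
    pts = np.hstack([points_robot, np.ones((len(points_robot), 1))])
    pts_map = (T_map_robot @ pts.T).T[:, :2]           # map frame, drop z
    robot_xy = T_map_robot[:2, 3]                      # robot position in map frame
    ij = np.floor((pts_map - robot_xy) / resolution).astype(int) + grid_size // 2
    mask = np.zeros((grid_size, grid_size), dtype=np.uint8)
    ok = (ij >= 0).all(axis=1) & (ij < grid_size).all(axis=1)
    mask[ij[ok, 1], ij[ok, 0]] = 1                     # first value = projection
    labels, n_candidates = ndimage.label(mask)         # connected-domain clustering
    return mask, labels, n_candidates
```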
In one embodiment, the updating module 503 is further configured to determine a third grid corresponding to the first grid in the initial probability map, and increase the probability value of the third grid by a preset value; determining a fourth grid corresponding to the second grid in the initial probability map, and attenuating the probability value of the fourth grid by a preset value; and obtaining a target probability map based on the result after the probability value adjustment.
In one embodiment, the filtering module 505 is further configured to perform an obstacle shape filtering process on the candidate obstacle to obtain a first filtering result; carrying out probability filtering processing on the first filtering result based on the target probability map to obtain a second filtering result; carrying out shielding filtering treatment on the second filtering result to obtain a third filtering result; and carrying out static obstacle filtering processing on the third filtering result based on the global static map corresponding to the target area to obtain a dynamic obstacle.
In one embodiment, the filtering module 505 is further configured to determine, for the current candidate obstacle in the first filtering result, a probability value of a grid corresponding to each pixel point in the current candidate obstacle in the target probability map; determining a comprehensive probability value of the current candidate obstacle according to the determined probability value; and screening based on the comprehensive probability value of the candidate obstacle to obtain a second filtering result.
In one embodiment, the filtering module 505 is further configured to obtain a global static map corresponding to the target area, and calculate a proportion of pixels of each candidate obstacle in the third filtering result that occur in the global static map; and removing candidate barriers with the proportion larger than a preset proportion in the third filtering result to obtain dynamic barriers.
In one embodiment, the dynamic obstacle detection device 500 further includes a tracking module, where the tracking module is configured to obtain an actual position of each dynamic obstacle at the current time, and obtain an estimated position of each historical target in the historical target list at the current time; the historical target list is a set of dynamic obstacles detected in a historical time period; updating a historical target list based on the actual position and the estimated position; and screening out the target dynamic barrier meeting the stable motion condition from the updated historical target list to track.
In one embodiment, the tracking module is further configured to match, for a current dynamic obstacle of the dynamic obstacles, the current dynamic obstacle and each historical target according to an actual position of the current dynamic obstacle and an estimated position of each historical target; under the condition that the matching is successful, taking the actual position of the current dynamic obstacle as the actual position of a historical target on the matching at the current moment; under the condition of failure in matching, adding the current dynamic barrier as a new added target to a historical target list; and obtaining an updated historical target list based on at least one of the actual position of the historical target or the newly-added target on the matching of the current moment.
In one embodiment, the tracking module is further configured to calculate an actual speed and an actual acceleration of each intermediate target at the current time according to an actual position of each intermediate target at the current time in the updated historical target list; and screening the target dynamic barrier from the updated historical target list to track under the condition that at least one of the matching success times, the matching failure times, the actual speed and the actual acceleration of each intermediate target meets the preset condition.
The respective modules in the dynamic obstacle detection device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in, or independent of, the processor of the robot in hardware form, or stored in the memory of the robot in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a robot is provided, the internal structure of which may be as shown in fig. 6. The robot includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the robot is adapted to provide computing and control capabilities. The memory of the robot includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the robot is used for exchanging information between the processor and the external device. The communication interface of the robot is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a dynamic obstacle detection method. The display unit of the robot is used for forming a visual picture and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the robot can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on a robot shell, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the robots to which the present application is applied, and that a particular robot may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a robot is provided, including a memory having a computer program stored therein and a processor, which when executing the computer program, implements the steps of the method embodiments described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium, which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above examples represent only a few embodiments of the present application; their descriptions are relatively specific and detailed, but should not therefore be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A dynamic obstacle detection method, characterized in that the method is applied to a robot on which a perception sensor for detecting environmental information is mounted;
the method comprises the following steps:
acquiring an initial probability map corresponding to a target area, and acquiring point cloud data obtained by detecting the target area by the perception sensor;
converting the point cloud data into a map mask, wherein the pixel value of each pixel point in the map mask represents whether point cloud projection exists or not;
Updating the initial probability map based on the map mask to obtain a target probability map; the target probability map characterizes probability values of each grid occupied by obstacles in the target area;
clustering the pixel points in the map mask to obtain candidate barriers;
and filtering the candidate obstacle based on the target probability map to obtain a dynamic obstacle.
2. The method of claim 1, wherein the converting the point cloud data into a map mask comprises:
acquiring a global static map corresponding to the target area;
performing grid division on a preset horizontal area centered on the robot to construct a preset grid map;
converting the point cloud data from a coordinate system of the robot to a map coordinate system where the global static map is located, and projecting the converted point cloud data to the preset grid map to obtain a two-dimensional point cloud grid map;
and setting the pixel value of a first grid with the point cloud projection in the two-dimensional point cloud grid map as a first value, and setting the pixel value of a second grid without the point cloud projection in the two-dimensional point cloud grid map as a second value, so as to obtain a map mask.
3. The method of claim 2, wherein updating the initial probability map based on the map mask to obtain a target probability map comprises:
determining a third grid corresponding to the first grid in the initial probability map, and increasing the probability value of the third grid by a preset value;
determining a fourth grid corresponding to the second grid in the initial probability map, and attenuating the probability value of the fourth grid by the preset value;
and obtaining a target probability map based on the result after the probability value adjustment.
4. The method of claim 1, wherein filtering the candidate obstacle based on the target probability map to obtain a dynamic obstacle comprises:
performing obstacle shape filtering processing on the candidate obstacle to obtain a first filtering result;
carrying out probability filtering processing on the first filtering result based on the target probability map to obtain a second filtering result;
carrying out shielding filtering treatment on the second filtering result to obtain a third filtering result;
and carrying out static obstacle filtering processing on the third filtering result based on the global static map corresponding to the target area to obtain a dynamic obstacle.
5. The method of claim 4, wherein the probability filtering the first filtering result based on the target probability map to obtain a second filtering result comprises:
for a current candidate obstacle in a first filtering result, determining a probability value of a grid corresponding to each pixel point in the current candidate obstacle in the target probability map;
determining a comprehensive probability value of the current candidate obstacle according to the determined probability value;
and screening based on the comprehensive probability value of the candidate obstacle to obtain a second filtering result.
6. The method of claim 4, wherein the performing static obstacle filtering on the third filtering result based on the global static map corresponding to the target area to obtain a dynamic obstacle includes:
acquiring a global static map corresponding to the target area;
calculating the occurrence proportion of the pixel points of each candidate obstacle in the third filtering result in the global static map;
and removing candidate barriers with the proportion larger than a preset proportion from the third filtering result to obtain dynamic barriers.
7. The method according to any one of claims 1 to 6, wherein after said filtering said candidate obstacle to obtain a dynamic obstacle, said method further comprises:
Acquiring the actual position of each dynamic barrier at the current moment, and acquiring the estimated position of each historical target in a historical target list at the current moment; the historical target list is a set of dynamic obstacles detected in a historical time period;
updating the historical target list based on the actual location and the estimated location;
and screening out the target dynamic barrier meeting the stable motion condition from the updated historical target list to track.
8. The method of claim 7, wherein the updating the historical target list based on the actual location and the estimated location comprises:
for a current dynamic obstacle in each dynamic obstacle, matching the current dynamic obstacle with each historical target according to the actual position of the current dynamic obstacle and the estimated position of each historical target;
under the condition that the matching is successful, taking the actual position of the current dynamic obstacle as the actual position of a history target on the matching at the current moment;
under the condition of failure of matching, the current dynamic obstacle is used as a new object to be added to the historical object list;
And obtaining an updated historical target list based on at least one of the actual position of the historical target on the matching of the current moment or the newly-added target.
9. The method of claim 7, wherein screening out the target dynamic obstacle satisfying the stable motion condition from the updated historical target list for tracking, comprises:
calculating the actual speed and the actual acceleration of each intermediate target at the current moment according to the actual position of each intermediate target at the current moment in the updated historical target list;
and screening the target dynamic barrier from the updated historical target list to track under the condition that at least one of the matching success times, the matching failure times, the actual speed and the actual acceleration of each intermediate target meets the preset condition.
10. A robot comprising a memory, a processor and a sensor for sensing environmental information, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 9.
CN202211624831.7A 2022-12-16 2022-12-16 Dynamic obstacle detection method and robot Pending CN116088503A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211624831.7A CN116088503A (en) 2022-12-16 2022-12-16 Dynamic obstacle detection method and robot

Publications (1)

Publication Number Publication Date
CN116088503A (en) 2023-05-09

Family

ID=86207324

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116661468A (en) * 2023-08-01 2023-08-29 深圳市普渡科技有限公司 Obstacle detection method, robot, and computer-readable storage medium
CN116661468B (en) * 2023-08-01 2024-04-12 深圳市普渡科技有限公司 Obstacle detection method, robot, and computer-readable storage medium
CN117152719A (en) * 2023-11-01 2023-12-01 锐驰激光(深圳)有限公司 Weeding obstacle detection method, weeding obstacle detection equipment, weeding obstacle detection storage medium and weeding obstacle detection device
CN117152719B (en) * 2023-11-01 2024-03-26 锐驰激光(深圳)有限公司 Weeding obstacle detection method, weeding obstacle detection equipment, weeding obstacle detection storage medium and weeding obstacle detection device
CN117830143B (en) * 2024-03-06 2024-05-03 江苏航智嘉信息科技有限公司 Method for accurately positioning robot tool based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination