WO2023088125A1 - A method for a robot to improve obstacle-marking accuracy - Google Patents

A method for a robot to improve obstacle-marking accuracy

Info

Publication number
WO2023088125A1
WO2023088125A1 (PCT/CN2022/130463)
Authority
WO
WIPO (PCT)
Prior art keywords
moment
grid
area
positioning
closed
Prior art date
Application number
PCT/CN2022/130463
Other languages
English (en)
French (fr)
Inventor
孙明
陈卓标
周和文
徐松舟
黄惠保
熊坤
Original Assignee
珠海一微半导体股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 珠海一微半导体股份有限公司 filed Critical 珠海一微半导体股份有限公司
Publication of WO2023088125A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes

Definitions

  • The invention relates to the technical field of intelligent robots, and in particular to a method for a robot to improve the accuracy of obstacle marking.
  • For a robot that relies on laser positioning, its environment inevitably contains obstacles invisible to the laser, such as transparent glass, mirrors, objects with low reflectivity, and low-lying objects. The robot needs to mark them on the map to better perform real-time motion, navigation planning, and so on.
  • To detect these obstacles, robots are typically equipped with physical collision sensors and some other optical sensors.
  • The optical sensors, however, are limited by their optical characteristics and mounting positions and cannot accurately detect the above obstacles.
  • Physical collision therefore has to serve as the last detection method for these types of obstacles, so that the robot can adjust its motion posture, navigation plan, and so on.
  • The physical collision sensor is limited by its mounting position and quantity, so it cannot detect obstacles precisely and can only mark the collision location roughly. To avoid repeated collisions, such obstacles that cannot be precisely detected have to be marked somewhat generously. But over-marking may turn a navigable area into a non-navigable one, forcing the robot to detour and reducing its operating efficiency.
  • To resolve these defects, the technical solution of the present invention discloses a method for a robot to improve the accuracy of obstacle marking.
  • The present application corrects the obstacle region detected by physical collisions by combining discrete positioning poses, pose interpolation, and positioning confidence, improving the accuracy of obstacle marking.
  • A method for a robot to improve the accuracy of obstacle marking includes the following steps. S1: perform two positionings at set times, then obtain the positioning poses on the grid map at the first moment and the second moment of the two positionings. S2: divide a first-moment coverage area and a second-moment coverage area from the positioning positions at the first and second moments, obtain the confidence of the two positionings, and process the two coverage areas according to the confidence. S3: interpolate between the positioning poses at the first and second moments, then construct a closed figure from the two positioning poses, the interpolation, and the processed first-moment and second-moment coverage areas. S4: obtain the grid cells occupied by the closed figure on the grid map, then modify the obstacle marking according to those grid cells and the area of the closed figure.
  • In step S2, dividing the first-moment and second-moment coverage areas from the positioning positions at the first and second moments includes the following steps: obtain the two positioning positions on the grid map, then draw a circle around each positioning position, with the position as center and a set value as radius, to obtain the first-moment coverage area and the second-moment coverage area.
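The circular coverage area above maps naturally onto grid cells. A minimal sketch, assuming a center-based inclusion test and a `cell_size` parameter that are not specified in the source:

```python
import math

def coverage_cells(cx, cy, radius, cell_size=0.05):
    """Return the set of (row, col) grid cells whose centers lie inside the
    circular coverage area drawn around a positioning position (cx, cy).
    cell_size and the center-inclusion test are illustrative choices."""
    cells = set()
    r_cells = int(math.ceil(radius / cell_size))
    col0 = int(math.floor(cx / cell_size))
    row0 = int(math.floor(cy / cell_size))
    for dr in range(-r_cells, r_cells + 1):
        for dc in range(-r_cells, r_cells + 1):
            # center of the candidate cell
            x = (col0 + dc + 0.5) * cell_size
            y = (row0 + dr + 0.5) * cell_size
            if math.hypot(x - cx, y - cy) <= radius:
                cells.add((row0 + dr, col0 + dc))
    return cells
```

The same function can serve both positionings, once per moment, with the set radius.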
  • In step S2, obtaining the confidence of the two positionings includes the following steps. A1: obtain the point clouds participating in laser positioning, randomly select one point cloud, and divide a grid region based on that point cloud's position on the probability grid map. A2: calculate the probability value of that point cloud on the probability grid map from the information of the grid region, and repeat steps A1 and A2 until the probability values of all participating point clouds have been obtained. A3: obtain the detection distances, acquired by the lidar, of all participating point clouds, then filter the point clouds to obtain the number of filtered point clouds. A4: obtain the probability-weighted average from the probability values and detection distances of all participating point clouds. A5: obtain the confidence of this positioning from the probability-weighted average, the number of all participating point clouds, the number of filtered point clouds, and the set number of point clouds participating in laser positioning.
  • In step A1, dividing the grid region based on the point cloud's position on the probability grid map includes the following steps: obtain the point cloud's position on the probability grid map, then find the grid intersection on the map nearest that position; then divide out a grid region of N*N cells centered on that intersection, where N is a positive integer.
  • Calculating the point cloud's probability value on the probability grid map from the information of the grid region uses the bicubic interpolation method and includes the following steps: obtain the distance between each cell of the grid region and the point cloud, then obtain the corresponding row and column coefficients of the region from those distances; obtain the weight of each cell from the row and column coefficients, then obtain the pixel value of the point cloud by a weighted summation, and finally obtain the probability value corresponding to that pixel value.
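The row/column weights and weighted summation above can be sketched with the common Keys cubic-convolution kernel over the 4*4 region described later; the kernel choice (a = -0.5) is an assumption, since the source does not fix one:

```python
def cubic_weight(t, a=-0.5):
    """Keys cubic-convolution kernel, a common choice for bicubic interpolation."""
    t = abs(t)
    if t <= 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def bicubic(grid4x4, fx, fy):
    """Interpolate a probability at fractional offset (fx, fy) in [0, 1),
    measured from the second row/column of a 4x4 neighborhood of grid values.
    Row and column weights come from the point's distance to each grid line,
    and the result is the weighted sum over the 16 cells."""
    total = 0.0
    for i in range(4):            # rows
        wy = cubic_weight(fy - (i - 1))
        for j in range(4):        # columns
            wx = cubic_weight(fx - (j - 1))
            total += grid4x4[i][j] * wx * wy
    return total
```

Because the kernel weights sum to one, a uniform 4x4 region of the median probability 0.5 interpolates back to 0.5 at any offset.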
  • Processing the first-moment and second-moment coverage areas by the confidence includes the following steps: from the confidence of each positioning, obtain an error distance negatively correlated with the confidence, then compare the error distances of the two positionings; take the larger of the two error distances, then uniformly shrink the first-moment and second-moment coverage areas inward by that maximum error distance.
  • Interpolating the positioning poses at the first and second moments includes the following step: insert an intermediate pose between the positioning poses at the first moment and the second moment.
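The description leaves the interpolation method open; a minimal sketch inserting one midpoint pose, assuming a pose is (x, y, theta) and that the heading is averaged along the shortest angular arc:

```python
import math

def interpolate_pose(p1, p2):
    """Insert one intermediate pose halfway between two positioning poses.
    A pose is (x, y, theta); the heading is interpolated along the shortest
    angular arc. One midpoint is illustrative; the count is not fixed."""
    x = (p1[0] + p2[0]) / 2.0
    y = (p1[1] + p2[1]) / 2.0
    # wrap the heading difference into (-pi, pi] before halving
    dtheta = math.atan2(math.sin(p2[2] - p1[2]), math.cos(p2[2] - p1[2]))
    theta = p1[2] + dtheta / 2.0
    return (x, y, theta)
```

More poses can be inserted the same way depending on the running speed and positioning frequency.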
  • In step S3, constructing the closed figure from the positioning poses at the first and second moments, the interpolation, and the processed first-moment and second-moment coverage areas includes the following steps: through the positioning position at the first moment, draw a straight line perpendicular to the robot's forward direction; the line has two intersections with the edge of the processed first-moment coverage area, giving the first distance segment with those intersections as endpoints; through the positioning position at the second moment, draw a straight line perpendicular to the robot's forward direction; the line has two intersections with the edge of the processed second-moment coverage area, giving the second distance segment; through the position of the intermediate pose on the grid map, draw a straight line perpendicular to the robot's forward direction, then obtain the third distance segment from the first or second distance segment; connecting the endpoints of the first, second, and third distance segments with the corresponding edges of the processed coverage areas, the figure of maximum area so obtained is the closed figure, where the positioning pose includes the robot's forward orientation at the current positioning position.
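A sketch of the construction above, with two simplifying assumptions not in the source: the coverage-area arcs between segment endpoints are approximated by straight edges, and the third segment's half-length is taken as the mean of the other two:

```python
import math

def chord_endpoints(pose, radius):
    """Endpoints of the segment through the positioning position, perpendicular
    to the robot's forward direction, clipped by that pose's shrunken circular
    coverage area of the given radius."""
    x, y, theta = pose
    nx, ny = -math.sin(theta), math.cos(theta)   # unit normal to the heading
    return (x - radius * nx, y - radius * ny), (x + radius * nx, y + radius * ny)

def shoelace_area(points):
    """Area of a simple polygon from its ordered vertices (shoelace formula)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def closed_figure(p1, pm, p2, r1, r2):
    """Approximate the closed figure bounded by the three perpendicular
    segments and the coverage-area edges as the polygon through their
    endpoints (arcs replaced by straight edges for simplicity)."""
    a1, a2 = chord_endpoints(p1, r1)
    m1, m2 = chord_endpoints(pm, (r1 + r2) / 2.0)  # third segment from the others
    b1, b2 = chord_endpoints(p2, r2)
    return [a1, m1, b1, b2, m2, a2]
```

With collinear poses and equal radii the polygon degenerates to a rectangle, which makes the construction easy to check by hand.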
  • In step S4, modifying the obstacle marking according to the grid cells occupied by the closed figure on the grid map and the area of the closed figure includes the following steps: first obtain the grid cells occupied by the closed figure on the grid map and the area of the closed figure; then obtain the intersection area between each occupied cell and the closed figure; if the intersection area is greater than a set threshold and the cell carries an obstacle mark, delete the obstacle mark.
  • Obtaining the intersection area between a grid cell occupied by the closed figure and the closed figure includes the following steps: first obtain the area of each cell, then obtain the position of the closed figure's edges within each cell so as to identify the figure that lies inside the closed figure and is bounded by the closed figure's edges and the cell's edges; divide that figure into several quadrilaterals and obtain the area of each; add up the quadrilateral areas to obtain the area of that figure, which is the intersection area between the cell and the closed figure.
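The cell/figure intersection can be computed by clipping the closed figure against the four edges of a square cell and taking the area of what remains. A sketch using Sutherland-Hodgman clipping (the source instead splits the region into quadrilaterals; clipping is a standard equivalent for polygonal figures):

```python
def clip_halfplane(poly, px, py, nx, ny):
    """One Sutherland-Hodgman step: keep the part of poly satisfying
    (p - (px, py)) . (nx, ny) <= 0."""
    def inside(p):
        return (p[0] - px) * nx + (p[1] - py) * ny <= 0.0
    def intersect(a, b):
        da = (a[0] - px) * nx + (a[1] - py) * ny
        db = (b[0] - px) * nx + (b[1] - py) * ny
        t = da / (da - db)
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        if inside(b):
            if not inside(a):
                out.append(intersect(a, b))
            out.append(b)
        elif inside(a):
            out.append(intersect(a, b))
    return out

def shoelace_area(points):
    """Area of a simple polygon from its ordered vertices (shoelace formula)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def cell_intersection_area(poly, x0, y0, size):
    """Intersection area between a polygonal closed figure and the square
    grid cell [x0, x0+size] x [y0, y0+size]."""
    clipped = poly
    for px, py, nx, ny in [(x0, y0, -1, 0), (x0 + size, y0, 1, 0),
                           (x0, y0, 0, -1), (x0, y0 + size, 0, 1)]:
        clipped = clip_halfplane(clipped, px, py, nx, ny)
        if not clipped:
            return 0.0
    return shoelace_area(clipped)
```

Comparing this area against the set threshold then decides whether an obstacle mark in the cell is deleted.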
  • The technical solution of the present application combines discrete positioning poses, pose interpolation, and positioning confidence to correct the obstacle region detected by physical collisions and improve the accuracy of obstacle marking.
  • Fig. 1 is a flow chart of a method for a robot to improve obstacle labeling accuracy according to an embodiment of the present invention.
  • Fig. 2 is a flow chart of a method for evaluating laser positioning reliability according to an embodiment of the present invention.
  • Fig. 3 is a schematic diagram of a probability grid map according to an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of a positioning pose according to an embodiment of the present invention.
  • Step S1: the robot performs two positionings at set times using point clouds, then obtains the positioning poses on the grid map at the first moment and the second moment of the two positionings.
  • Positioning from point clouds is one embodiment; other positioning methods may also be used and no specific limitation is made. Obtaining the robot's position and pose on the grid map when positioning is conventional technology and is not repeated here.
  • S2: divide the first-moment and second-moment coverage areas from the positioning positions at the first and second moments, obtain the confidence of the two positionings, and process the two coverage areas by the confidence;
  • S3: interpolate between the positioning poses at the first and second moments, then construct a closed figure from the two positioning poses, the interpolation, and the processed first-moment and second-moment coverage areas;
  • S4: obtain the grid cells occupied by the closed figure on the grid map, then modify the obstacle marking according to those grid cells and the area of the closed figure.
  • In step S2, the first-moment and second-moment coverage areas are divided from the positioning positions at the first and second moments as follows: obtain the two positioning positions on the grid map, then draw a circle around each positioning position, with the position as center and a set value as radius, to obtain the first-moment coverage area and the second-moment coverage area.
  • The robot obtains the confidence of the two positionings by evaluating the confidence after each positioning.
  • The evaluation method, shown in Fig. 2, is a method for evaluating laser-positioning confidence.
  • A lidar robot positions itself with laser frames while running; laser frames acquired in different numbers, at different times and positions, should yield different confidences after positioning. For example, when moving within a small range, the acquired laser frames change little and the confidence should gradually increase over time; when exploring a new scene, the robot continuously enters new space, the laser frames change greatly, and the positioning confidence should decrease. The confidence of laser positioning therefore needs to be evaluated.
  • The method includes the following steps. Step A1: the lidar robot obtains the point clouds participating in laser positioning, randomly selects one point cloud, and divides a grid region based on that point cloud's position on the probability grid map.
  • The robot generally positions itself with one frame of laser data; after positioning with the laser data, it records the number of point clouds used. It then randomly selects one point cloud and divides a grid region based on that point cloud's position on the probability grid map, as follows: obtain the point cloud's position on the probability grid map, then find the nearest grid intersection on the map; then divide out a region of N*N cells centered on that intersection, where N is a positive integer.
  • Cells of the grid region that fall outside the probability grid map are filled with the median probability value.
  • The median probability means the value it represents corresponds to a 50% probability of an obstacle and a 50% probability of open space.
  • For example, if a cell uses 0 to 1 to linearly represent the probability that the corresponding area is an obstacle, the median probability is 0.5.
  • Step A2: the lidar robot calculates the point cloud's probability value on the probability grid map from the information of the grid region, and repeats steps A1 and A2 until the probability values of all point clouds participating in laser positioning have been obtained.
  • The calculation mainly uses the bicubic interpolation method.
  • Bicubic interpolation is a more elaborate interpolation method that produces smoother image edges than bilinear interpolation; it is a method for "interpolating", i.e. increasing the number or density of "pixels" in an image.
  • Image data is usually augmented by interpolation techniques so that, when printed or otherwise output, the print area and/or resolution can be increased.
  • The method includes the following steps: obtain the distance between each cell of the grid region and the point cloud, then obtain the corresponding row and column coefficients of the region from those distances; obtain the weight of each cell from the row and column coefficients, then obtain the pixel value of the point cloud by a weighted summation, and finally obtain the probability value corresponding to that pixel value. The probability values of the point clouds participating in laser positioning are computed one by one, in random order, until all of them have been obtained, and then the next step is entered.
  • Step A3: the lidar robot obtains the detection distances, acquired by the lidar, of all point clouds participating in laser positioning, then filters the point clouds to obtain the number of filtered point clouds.
  • The robot first obtains the detection distance recorded when the lidar detected each point cloud, then filters the point clouds: the number of point clouds whose probability value exceeds the median probability is obtained for use in subsequent calculations.
  • The set distance weight T is set according to the actual situation and is not limited.
  • Step A5: from the probability-weighted average, the number of all point clouds participating in laser positioning, the number of filtered point clouds, and the set number of participating point clouds, obtain the confidence of this positioning.
  • The confidence is computed as C = A×R^(N/M-1)×K^(F/N-1), where A is the probability-weighted average, R is the laser-point weight, K is the hit-point weight, M is the set number of point clouds participating in laser positioning, and F is the number of filtered point clouds, with N less than or equal to M. Every time the robot positions itself from laser data, it evaluates the positioning confidence from the point cloud data that took part. The accuracy of each positioning is judged by its confidence, and the positioning selected accordingly serves as the reference for cleaning, mapping, and other work.
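The confidence formula C = A×R^(N/M-1)×K^(F/N-1) from the description can be computed directly. The default values of R and K below are assumptions (the patent leaves them as tunable parameters); choosing them greater than 1 makes a shortfall in participating points or hits reduce the confidence:

```python
def positioning_confidence(A, N, M, F, R=1.5, K=1.5):
    """Confidence of one laser positioning:
        C = A * R**(N/M - 1) * K**(F/N - 1)
    A: probability-weighted average of the participating point clouds (step A4)
    N: number of point clouds that took part in this positioning (N <= M)
    M: set (ideal) number of participating point clouds
    F: number of point clouds whose probability exceeds the median (F <= N)
    R, K: laser-point and hit-point tuning weights; the values here are
    illustrative, the patent only names them as tunable."""
    return A * R ** (N / M - 1.0) * K ** (F / N - 1.0)
```

When N equals M and every point is a hit (F = N), both exponents vanish and C reduces to the weighted average A.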
  • The robot randomly selects one point cloud from those participating in laser positioning, obtains its point Pi on the probability grid map, then finds the grid intersection Ci on the probability grid map nearest Pi.
  • Centered on the grid intersection Ci, the surrounding 4*4 cells are selected to form the grid region. If any cell of the region falls outside the probability grid map, the cells outside the map are filled with the median probability or the initial value of the probability grid map. Bicubic interpolation is then performed over the 4*4 cells to obtain the probability value Si corresponding to Pi on the probability grid map; this is conventional technology and is not repeated here.
  • As for the distance weight T, the laser-point weight R, and the hit-point weight K: the laser-point count is the number of points participating in laser positioning, and the hit-point count is the number of point clouds obtained from one laser frame, which is a set value; the three parameters T, R, and K allow flexible tuning.
  • The technical solution of the present application evaluates and computes the confidence by combining the point-cloud distribution, grid hits, point-cloud count, and so on, and provides multiple parameters for targeted, flexible tuning, improving the accuracy of the confidence evaluation; evaluating the positioning confidence also makes the related processing more reasonable and accurate.
  • The robot processes the first-moment and second-moment coverage areas according to the confidence as follows: from the confidence of each positioning, obtain an error distance negatively correlated with the confidence, then compare the error distances of the two positionings; take the larger of the two error distances, then uniformly shrink the first-moment and second-moment coverage areas inward by that maximum error distance.
  • In step S3, the robot interpolates the positioning poses at the first and second moments as follows: insert an intermediate pose between the positioning poses at the first moment and the second moment.
  • In step S3, the closed figure is constructed from the positioning poses at the first and second moments, the interpolation, and the processed first-moment and second-moment coverage areas as follows: through the positioning position at the first moment, draw a straight line perpendicular to the robot's forward direction; the line has two intersections with the edge of the processed first-moment coverage area, giving the first distance segment with those intersections as endpoints; through the positioning position at the second moment, draw a straight line perpendicular to the robot's forward direction; the line has two intersections with the edge of the processed second-moment coverage area, giving the second distance segment; through the position of the intermediate pose on the grid map, draw a straight line perpendicular to the robot's forward direction, then obtain the third distance segment from the first or second distance segment; connecting the endpoints of the three segments with the corresponding edges of the processed coverage areas, the figure of maximum area so obtained is the closed figure.
  • In step S4, the obstacle marking is modified according to the grid cells occupied by the closed figure on the grid map and the area of the closed figure as follows: the robot first obtains the grid cells occupied by the closed figure on the grid map and the area of the closed figure; then obtains the intersection area between each occupied cell and the closed figure; if the intersection area is greater than a set threshold and the cell carries an obstacle mark, the obstacle mark is deleted.
  • Obtaining the intersection area between a grid cell occupied by the closed figure and the closed figure includes the following steps: first obtain the area of each cell, then obtain the position of the closed figure's edges within each cell so as to identify the figure that lies inside the closed figure and is bounded by the closed figure's edges and the cell's edges; divide that figure into several quadrilaterals and obtain the area of each; add up the quadrilateral areas to obtain the area of that figure, i.e. the intersection area between the cell and the closed figure.
  • The technical solution of the present application combines discrete positioning poses, pose interpolation, and positioning confidence to correct the obstacle region detected by physical collisions and improve the accuracy of obstacle marking.
  • P2 is the pose of the positioning at the second moment.
  • The arrow points in the robot's forward direction at that time.
  • The outer dotted line is the overall machine coverage area drawn from the machine's shape and size.
  • The direction of the dotted arrow is the robot's forward direction corresponding to the intermediate pose P'; the forward direction at P' is set according to the arrow directions of the poses positioned at the first moment and the second moment.
  • The robot's walking route can also be estimated from the arrow directions of the poses at the first and second moments, and the arrow direction of the intermediate pose P' obtained from that route.
  • When the coverage area is a circle, subtracting the error distance Em from its radius yields a coverage area uniformly shrunk inward by Em; if the coverage area is a rectangle, Em can be subtracted from the rectangle's length, and a proportional value, scaled by the width-to-length ratio, subtracted from its width.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method for a robot to improve obstacle-marking accuracy, comprising: S1: performing two positionings at set times, then obtaining the positioning poses on the grid map at the first moment and the second moment of the two positionings; S2: dividing a first-moment coverage area and a second-moment coverage area from the positioning positions at the first and second moments, obtaining the confidence of the two positionings, and processing the two coverage areas according to the confidence; S3: interpolating between the positioning poses at the first and second moments, then constructing a closed figure from the two positioning poses, the interpolation, and the processed first-moment and second-moment coverage areas; S4: modifying the obstacle marking according to the grid cells occupied by the closed figure on the grid map and the area of the closed figure.

Description

A method for a robot to improve obstacle-marking accuracy. Technical Field
The present invention relates to the technical field of intelligent robots, and in particular to a method for a robot to improve the accuracy of obstacle marking.
Background Art
For a robot that relies on laser positioning, its environment inevitably contains obstacles that the laser cannot see, such as transparent glass, mirrors, objects with low reflectivity, and low-lying objects. The robot needs to mark them on the map so it can better perform real-time motion, navigation planning, and so on. To detect such obstacles, robots are generally equipped with physical collision sensors and some other optical sensors. Clearly, the optical sensors are limited by their optical characteristics and mounting positions and cannot accurately detect these obstacles, so physical collision has to serve as the last detection method for these types of obstacles, so that the robot can adjust its motion posture, navigation plan, etc. The physical collision sensor, however, is limited by its mounting position and quantity: it cannot detect obstacles precisely and can only mark the collision location roughly. To avoid repeated collisions, such hard-to-detect obstacles have to be marked somewhat generously, but over-marking may turn a navigable area into a non-navigable one, forcing the robot to detour and reducing its operating efficiency.
Technical Solution
To resolve the above technical defects, the technical solution of the present invention discloses a method for a robot to improve the accuracy of obstacle marking. The present application corrects the obstacle region detected by physical collisions by combining discrete positioning poses, pose interpolation, and positioning confidence, thereby improving obstacle-marking accuracy. The specific technical solution is as follows. A method for a robot to improve obstacle-marking accuracy comprises the following steps. S1: perform two positionings at set times, then obtain the positioning poses on the grid map at the first moment and the second moment of the two positionings. S2: divide a first-moment coverage area and a second-moment coverage area from the positioning positions at the first and second moments, obtain the confidence of the two positionings, and process the two coverage areas according to the confidence. S3: interpolate between the positioning poses at the first and second moments, then construct a closed figure from the two positioning poses, the interpolation, and the processed first-moment and second-moment coverage areas. S4: obtain the grid cells occupied by the closed figure on the grid map, then modify the obstacle marking according to those grid cells and the area of the closed figure.
Further, in step S2, dividing the first-moment and second-moment coverage areas from the positioning positions at the first and second moments includes: obtaining the two positioning positions on the grid map, then drawing a circle around each positioning position, with the position as center and a set value as radius, to obtain the first-moment coverage area and the second-moment coverage area.
Further, in step S2, obtaining the confidence of the two positionings includes the following steps. A1: obtain the point clouds participating in laser positioning, randomly select one point cloud, and divide a grid region based on that point cloud's position on the probability grid map. A2: calculate the probability value of that point cloud on the probability grid map from the information of the grid region, and repeat steps A1 and A2 until the probability values of all point clouds participating in laser positioning have been obtained. A3: obtain the detection distances, acquired by the lidar, of all point clouds participating in laser positioning, then filter the point clouds to obtain the number of filtered point clouds. A4: obtain the probability-weighted average from the probability values and detection distances of all participating point clouds. A5: obtain the confidence of this positioning from the probability-weighted average, the number of all participating point clouds, the number of filtered point clouds, and the set number of point clouds participating in laser positioning.
Further, in step A1, dividing the grid region based on the position of the point cloud on the probability grid map includes: obtaining the position of the point cloud on the probability grid map, then finding the grid intersection on the probability grid map nearest that position; then dividing out a grid region of N*N cells on the probability grid map centered on that grid intersection, where N is a positive integer.
Further, calculating the probability value of the point cloud on the probability grid map from the information of the grid region uses the bicubic interpolation method and includes: obtaining the distance between each cell in the grid region and the point cloud, then obtaining the corresponding row and column coefficients of the grid region from those distances; obtaining the weight corresponding to each cell from the row and column coefficients, then obtaining the pixel value of the point cloud by a weighted summation, and finally obtaining the probability value corresponding to that pixel value.
Further, processing the first-moment and second-moment coverage areas by the confidence includes: obtaining, from the confidence of each positioning, an error distance negatively correlated with the confidence, then comparing the error distances of the two positionings; taking the larger of the two error distances, then uniformly shrinking the first-moment and second-moment coverage areas inward by that maximum error distance.
Further, in step S3, interpolating the positioning poses at the first and second moments includes: inserting an intermediate pose between the positioning poses at the first moment and the second moment.
Further, in step S3, constructing a closed figure from the positioning poses at the first and second moments, the interpolation, and the processed coverage areas includes: through the positioning position at the first moment, drawing a straight line perpendicular to the robot's forward direction; this line has two intersections with the edge of the processed first-moment coverage area, yielding a first distance segment with the two intersections as endpoints; through the positioning position at the second moment, drawing a straight line perpendicular to the robot's forward direction; this line has two intersections with the edge of the processed second-moment coverage area, yielding a second distance segment; through the position of the intermediate pose on the grid map, drawing a straight line perpendicular to the robot's forward direction, then obtaining a third distance segment from the first or second distance segment; connecting the endpoints of the first, second, and third distance segments with the corresponding edges of the processed first-moment and second-moment coverage areas, the figure of maximum area so obtained is the closed figure. The positioning pose includes the robot's forward orientation at the current positioning position.
Further, in step S4, modifying the obstacle marking according to the grid cells occupied by the closed figure on the grid map and the area of the closed figure includes: first obtaining the grid cells occupied by the closed figure on the grid map and the area of the closed figure; then obtaining the intersection area between each occupied grid cell and the closed figure; if the intersection area is greater than a set threshold and the cell carries an obstacle mark, deleting the obstacle mark.
Further, obtaining the intersection area between a grid cell occupied by the closed figure and the closed figure includes: first obtaining the area of each cell, then obtaining the position of the closed figure's edges within each cell so as to identify the figure that lies inside the closed figure and is bounded by the closed figure's edges and the cell's edges; dividing that figure into several quadrilaterals and obtaining the area of each; adding up the quadrilateral areas to obtain the area of that figure, which is the intersection area between the cell and the closed figure.
Compared with the prior art, the technical solution of the present application combines discrete positioning poses, pose interpolation, and positioning confidence to correct the obstacle region detected by physical collisions, improving the accuracy of obstacle marking.
Description of the Drawings
Fig. 1 is a flow chart of a method for a robot to improve obstacle-marking accuracy according to an embodiment of the present invention.
Fig. 2 is a flow chart of a method for evaluating laser-positioning confidence according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a probability grid map according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of positioning poses according to an embodiment of the present invention.
Embodiments of the Invention
Specific embodiments of the present invention are further described below with reference to the accompanying drawings. It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by those of ordinary skill in the art to which the present application belongs.
As shown in Fig. 1, a method for a robot to improve obstacle-marking accuracy includes the following steps. Step S1: the robot performs two positionings at set times using point clouds, then obtains the positioning poses on the grid map at the first moment and the second moment of the two positionings. Positioning from point clouds is one embodiment; other positioning methods may also be used and no specific limitation is made. Obtaining the robot's position and pose on the grid map when positioning is conventional technology and is not repeated here. S2: divide the first-moment and second-moment coverage areas from the positioning positions at the first and second moments, obtain the confidence of the two positionings, and process the two coverage areas by the confidence. S3: interpolate between the positioning poses at the first and second moments, then construct a closed figure from the two positioning poses, the interpolation, and the processed coverage areas. S4: obtain the grid cells occupied by the closed figure on the grid map, then modify the obstacle marking according to those cells and the area of the closed figure.
As one embodiment, in step S2, dividing the first-moment and second-moment coverage areas from the positioning positions at the first and second moments includes: the robot obtains the two positioning positions on the grid map, then draws a circle around each, with the positioning position as center and a set value as radius, to obtain the first-moment coverage area and the second-moment coverage area.
As one embodiment, in step S2, the confidence of the two positionings is obtained by evaluating the confidence after each positioning. The evaluation method, shown in Fig. 2, is a method for evaluating laser-positioning confidence. A lidar robot positions itself with laser frames while running; laser frames acquired in different numbers, at different times and positions, should yield different confidences after positioning. For example, when moving within a small range, the acquired laser frames change little and the confidence should gradually increase over time; when exploring a new scene, the robot continuously enters new space, the laser frames change greatly, and the positioning confidence should decrease. The confidence of laser positioning therefore needs to be evaluated. The method includes the following steps. Step A1: the lidar robot obtains the point clouds participating in laser positioning, randomly selects one point cloud, and divides a grid region based on that point cloud's position on the probability grid map. The robot generally positions itself with one frame of laser data; after positioning with the laser data, it records the number of point clouds used. It then randomly selects one point cloud and divides a grid region based on that point cloud's position on the probability grid map, as follows: obtain the point cloud's position on the probability grid map, then find the nearest grid intersection on the map; then divide out a region of N*N cells centered on that intersection, where N is a positive integer. After the region is divided, cells of the region that fall outside the probability grid map are filled with the median probability value. The median probability means the value it represents corresponds to a 50% probability of an obstacle and a 50% probability of open space; for example, if a cell uses 0 to 1 to linearly represent the probability that the corresponding area is an obstacle, the median probability is 0.5.
Step A2: the lidar robot calculates the point cloud's probability value on the probability grid map from the information of the grid region, and repeats steps A1 and A2 until the probability values of all point clouds participating in laser positioning have been obtained. The calculation mainly uses bicubic interpolation. Bicubic interpolation is a more elaborate interpolation method that produces smoother image edges than bilinear interpolation; it is a method for "interpolating", i.e. increasing the number or density of "pixels" in an image. Interpolation techniques are commonly used to augment image data so that, when printed or otherwise output, the print area and/or resolution can be increased. The calculation includes the following steps: obtain the distance between each cell of the grid region and the point cloud, then obtain the corresponding row and column coefficients of the region from those distances; obtain the weight of each cell from the row and column coefficients, then obtain the pixel value of the point cloud by a weighted summation, and finally obtain the probability value corresponding to that pixel value. The probability values of the point clouds participating in laser positioning are computed one by one, in random order, until all of them have been obtained, and then the next step is entered.
Step A3: the lidar robot obtains the detection distances, acquired by the lidar, of all point clouds participating in laser positioning, then filters the point clouds to obtain the number of filtered point clouds. In step A3, the robot first obtains the detection distance recorded when the lidar detected each point cloud, then filters the point clouds: the number of point clouds whose probability value exceeds the median probability is obtained for use in subsequent calculations.
Step A4: the robot obtains the probability-weighted average from the probability values and detection distances of all point clouds participating in laser positioning, mainly by the following formula: A = (S1×T^L1 + S2×T^L2 + ... + SN×T^LN)/N, where S is a point cloud's probability value, T is the set distance weight, L is the detection distance of the point cloud acquired by the lidar, and N is the number of point clouds participating in laser positioning. The set distance weight T is set according to the actual situation and is not limited.
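The step A4 formula can be sketched directly. The default T = 0.95 below is an assumption; the description only says T is set according to the actual situation:

```python
def probability_weighted_average(probs, dists, T=0.95):
    """Step A4: A = (S1*T**L1 + ... + SN*T**LN) / N, where Si is the
    probability of point cloud i on the probability grid map, Li its lidar
    detection distance, and T the set distance weight (T = 0.95 here is
    an illustrative value)."""
    assert len(probs) == len(dists) and probs
    return sum(s * T ** l for s, l in zip(probs, dists)) / len(probs)
```

With 0 < T < 1, more distant points contribute less to A, so a positioning dominated by far returns receives a lower weighted average and, downstream, a lower confidence.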
Step A5: obtain the confidence of this positioning from the probability-weighted average, the number of all point clouds participating in laser positioning, the number of filtered point clouds, and the set number of participating point clouds, by the following formula: C = A×R^(N/M-1)×K^(F/N-1), where R is the laser-point weight, K is the hit-point weight, M is the set number of point clouds participating in laser positioning, and F is the number of filtered point clouds, with N less than or equal to M. Every time the robot positions itself from laser data, it evaluates the positioning confidence from the point cloud data that took part. The accuracy of each positioning is judged by its confidence, and the positioning selected accordingly serves as the reference for cleaning, mapping, and other work.
As shown in Figure 3, the robot randomly selects a point from the point cloud participating in laser localization, obtains its position Pi on the probability grid map, finds the grid intersection Ci nearest to Pi, and, centred on Ci, selects the surrounding 4*4 cells to form the grid region. If some cells of the region lie outside the probability grid map, those cells are filled with the probability median or with the map's initial value. Bicubic interpolation over the 4x4 cells then yields the probability value Si of Pi on the probability grid map; this is conventional technology and is not repeated here. The detection distance L between Pi and the lidar observation point (generally the lidar centre), i.e. the distance measured by the lidar when the point-cloud data was acquired, is then recorded; the number F of points whose probability value Si exceeds the probability median is recorded; and the number M of points participating in laser localization is set, M being the ideal number of laser points for localization, chosen according to laser-head performance, processor performance, and algorithm requirements, with M ≥ N. The probability-weighted average of all participating points is computed as A = (S1×T^L1 + S2×T^L2 + ... + SN×T^LN)/N: each point's probability value S is multiplied by the distance weight T raised to the power of the detection distance L, and the mean over all points gives the probability-weighted average A. The confidence C is then computed from C = A×R^(N/M−1)×K^(F/N−1): A multiplied by the laser-point-count weight R raised to the power (N/M−1), multiplied by the hit-point-count weight K raised to the power (F/N−1). Here the laser point count is the number of points participating in localization, the hit point count is the number of points acquired in one laser frame, and the distance weight T, laser-point-count weight R, and hit-point-count weight K are set values, so the method can be tuned flexibly through the three parameters T, R, and K.
Compared with the prior art, the technical solution of this application evaluates confidence by combining point-cloud distribution, grid hits, point count, and other factors, and provides several parameters for targeted, flexible tuning, improving the accuracy of the confidence estimate; evaluating localization confidence also makes the related processing more reasonable and accurate.
In one embodiment, the robot processes the first-moment and second-moment coverage areas according to the confidences by the following steps: from the confidences of the two localizations, obtain error distances negatively correlated with confidence, then compare the two error distances; take the larger of the two, and shrink the first-moment coverage area and the second-moment coverage area inward uniformly by that maximum error distance.
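The mapping E = f(C) is left open by the application; the linear, decreasing choice below is one possible assumption, as are the bounds e_max and e_min. For a circular coverage area, shrinking inward uniformly by Em reduces to subtracting Em from the radius:

```python
def error_distance(confidence, e_max=0.05, e_min=0.005):
    """One possible E = f(C): linear and decreasing in the confidence C.
    The bounds (in map units, e.g. metres) are assumed placeholders."""
    c = min(max(confidence, 0.0), 1.0)
    return e_max - (e_max - e_min) * c

def shrunken_radius(radius, conf_1, conf_2):
    """Shrink a circular coverage area inward by the larger of the two
    localizations' error distances, as the description prescribes."""
    e_m = max(error_distance(conf_1), error_distance(conf_2))
    return max(radius - e_m, 0.0)
```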
In one embodiment, in step S3, interpolating the localization poses of the first moment and the second moment comprises the following step: inserting an intermediate pose between the two localization poses. In step S3, constructing the closed figure from the two localization poses, the interpolation, and the processed first-moment and second-moment coverage areas comprises the following steps: through the localization position of the first moment, draw a line perpendicular to the robot's forward direction; this line intersects the edge of the processed first-moment coverage area at two points, giving a first distance segment with those two points as endpoints. Through the localization position of the second moment, draw a line perpendicular to the robot's forward direction, intersecting the edge of the processed second-moment coverage area at two points, giving a second distance segment with those two points as endpoints. Through the intermediate pose's position on the grid map, draw a line perpendicular to the robot's forward direction, and obtain a third distance segment from the first distance segment or the second distance segment. Connect the endpoints of the first, second, and third distance segments with the corresponding edges of the processed first-moment and second-moment coverage areas; the figure of greatest area so obtained is the closed figure. Here the localization pose includes the robot's forward heading at the current localization position.
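For a circular coverage area, the perpendicular line through the pose meets the shrunken circle in a diameter, so the segment endpoints can be computed in closed form. A sketch under assumed conventions (2-D map-frame coordinates, headings in radians), illustrating the first/second segments and the third segment that copies the first segment's half-lengths (P'S' = P1S1, P'T' = P1T1):

```python
import math

def chord_endpoints(pos, heading, radius):
    """Endpoints of the line through `pos` perpendicular to the robot's
    forward heading, clipped to a circular coverage area centred on
    `pos`: a diameter of the (already shrunken) circle."""
    px, py = pos
    nx, ny = -math.sin(heading), math.cos(heading)  # unit perpendicular
    return (px + radius * nx, py + radius * ny), \
           (px - radius * nx, py - radius * ny)

def middle_chord(p_mid, heading_mid, s_half, t_half):
    """Third distance segment through the interpolated pose P', reusing
    the half-lengths of the first segment as the description allows."""
    px, py = p_mid
    nx, ny = -math.sin(heading_mid), math.cos(heading_mid)
    return (px + s_half * nx, py + s_half * ny), \
           (px - t_half * nx, py - t_half * ny)
```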
In one embodiment, in step S4, revising the obstacle markings according to the cells occupied by the closed figure on the grid map and the area of the closed figure comprises the following steps: the robot first obtains the cells occupied by the closed figure on the grid map and the figure's area; it then obtains the intersection area between each occupied cell and the closed figure; if the intersection area exceeds a set threshold and the cell carries an obstacle marking, the obstacle marking is deleted. Obtaining the intersection area between an occupied cell and the closed figure comprises the following steps: first obtain the area of each cell, then obtain the position of the closed figure's edges within each cell, so as to identify the sub-figure lying inside the closed figure and bounded by the figure's edges and the cell's edges; split that sub-figure into several quadrilaterals and obtain the area of each; summing the quadrilateral areas gives the area of the sub-figure, which is the intersection area of the cell with the closed figure.
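The application obtains the cell/figure intersection area by splitting the enclosed sub-figure into quadrilaterals and summing their areas. An equivalent way to compute the same quantity, substituted here purely for illustration, is to clip the closed figure (taken as a polygon) against the four cell edges with Sutherland-Hodgman clipping and take the clipped polygon's area with the shoelace formula:

```python
def clip_polygon(poly, axis, value, keep_less):
    """Sutherland-Hodgman clip of `poly` (list of (x, y) vertices)
    against one axis-aligned half-plane (axis 0 = x, 1 = y)."""
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        a_in = (a[axis] <= value) == keep_less
        b_in = (b[axis] <= value) == keep_less
        if a_in:
            out.append(a)
        if a_in != b_in:  # edge crosses the boundary: add the crossing
            t = (value - a[axis]) / (b[axis] - a[axis])
            out.append((a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1])))
    return out

def cell_intersection_area(poly, cx, cy, cell=1.0):
    """Area of `poly` inside the grid cell with lower-left corner
    (cx, cy).  Clipping against the four cell edges leaves exactly the
    sub-figure bounded by polygon edges and cell edges; the shoelace
    formula then gives its area."""
    for axis, value, keep_less in ((0, cx, False), (0, cx + cell, True),
                                   (1, cy, False), (1, cy + cell, True)):
        poly = clip_polygon(poly, axis, value, keep_less)
        if not poly:
            return 0.0
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))
```

Comparing the returned area against the set threshold, and clearing the cell's obstacle marking when it is exceeded, then reproduces the S4 decision rule.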
Compared with the prior art, the technical solution of this application corrects obstacle regions detected by physical collision by combining discrete localization poses, pose interpolation, and localization confidence, improving obstacle-marking accuracy.
As shown in Figure 4, P2 is the localization pose at the second moment, the arrow points along the robot's forward direction at that time, and the outer dashed line is the overall machine coverage area drawn from the machine's shape and size: the coverage area is delimited as a dashed circle centred on the localization position with a set value as radius. A circular coverage area is only an example, not a limitation; its shape and size can be set according to actual needs. P1 is the first-moment localization pose, with an identical circular coverage area. The poses of the first and second moments are then interpolated to give the intermediate pose P', whose dashed arrow marks the robot's forward direction at P'. This direction is set from the arrow directions of the first-moment and second-moment poses: it may be the mean of the two, or the robot's path may be estimated from the two directions and the arrow direction of P' derived from that path. Interpolation methods are numerous and conventional and are not repeated here; the number of interpolated poses depends on running speed, localization frequency, and so on, and is not limited, so inserting a single pose here is not mandatory but merely illustrates the principle. According to the actual parameters and needs, an error distance E negatively correlated with the localization confidence C is designed, with the mapping E = f(C); the error distances corresponding to the two localization confidences are computed and the larger of the two, Em, is taken. The coverage areas bounded by the outer dashed lines are then shrunk inward uniformly by Em, giving the two processed coverage areas drawn with solid outlines in the figure. When the coverage area is a circle, subtracting Em from its radius yields the area shrunk inward uniformly by Em; if the coverage area were a rectangle, Em could be subtracted from the rectangle's length and a proportional value, scaled by the width-to-length ratio, subtracted from its width.
Further, after the two solid-outline processed coverage areas are obtained, a line is drawn through P1 perpendicular to the robot's forward direction, intersecting the solid outline of the first-moment coverage area at points S1 and T1; likewise, a line drawn through P2 perpendicular to the robot's forward direction intersects the solid outline of the second-moment coverage area at points S2 and T2. Then, as shown in Figure 4, a line is drawn through P' perpendicular to the robot's forward direction, and endpoints S' and T' are cut off on it according to the length relations P'S' = P1S1 and P'T' = P1T1. The closed figure is then constructed by connecting S1S'S2 and T1T'T2 with the arcs S1R1T1 and S2R2T2; the grey region in the figure is the set of occupied grid cells corresponding to the figure's interior. The intersection area of each grey cell with the interior of the closed figure is computed, and when its value exceeds the preset threshold F and the cell was previously marked as an obstacle, the marking is cleared. In the computation, the area of each cell is obtained first, then the position of the closed figure's edges within each cell; the sub-figure lying inside the closed figure and bounded by the figure's edges and the cell's edges is identified and split into several quadrilaterals; the area of each quadrilateral is obtained and the areas are summed, giving the area of the sub-figure, which is the intersection area of the cell with the interior of the closed figure.
The above is only a preferred embodiment of the present invention and does not limit the invention to other forms; any person skilled in the art may use the technical content disclosed above to make changes or adaptations into equivalent embodiments of equal variation. However, any simple modification, equivalent change, or adaptation made to the above embodiments according to the technical essence of the present invention, without departing from the content of its technical solution, still falls within the protection scope of the technical solution of the present invention.

Claims (10)

  1. A method for a robot to improve obstacle-marking accuracy, characterized in that the method comprises the following steps:
    S1: performing two localizations at set moments, then obtaining the localization poses on the grid map at the first moment and the second moment;
    S2: delimiting a first-moment coverage area and a second-moment coverage area from the localization positions of the first moment and the second moment respectively, obtaining the confidences of the two localizations, and processing the first-moment coverage area and the second-moment coverage area according to the confidences;
    S3: interpolating the localization poses of the first moment and the second moment, then constructing a closed figure from the localization poses of the first moment and the second moment, the interpolation, and the processed first-moment and second-moment coverage areas;
    S4: obtaining the grid cells occupied by the closed figure on the grid map, then revising the obstacle marking according to the cells occupied by the closed figure on the grid map and the area of the closed figure.
  2. The method for a robot to improve obstacle-marking accuracy according to claim 1, characterized in that in step S2, delimiting the first-moment coverage area and the second-moment coverage area from the localization positions of the first moment and the second moment comprises the following steps: obtaining the positions of the two localizations on the grid map, then drawing a circle around each localization position, with the position as the centre and a set value as the radius, to obtain the first-moment coverage area and the second-moment coverage area.
  3. The method for a robot to improve obstacle-marking accuracy according to claim 1, characterized in that in step S2, obtaining the confidences of the two localizations comprises the following steps:
    A1: obtaining the point cloud participating in laser localization, randomly selecting one point, and delimiting a grid region based on that point's position on the probability grid map;
    A2: computing the point's probability value on the probability grid map from the information of the grid region, and repeating steps A1 and A2 until the probability values on the probability grid map have been obtained for all points participating in laser localization;
    A3: obtaining the detection distances of all points participating in laser localization acquired by the lidar, then filtering the points to obtain the number of filtered points;
    A4: obtaining a probability-weighted average from the probability values on the probability grid map and the detection distances of all points participating in laser localization;
    A5: obtaining the confidence of this localization from the probability-weighted average, the number of all points participating in laser localization, the filtered point count, and the set number of points participating in laser localization.
  4. The method for a robot to improve obstacle-marking accuracy according to claim 1, characterized in that in step A1, delimiting a grid region based on the point's position on the probability grid map comprises the following steps:
    obtaining the point's position on the probability grid map, then finding the grid intersection on the probability grid map nearest to that position;
    then, centred on the grid intersection, delimiting a grid region of N*N cells on the probability grid map, where N is a positive integer.
  5. The method for a robot to improve obstacle-marking accuracy according to claim 4, characterized in that computing the point's probability value on the probability grid map from the information of the grid region uses bicubic interpolation, comprising the following steps:
    obtaining the distance between each cell of the grid region and said point, then obtaining the corresponding row and column coefficients of the grid region from that distance;
    obtaining the weight of each cell from the row and column coefficients, then obtaining the pixel value of said point from the weights by a summation formula, then obtaining the probability value corresponding to the pixel value.
  6. The method for a robot to improve obstacle-marking accuracy according to claim 4, characterized in that processing the first-moment coverage area and the second-moment coverage area according to the confidences comprises the following steps:
    obtaining, from the confidences of the two localizations, error distances negatively correlated with the confidences, then comparing the error distances of the two localizations;
    taking the largest error distance of the two localizations, then shrinking the first-moment coverage area and the second-moment coverage area inward uniformly by the maximum error distance.
  7. The method for a robot to improve obstacle-marking accuracy according to claim 1, characterized in that in step S3, interpolating the localization poses of the first moment and the second moment comprises the following step: inserting an intermediate pose between the localization poses of the first moment and the second moment.
  8. The method for a robot to improve obstacle-marking accuracy according to claim 7, characterized in that in step S3, constructing a closed figure from the localization poses of the first moment and the second moment, the interpolation, and the processed first-moment and second-moment coverage areas comprises the following steps:
    drawing, through the localization position of the first moment, a line perpendicular to the robot's forward direction, the line intersecting the edge of the processed first-moment coverage area at two points, to obtain a first distance segment with the two intersection points as endpoints;
    drawing, through the localization position of the second moment, a line perpendicular to the robot's forward direction, the line intersecting the edge of the processed second-moment coverage area at two points, to obtain a second distance segment with the two intersection points as endpoints;
    drawing, through the position of the intermediate pose on the grid map, a line perpendicular to the robot's forward direction, then obtaining a third distance segment from the first distance segment or the second distance segment;
    connecting the endpoints of the first, second, and third distance segments with the corresponding edges of the processed first-moment and second-moment coverage areas, the figure of greatest area so obtained being the closed figure;
    wherein the localization pose includes the robot's forward heading at the current localization position.
  9. The method for a robot to improve obstacle-marking accuracy according to claim 1, characterized in that in step S4, revising the obstacle marking according to the cells occupied by the closed figure on the grid map and the area of the closed figure comprises the following steps:
    first obtaining the cells occupied by the closed figure on the grid map and the area of the closed figure;
    then obtaining the intersection area between each cell occupied by the closed figure on the grid map and the closed figure;
    if the intersection area is greater than a set threshold and the cell carries an obstacle marking, deleting the obstacle marking.
  10. The method for a robot to improve obstacle-marking accuracy according to claim 9, characterized in that obtaining the intersection area between a cell occupied by the closed figure on the grid map and the closed figure comprises the following steps:
    first obtaining the area of each cell, then obtaining the position of the closed figure's edges within each cell, so as to identify the figure lying inside the closed figure and bounded by the closed figure's edges and the cell's edges;
    splitting the figure lying inside the closed figure and bounded by the closed figure's edges and the cell's edges into several quadrilaterals, and obtaining the area of each quadrilateral;
    summing the areas of the quadrilaterals to obtain the area of the figure lying inside the closed figure and bounded by the closed figure's edges and the cell's edges.
PCT/CN2022/130463 2021-11-22 2022-11-08 A method for a robot to improve obstacle-marking accuracy WO2023088125A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111382649.0A CN116148879B (zh) 2021-11-22 2021-11-22 A method for a robot to improve obstacle-marking accuracy
CN202111382649.0 2021-11-22

Publications (1)

Publication Number Publication Date
WO2023088125A1 (zh) 2023-05-25


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110260856A (zh) * 2019-06-26 2019-09-20 北京海益同展信息科技有限公司 一种建图方法和装置
CN110530368A (zh) * 2019-08-22 2019-12-03 浙江大华技术股份有限公司 一种机器人定位方法及设备
CN110858076A (zh) * 2018-08-22 2020-03-03 杭州海康机器人技术有限公司 一种设备定位、栅格地图构建方法及移动机器人
US20200409387A1 (en) * 2017-12-05 2020-12-31 Sony Corporation Image processing apparatus, image processing method, and program
CN112180910A (zh) * 2019-06-18 2021-01-05 北京京东尚科信息技术有限公司 一种移动机器人障碍物感知方法和装置
CN112967283A (zh) * 2021-04-22 2021-06-15 上海西井信息科技有限公司 基于双目摄像头的目标识别方法、系统、设备及存储介质
CN113284240A (zh) * 2021-06-18 2021-08-20 深圳市商汤科技有限公司 地图构建方法及装置、电子设备和存储介质
CN113510703A (zh) * 2021-06-25 2021-10-19 深圳市优必选科技股份有限公司 机器人位姿的确定方法、装置、机器人及存储介质

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1194946A (ja) * 1997-07-23 1999-04-09 Denso Corp 車両用障害物認識装置
US11314262B2 (en) * 2016-08-29 2022-04-26 Trifo, Inc. Autonomous platform guidance systems with task planning and obstacle avoidance
CN110968083B (zh) * 2018-09-30 2023-02-10 科沃斯机器人股份有限公司 栅格地图的构建方法、避障的方法、设备及介质
WO2020258218A1 (zh) * 2019-06-28 2020-12-30 深圳市大疆创新科技有限公司 可移动平台的障碍物检测方法、装置及可移动平台
CN110889808B (zh) * 2019-11-21 2023-02-28 广州文远知行科技有限公司 一种定位的方法、装置、设备及存储介质
CN111461245B (zh) * 2020-04-09 2022-11-04 武汉大学 一种融合点云和图像的轮式机器人语义建图方法及系统
CN113589306B (zh) * 2020-04-30 2023-04-11 北京猎户星空科技有限公司 定位方法、装置、电子设备及存储介质
CN116148818A (zh) * 2021-11-22 2023-05-23 珠海一微半导体股份有限公司 一种激光定位置信度的评估方法、芯片和机器人

Also Published As

Publication number Publication date
CN116148879A (zh) 2023-05-23
CN116148879B (zh) 2024-05-03


Legal Events

Date Code Title Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22894662; Country of ref document: EP; Kind code of ref document: A1)
WWE   WIPO information: entry into national phase (Ref document number: 2022894662; Country of ref document: EP)
NENP  Non-entry into the national phase (Ref country code: DE)
ENP   Entry into the national phase (Ref document number: 2022894662; Country of ref document: EP; Effective date: 20240624)