WO2022110473A1 - Robot mapping method and device, computer readable storage medium, and robot - Google Patents

Robot mapping method and device, computer readable storage medium, and robot

Info

Publication number
WO2022110473A1
WO2022110473A1 · PCT/CN2020/140430 · CN2020140430W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
point
robot
pose
rotation angle
Prior art date
Application number
PCT/CN2020/140430
Other languages
French (fr)
Chinese (zh)
Inventor
He Wanjun (何婉君)
Liu Zhichao (刘志超)
Original Assignee
Shenzhen UBTech Robotics Corp., Ltd. (深圳市优必选科技股份有限公司)
Priority date
Filing date
Publication date
Application filed by Shenzhen UBTech Robotics Corp., Ltd. (深圳市优必选科技股份有限公司)
Publication of WO2022110473A1 publication Critical patent/WO2022110473A1/en

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 — Navigation; Navigational instruments not provided for in groups G01C 1/00-G01C 19/00
    • G01C 21/26 — Navigation; Navigational instruments not provided for in groups G01C 1/00-G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/28 — Navigation; Navigational instruments not provided for in groups G01C 1/00-G01C 19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 — Map- or contour-matching
    • G01C 21/32 — Structuring or formatting of map data
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/23 — Clustering techniques
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present application belongs to the field of robotics, and in particular relates to a robot mapping method and device, a computer-readable storage medium, and a robot.
  • if the point cloud collected by the lidar is used directly without processing, dynamic-object point clouds are easily built into the map, so that map matching during mapping is poor, the robot pose estimate has a large deviation, loop-closure quality and efficiency are low, the constructed map has lower accuracy, and the accuracy of subsequent positioning and navigation is affected.
  • the embodiments of the present application provide a robot mapping method and device, a computer-readable storage medium, and a robot, so as to solve the problem that existing robot mapping methods easily build dynamic-object point clouds into the map, resulting in a constructed map of low accuracy.
  • a first aspect of the embodiments of the present application provides a method for building a robot map, which may include:
  • acquiring a first point cloud and a first pose collected by the robot at the current moment, and a second point cloud and a second pose collected at a historical moment; calculating the pose difference between the first pose and the second pose; projecting the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud; filtering out the dynamic object point cloud in the first point cloud according to the third point cloud; and removing the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, the fourth point cloud being used for robot mapping.
  • the filtering out the dynamic object point cloud in the first point cloud according to the third point cloud may include:
  • screening out each candidate point from the first point cloud according to the third point cloud; clustering the screened candidate points to obtain each candidate point set; calculating the number of points in each candidate point set, the first variance of the point coordinates in the main direction, and the second variance of the point coordinates in the normal direction of the main direction; and selecting a preferred point set from each candidate point set according to the number of points, the first variance and the second variance;
  • the dynamic object point cloud that meets the preset condition is then screened out from the preferred point set.
  • selecting each candidate point from the first point cloud according to the third point cloud may include:
  • determining the corresponding point of the target point in the third point cloud, where the target point is any point in the first point cloud and the corresponding point is the point with the smallest distance from the target point;
  • calculating the distance between the target point and the corresponding point, and the distance between the two points in the normal direction of the line connecting the target point and the robot; if the former is greater than a preset first distance threshold, or the latter is greater than a preset second distance threshold, the target point is determined as a candidate point.
  • after acquiring the first point cloud and the first pose collected by the robot at the current moment, the method may also include:
  • performing de-distortion processing on the first point cloud to obtain a de-distorted first point cloud, and performing downsampling processing on the de-distorted first point cloud to obtain a downsampled first point cloud;
  • the performing de-distortion processing on the first point cloud may include:
  • the first rotation angle is the rotation angle when the first data point of the first point cloud is collected
  • the second rotation angle is the rotation angle when the last data point of the first point cloud is collected
  • the first point cloud is de-distorted according to the first rotation angle, the second rotation angle and the angle difference, so as to obtain a de-distorted first point cloud.
  • performing the de-distortion processing on the first point cloud according to the first rotation angle, the second rotation angle and the angle difference may include:
  • the first point cloud is dedistorted according to the following formula:
  • where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of the point after de-distortion processing.
  • the projecting the second point cloud to the coordinates of the first point cloud according to the pose difference may include:
  • the second point cloud is projected onto the coordinates of the first point cloud according to the following formula:
  • where (dx, dy, dθ) is the pose difference, (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the coordinates of the point after projection.
  • a second aspect of the embodiments of the present application provides a robotic mapping device, which may include:
  • the data acquisition module is used to acquire the first point cloud and the first pose collected by the robot at the current moment, and the second point cloud and the second pose collected at the historical moment;
  • a pose difference calculation module for calculating the pose difference between the first pose and the second pose
  • a point cloud projection module configured to project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud
  • a dynamic object point cloud screening module configured to filter out the dynamic object point cloud in the first point cloud according to the third point cloud
  • the mapping module is used to remove the dynamic object point cloud from the first point cloud, obtain a fourth point cloud, and use the fourth point cloud to perform robot mapping.
  • the dynamic object point cloud screening module may include:
  • a candidate point screening submodule configured to screen out each candidate point from the first point cloud according to the third point cloud
  • the clustering sub-module is used to cluster the selected candidate points to obtain each candidate point set;
  • the calculation submodule is used to calculate the number of points in each candidate point set, the first variance of the point coordinates in the main direction, and the second variance of the point coordinates in the normal direction of the main direction;
  • the preferred point set screening submodule is configured to filter out the preferred point set from each candidate point set according to the number of points, the first variance and the second variance;
  • the dynamic object point cloud screening sub-module is used to screen out the dynamic object point cloud that meets the preset conditions from the preferred point set.
  • candidate point screening sub-module may include:
  • a corresponding point determination unit, configured to determine the corresponding point of the target point in the third point cloud, where the target point is any point in the first point cloud and the corresponding point is the point with the smallest distance from the target point;
  • a distance calculation unit for calculating the distance between the target point and the corresponding point, and calculating the distance between the target point and the corresponding point in the normal direction of the connecting line between the target point and the robot;
  • the candidate point screening unit is configured to determine the target point as a candidate point if the distance between the target point and the corresponding point is greater than a preset first distance threshold, or if the distance between the target point and the corresponding point in the normal direction of the line connecting the target point and the robot is greater than a preset second distance threshold.
  • the robot mapping device may also include:
  • a de-distortion processing module configured to perform de-distortion processing on the first point cloud to obtain the de-distorted first point cloud
  • the downsampling processing module is configured to perform downsampling processing on the dedistorted first point cloud to obtain the downsampled first point cloud.
  • the de-distortion processing module may include:
  • the rotation angle acquisition sub-module is used to acquire the first rotation angle and the second rotation angle of the lidar of the robot, where the first rotation angle is the rotation angle when the first data point of the first point cloud is collected and the second rotation angle is the rotation angle when the last data point of the first point cloud is collected;
  • an angle difference acquisition sub-module configured to acquire the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar
  • a de-distortion processing sub-module configured to perform de-distortion processing on the first point cloud according to the first rotation angle, the second rotation angle and the angle difference, to obtain a de-distorted first point cloud.
  • de-distortion processing sub-module is specifically configured to perform de-distortion processing on the first point cloud according to the following formula:
  • where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of the point after de-distortion processing.
  • the point cloud projection module is specifically configured to project the second point cloud onto the coordinates of the first point cloud according to the following formula:
  • where (dx, dy, dθ) is the pose difference, (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the coordinates of the point after projection.
  • a third aspect of the embodiments of the present application provides a computer-readable storage medium in which a computer program is stored, and when the computer program is executed by a processor, the steps of any one of the above robot mapping methods are implemented.
  • a fourth aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the above robot mapping methods when executing the computer program.
  • a fifth aspect of the embodiments of the present application provides a computer program product, which, when the computer program product runs on a robot, causes the robot to execute the steps of any one of the above robot mapping methods.
  • compared with the prior art, the beneficial effects of the embodiments of the present application are as follows: the embodiments acquire the first point cloud and the first pose collected by the robot at the current moment, and the second point cloud and the second pose collected at the historical moment; calculate the pose difference between the first pose and the second pose; project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain the third point cloud; filter out the dynamic object point cloud in the first point cloud according to the third point cloud; and remove the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, which is used for robot mapping.
  • the influence of the point cloud of dynamic objects can be eliminated from the current point cloud data according to the historical point cloud data, thereby greatly improving the mapping accuracy.
  • FIG. 1 is a flowchart of an embodiment of a robot mapping method in an embodiment of the present application
  • Fig. 2 is a schematic flowchart of filtering out the dynamic object point cloud in the first point cloud according to the third point cloud;
  • FIG. 3 is a structural diagram of an embodiment of a robot mapping device in an embodiment of the application.
  • FIG. 4 is a schematic block diagram of a robot in an embodiment of the present application.
  • the term “if” may be contextually interpreted as “when” or “once” or “in response to determining” or “in response to detecting” .
  • the phrases “if it is determined” or “if the [described condition or event] is detected” may be interpreted, depending on the context, to mean “once it is determined”, “in response to the determination”, “once the [described condition or event] is detected”, or “in response to detection of the [described condition or event]”.
  • an embodiment of a robot mapping method in the embodiment of the present application may include:
  • Step S101 acquiring a first point cloud and a first pose collected by the robot at the current moment, and a second point cloud and a second pose collected at a historical moment.
  • the point cloud data can be collected through the lidar of the robot, and the pose data can be collected through the wheel odometer of the robot.
  • the frequency of data collection can be set according to the actual situation. For example, the frequency of data collection can be set to 10Hz, that is, data collection is performed every 0.1 seconds.
  • the point cloud data and pose data collected at the current moment are recorded as the first point cloud and the first pose.
  • preprocessing processes such as de-distortion and downsampling may also be performed on the first point cloud, thereby further improving the accuracy of the data.
  • the first rotation angle and the second rotation angle of the lidar may be obtained first, where the first rotation angle is the rotation angle when the first data point of the first point cloud is collected and the second rotation angle is the rotation angle when the last data point of the first point cloud is collected; then, the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar is obtained; finally, de-distortion processing is performed on the first point cloud according to the first rotation angle, the second rotation angle and the angle difference, to obtain a de-distorted first point cloud.
  • the first point cloud can be dedistorted according to the following formula:
  • where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of the point after de-distortion processing.
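The de-distortion formula itself is reproduced only as an image in the original publication and is not available in the text. As an illustration of how such a correction typically works — assuming each point is rotated by the share angleH/2π of the total sweep rotation θ′ − θ, with `dedistort_point` a hypothetical helper name rather than the published formula — a sketch might look like:

```python
import math

def dedistort_point(p, theta, theta_prime, angle_h):
    """Rotate one raw point by an interpolated share of the sweep rotation.

    p           -- (px, py, pz), raw point coordinates
    theta       -- lidar rotation angle at the first data point (rad)
    theta_prime -- lidar rotation angle at the last data point (rad)
    angle_h     -- angle between this point and the lidar's horizontal
                   positive direction (rad); assumed to locate the point
                   within the sweep
    """
    # Fraction of the sweep at which this point was measured (assumption).
    frac = angle_h / (2.0 * math.pi)
    # Rotation accumulated by the lidar up to that fraction of the sweep.
    delta = frac * (theta_prime - theta)
    px, py, pz = p
    c, s = math.cos(delta), math.sin(delta)
    # Planar rotation; z is unaffected for a robot moving in the plane.
    return (c * px - s * py, s * px + c * py, pz)
```

Applying this to every point of the scan would yield the de-distorted first point cloud under the stated assumption.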
  • the voxel grid downsampling method can be used to divide the dedistorted first point cloud into several cube grids of fixed size.
  • the coordinate centroid of the points in each grid is used as that grid's point coordinate after downsampling, namely px″ = (1/N)·Σₙ px′(n), py″ = (1/N)·Σₙ py′(n), pz″ = (1/N)·Σₙ pz′(n), where (px′(n), py′(n), pz′(n)) are the coordinates of the nth point in the grid, 1 ≤ n ≤ N, N is the total number of points in the grid, and (px″, py″, pz″) are the point coordinates obtained after downsampling the grid. Each grid is thus downsampled to a single point, and traversing all grids yields the downsampled first point cloud.
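The voxel-grid centroid downsampling described above can be sketched with the standard library; the function name and the cube-indexing scheme are illustrative, not taken from the publication:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: all points falling into the same
    fixed-size cube are replaced by their coordinate centroid."""
    grids = defaultdict(list)
    for px, py, pz in points:
        # Index of the cube this point falls into.
        key = (int(px // voxel_size), int(py // voxel_size), int(pz // voxel_size))
        grids[key].append((px, py, pz))
    out = []
    for pts in grids.values():
        n = len(pts)
        # Centroid of the cube's points becomes its single output point.
        out.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n))
    return out
```

Each grid cell contributes exactly one point, so the output size is the number of occupied cells.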
  • the influence of dynamic-object point clouds is removed from the current point cloud data according to historical point cloud data, so the point cloud data and pose data collected at a historical moment also need to be obtained. The historical moment can be set according to the actual situation; in this embodiment of the present application, it is preferably set to the time 1 second before the current moment.
  • the point cloud data and pose data collected at the historical moment are recorded here as the second point cloud and the second pose. It is easy to understand that, after the second point cloud is acquired, preprocessing such as de-distortion and downsampling can also be performed on it; the specific process is similar to the preprocessing of the first point cloud and is not repeated here.
  • the first point cloud and the second point cloud mentioned in the following are the results obtained after preprocessing.
  • Step S102 Calculate the pose difference between the first pose and the second pose.
  • the first pose is recorded as (x1, y1, θ1) and the second pose as (x2, y2, θ2).
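The publication computes the pose difference from these two poses, but the formula itself is not reproduced in the text. One common reading — a component-wise difference with the heading wrapped to (−π, π], which is an assumption rather than the published formula — can be sketched as:

```python
import math

def pose_difference(pose1, pose2):
    """Difference between the current pose (x1, y1, theta1) and the
    historical pose (x2, y2, theta2); the angle component is wrapped
    to (-pi, pi] via atan2. Plain subtraction is an assumption here."""
    x1, y1, t1 = pose1
    x2, y2, t2 = pose2
    dtheta = math.atan2(math.sin(t1 - t2), math.cos(t1 - t2))
    return (x1 - x2, y1 - y2, dtheta)
```

The atan2 wrap keeps the heading difference well-behaved when the two angles straddle ±π.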
  • Step S103 Project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud.
  • the second point cloud can be projected onto the coordinates of the first point cloud according to the following formula:
  • (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud
  • (px_new, py_new, pz_new) are the projected coordinates of the point. Traversing all the points in the second point cloud according to the above formula, the projected third point cloud can be obtained.
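The projection formula is likewise reproduced as an image in the original publication. A standard SE(2) transform — assuming the pose difference (dx, dy, dθ) expresses the historical frame's origin and heading in the current frame, which is an assumption about the published convention — looks like:

```python
import math

def project_point(p_old, dpose):
    """Project one point from the historical scan frame into the current
    scan frame: rotate by dtheta, then translate by (dx, dy). The sign
    convention for dpose is assumed, not taken from the publication."""
    dx, dy, dtheta = dpose
    px, py, pz = p_old
    c, s = math.cos(dtheta), math.sin(dtheta)
    return (c * px - s * py + dx, s * px + c * py + dy, pz)
```

Traversing all points of the second point cloud through such a transform yields the third point cloud.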
  • Step S104 Screen out the point cloud of dynamic objects in the first point cloud according to the third point cloud.
  • step S104 may specifically include the following processes:
  • Step S1041 Screen out each candidate point from the first point cloud according to the third point cloud.
  • for any target point in the first point cloud, the point with the smallest distance from the target point can be determined in the third point cloud and used as the corresponding point of the target point; the target point and the corresponding point constitute a point pair.
  • a KD tree may be used to determine the point with the smallest distance from the target point.
  • the distance between the target point and the corresponding point may then be calculated, together with the distance between the two points in the normal direction of the line connecting the target point and the robot. If the former is greater than a preset first distance threshold, or the latter is greater than a preset second distance threshold, the target point may be determined as a candidate point.
  • the specific values of the first distance threshold and the second distance threshold may be set according to actual conditions, which are not specifically limited in this embodiment of the present application. Traversing all the points in the first point cloud according to the above method, each candidate point can be screened out.
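The candidate screening above can be sketched as follows. A KD tree (e.g. scipy.spatial.cKDTree) would normally replace the linear nearest-neighbour search, and the function name, robot position argument, and thresholds are all illustrative:

```python
import math

def screen_candidates(first_cloud, third_cloud, robot_xy, d1, d2):
    """Keep a target point as a candidate if its nearest corresponding
    point in the third cloud is farther than d1, or if the pair's offset
    measured along the normal of the robot-to-target line exceeds d2."""
    candidates = []
    for tx, ty, tz in first_cloud:
        # Nearest corresponding point (linear scan stands in for a KD tree).
        cx, cy, cz = min(third_cloud,
                         key=lambda q: (q[0] - tx) ** 2 + (q[1] - ty) ** 2 + (q[2] - tz) ** 2)
        dist = math.dist((tx, ty, tz), (cx, cy, cz))
        # Unit normal of the line connecting the robot to the target point.
        rx, ry = tx - robot_xy[0], ty - robot_xy[1]
        norm = math.hypot(rx, ry) or 1.0
        nx, ny = -ry / norm, rx / norm
        normal_dist = abs((tx - cx) * nx + (ty - cy) * ny)
        if dist > d1 or normal_dist > d2:
            candidates.append((tx, ty, tz))
    return candidates
```

Points that moved between the two scans tend to fail one of the two tests and so survive as candidates.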
  • Step S1042 cluster each of the selected candidate points to obtain each candidate point set.
  • any clustering method in the prior art may be selected according to the actual situation to perform clustering, which is not specifically limited in this embodiment of the present application.
  • the Euclidean distance may be used for the segmentation metric.
  • during clustering, if the distance between the current point and the previous point is within the preset threshold range, the current point is clustered into the previous point's category; otherwise, the current point starts a new clustering category. The same distance test then decides whether the next point belongs to the same category, and the process repeats until all points are divided into categories; the points of each cluster category constitute a candidate point set.
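The sequential distance-based clustering just described can be sketched as (function name illustrative):

```python
import math

def sequential_cluster(points, threshold):
    """Sequential Euclidean clustering: a point joins the previous
    point's cluster when their distance is within the threshold,
    otherwise it starts a new cluster."""
    clusters = []
    for p in points:
        if clusters and math.dist(p, clusters[-1][-1]) <= threshold:
            clusters[-1].append(p)  # same category as the previous point
        else:
            clusters.append([p])    # start a new clustering category
    return clusters
```

This assumes the candidate points arrive in scan order, so neighbouring points of one object are adjacent in the sequence.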
  • Step S1043 Calculate the number of points in each candidate point set, the first variance of the point coordinates in the main direction, and the second variance of the point coordinates in the normal direction of the main direction.
  • the main direction of any candidate point set is the direction corresponding to the mean angle
  • the mean angle is the mean value of the rotation angles corresponding to each point in the candidate point set.
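Computing the two variances of a candidate point set — along the main direction given by the mean rotation angle, and along the normal of that direction — might look like the following sketch (names and the planar projection are illustrative):

```python
import math

def direction_variances(points, angles):
    """Variance of 2D point coordinates along the set's main direction
    (the direction of the mean rotation angle) and along its normal."""
    mean_angle = sum(angles) / len(angles)
    ux, uy = math.cos(mean_angle), math.sin(mean_angle)   # main direction
    nx, ny = -uy, ux                                      # its normal
    proj_u = [px * ux + py * uy for px, py in points]
    proj_n = [px * nx + py * ny for px, py in points]

    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    # First variance (main direction) and second variance (normal direction).
    return var(proj_u), var(proj_n)
```

An elongated cluster (e.g. a wall fragment) has a large first variance and a small second one; the ratio of the two helps separate such structure from compact dynamic objects.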
  • Step S1044 Screen out a preferred point set from each candidate point set according to the number of points, the first variance, and the second variance.
  • if the number of points in a candidate point set is greater than a preset number threshold and the ratio of the first variance to the second variance is greater than a preset ratio threshold, the candidate point set is determined as a preferred point set.
  • the specific values of the number threshold and the ratio threshold may be set according to actual conditions, which are not specifically limited in this embodiment of the present application.
  • Step S1045 Screen out the dynamic object point cloud that meets the preset condition from the preferred point set.
  • a KD tree can be used to find, among the points of the preferred point set, those whose distance to the point cloud at the current moment is less than a preset third distance threshold, and these found points are marked as the dynamic object point cloud.
  • the specific value of the third distance threshold may be set according to the actual situation, which is not specifically limited in this embodiment of the present application.
  • all points in the preferred point set may also be marked as dynamic object point clouds.
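The marking step above — keeping preferred points that lie within the third distance threshold of the current scan — can be sketched as follows, with a linear search standing in for the KD tree and all names illustrative:

```python
import math

def mark_dynamic(preferred, current_cloud, d3):
    """Mark preferred points whose nearest current-scan point lies
    within the third distance threshold as dynamic-object points."""
    return [p for p in preferred
            if min(math.dist(p, q) for q in current_cloud) < d3]
```

Removing the returned points from the first point cloud then yields the fourth point cloud used for mapping.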
  • Step S105 remove the dynamic object point cloud from the first point cloud, obtain a fourth point cloud, and use the fourth point cloud to build a robot map.
  • the embodiment of the present application acquires the first point cloud and the first pose collected by the robot at the current moment, and the second point cloud and the second pose collected at the historical moment; calculates the pose difference between the first pose and the second pose; projects the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud; filters out the dynamic object point cloud in the first point cloud according to the third point cloud; and removes the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, which is used for robot mapping.
  • the influence of the point cloud of dynamic objects can be eliminated from the current point cloud data according to the historical point cloud data, thereby greatly improving the mapping accuracy.
  • FIG. 3 shows a structural diagram of an embodiment of a robot mapping apparatus provided by an embodiment of the present application.
  • a robot mapping device may include:
  • the data acquisition module 301 is used to acquire the first point cloud and the first pose collected by the robot at the current moment, and the second point cloud and the second pose collected at the historical moment;
  • a pose difference calculation module 302 configured to calculate the pose difference between the first pose and the second pose
  • a point cloud projection module 303 configured to project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud;
  • a dynamic object point cloud screening module 304 configured to filter out the dynamic object point cloud in the first point cloud according to the third point cloud;
  • the mapping module 305 is configured to remove the dynamic object point cloud from the first point cloud, obtain a fourth point cloud, and use the fourth point cloud to construct a robot map.
  • the dynamic object point cloud screening module may include:
  • a candidate point screening submodule configured to screen out each candidate point from the first point cloud according to the third point cloud
  • the clustering sub-module is used to cluster the selected candidate points to obtain each candidate point set;
  • the calculation submodule is used to calculate the number of points in each candidate point set, the first variance of the point coordinates in the main direction, and the second variance of the point coordinates in the normal direction of the main direction;
  • the preferred point set screening submodule is configured to filter out the preferred point set from each candidate point set according to the number of points, the first variance and the second variance;
  • the dynamic object point cloud screening sub-module is used to screen out the dynamic object point cloud that meets the preset conditions from the preferred point set.
  • candidate point screening sub-module may include:
  • a corresponding point determination unit, configured to determine the corresponding point of the target point in the third point cloud, where the target point is any point in the first point cloud and the corresponding point is the point with the smallest distance from the target point;
  • a distance calculation unit configured to calculate the distance between the target point and the corresponding point, and calculate the distance between the target point and the corresponding point in the normal direction of the line connecting the target point and the robot;
  • the candidate point screening unit is configured to determine the target point as a candidate point if the distance between the target point and the corresponding point is greater than a preset first distance threshold, or if the distance between the target point and the corresponding point in the normal direction of the line connecting the target point and the robot is greater than a preset second distance threshold.
  • the robot mapping device may also include:
  • a de-distortion processing module configured to perform de-distortion processing on the first point cloud to obtain the de-distorted first point cloud
  • the downsampling processing module is configured to perform downsampling processing on the dedistorted first point cloud to obtain the downsampled first point cloud.
  • the de-distortion processing module may include:
  • the rotation angle acquisition sub-module is used to acquire the first rotation angle and the second rotation angle of the lidar of the robot, where the first rotation angle is the rotation angle when the first data point of the first point cloud is collected and the second rotation angle is the rotation angle when the last data point of the first point cloud is collected;
  • an angle difference acquisition sub-module configured to acquire the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar
  • a de-distortion processing sub-module configured to perform de-distortion processing on the first point cloud according to the first rotation angle, the second rotation angle and the angle difference, to obtain a de-distorted first point cloud.
  • de-distortion processing sub-module is specifically configured to perform de-distortion processing on the first point cloud according to the following formula:
  • where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of the point after de-distortion processing.
  • the point cloud projection module is specifically configured to project the second point cloud onto the coordinates of the first point cloud according to the following formula:
  • where (dx, dy, dθ) is the pose difference, (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the coordinates of the point after projection.
  • FIG. 4 shows a schematic block diagram of a robot provided by an embodiment of the present application. For convenience of description, only parts related to the embodiment of the present application are shown.
  • the robot 4 of this embodiment includes a processor 40 , a memory 41 , and a computer program 42 stored in the memory 41 and executable on the processor 40 .
  • when the processor 40 executes the computer program 42, the steps in each of the above robot mapping method embodiments are implemented, for example, steps S101 to S105 shown in FIG. 1; alternatively, the functions of the modules/units in each of the foregoing apparatus embodiments, such as the functions of the modules 301 to 305 shown in FIG. 3, are implemented.
  • the computer program 42 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 41 and executed by the processor 40 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 42 in the robot 4 .
  • FIG. 4 is only an example of the robot 4 and does not constitute a limitation on the robot 4; the robot may include more or fewer components than shown in the figure, combine some components, or use different components. For example, the robot 4 may also include input and output devices, network access devices, buses, and the like.
  • the processor 40 may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 41 may be an internal storage unit of the robot 4 , such as a hard disk or a memory of the robot 4 .
  • the memory 41 can also be an external storage device of the robot 4, such as a plug-in hard disk equipped on the robot 4, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, Flash card (Flash Card) and so on.
  • the memory 41 may also include both an internal storage unit of the robot 4 and an external storage device.
  • the memory 41 is used to store the computer program and other programs and data required by the robot 4 .
  • the memory 41 can also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/robot and method may be implemented in other ways.
  • the device/robot embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the present application may implement all or part of the processes in the methods of the above embodiments, which can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunication signals, software distribution media, etc. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable storage media exclude electrical carrier signals and telecommunication signals.

Abstract

A robot mapping method and device, a computer readable storage medium, and a robot. The method comprises: obtaining a first point cloud and a first pose acquired by a robot at the current moment, and a second point cloud and a second pose acquired at a historical moment (S101); calculating a pose difference between the first pose and the second pose (S102); projecting the second point cloud to coordinates of the first point cloud according to the pose difference to obtain a third point cloud (S103); screening out a dynamic object point cloud in the first point cloud according to the third point cloud (S104); and removing the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, and performing robot mapping by using the fourth point cloud (S105). By means of the method, the influence of the dynamic object point cloud can be eliminated from the current point cloud data according to historical point cloud data, so that the precision of mapping is greatly improved.

Description

Robot Mapping Method and Device, Computer-Readable Storage Medium, and Robot
This application claims priority to Chinese Patent Application No. 202011330699.X, filed with the Chinese Patent Office on November 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application belongs to the field of robotics, and in particular relates to a robot mapping method and device, a computer-readable storage medium, and a robot.
Background Art
In the process of building a map, a robot's surrounding environment usually contains various dynamic objects, such as cars and pedestrians. If the point cloud collected by the lidar is used directly without processing, the dynamic object point cloud is easily built into the map, resulting in poor map matching during mapping, large deviations in the robot's pose estimation, and low loop-closure quality and efficiency. The constructed map therefore has low accuracy, which in turn affects the accuracy of subsequent localization and navigation.
Technical Problem
In view of this, the embodiments of the present application provide a robot mapping method and device, a computer-readable storage medium, and a robot, so as to solve the problem that existing robot mapping methods easily build dynamic object point clouds into the map, resulting in maps of low accuracy.
Technical Solutions
A first aspect of the embodiments of the present application provides a robot mapping method, which may include:
obtaining a first point cloud and a first pose collected by a robot at the current moment, and a second point cloud and a second pose collected at a historical moment;
calculating a pose difference between the first pose and the second pose;
projecting the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud;
filtering out a dynamic object point cloud in the first point cloud according to the third point cloud;
removing the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, and using the fourth point cloud for robot mapping.
Further, the filtering out the dynamic object point cloud in the first point cloud according to the third point cloud may include:
screening out candidate points from the first point cloud according to the third point cloud;
clustering the screened-out candidate points to obtain candidate point sets;
calculating, for each candidate point set, the number of points, a first variance of the point coordinates in the principal direction, and a second variance of the point coordinates in the normal of the principal direction;
screening out a preferred point set from the candidate point sets according to the number of points, the first variance, and the second variance;
screening out, from the preferred point set, a dynamic object point cloud that meets a preset condition.
Further, the screening out candidate points from the first point cloud according to the third point cloud may include:
determining, in the third point cloud, a corresponding point of a target point, where the target point is any point in the first point cloud and the corresponding point is the point with the smallest distance to the target point;
calculating the distance between the target point and the corresponding point, and calculating the distance between the target point and the corresponding point in the normal direction of the line connecting the target point and the robot;
if the distance between the target point and the corresponding point is greater than a preset first distance threshold, or the distance between the target point and the corresponding point in the normal direction of the line connecting the target point and the robot is greater than a preset second distance threshold, determining the target point as a candidate point.
Further, after obtaining the first point cloud and the first pose collected by the robot at the current moment, the method may further include:
performing de-distortion processing on the first point cloud to obtain a de-distorted first point cloud;
performing down-sampling processing on the de-distorted first point cloud to obtain a down-sampled first point cloud.
Further, the performing de-distortion processing on the first point cloud may include:
obtaining a first rotation angle and a second rotation angle of the lidar of the robot, where the first rotation angle is the rotation angle when the first data point of the first point cloud is collected, and the second rotation angle is the rotation angle when the last data point of the first point cloud is collected;
obtaining the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar;
performing de-distortion processing on the first point cloud according to the first rotation angle, the second rotation angle, and the angle difference to obtain a de-distorted first point cloud.
Further, the performing de-distortion processing on the first point cloud according to the first rotation angle, the second rotation angle, and the angle difference may include:
performing de-distortion processing on the first point cloud according to the following formulas:
px′ = cos((θ′-θ)*angleH/2π)*px + sin((θ′-θ)*angleH/2π)*py
py′ = -sin((θ′-θ)*angleH/2π)*px + cos((θ′-θ)*angleH/2π)*py
pz′ = pz
where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of that point after de-distortion.
Further, the projecting the second point cloud onto the coordinates of the first point cloud according to the pose difference may include:
projecting the second point cloud onto the coordinates of the first point cloud according to the following formulas:
px_new = cos(dθ)*px_old + sin(dθ)*py_old - dx
py_new = -sin(dθ)*px_old + cos(dθ)*py_old - dy
pz_new = pz_old
where (dx, dy, dθ) is the pose difference, (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the coordinates of that point after projection.
A second aspect of the embodiments of the present application provides a robot mapping device, which may include:
a data acquisition module, configured to obtain a first point cloud and a first pose collected by a robot at the current moment, and a second point cloud and a second pose collected at a historical moment;
a pose difference calculation module, configured to calculate a pose difference between the first pose and the second pose;
a point cloud projection module, configured to project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud;
a dynamic object point cloud screening module, configured to filter out a dynamic object point cloud in the first point cloud according to the third point cloud;
a mapping module, configured to remove the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, and use the fourth point cloud for robot mapping.
Further, the dynamic object point cloud screening module may include:
a candidate point screening submodule, configured to screen out candidate points from the first point cloud according to the third point cloud;
a clustering submodule, configured to cluster the screened-out candidate points to obtain candidate point sets;
a calculation submodule, configured to calculate, for each candidate point set, the number of points, a first variance of the point coordinates in the principal direction, and a second variance of the point coordinates in the normal of the principal direction;
a preferred point set screening submodule, configured to screen out a preferred point set from the candidate point sets according to the number of points, the first variance, and the second variance;
a dynamic object point cloud screening submodule, configured to screen out, from the preferred point set, a dynamic object point cloud that meets a preset condition.
Further, the candidate point screening submodule may include:
a corresponding point determination unit, configured to determine, in the third point cloud, a corresponding point of a target point, where the target point is any point in the first point cloud and the corresponding point is the point with the smallest distance to the target point;
a distance calculation unit, configured to calculate the distance between the target point and the corresponding point, and calculate the distance between the target point and the corresponding point in the normal direction of the line connecting the target point and the robot;
a candidate point screening unit, configured to determine the target point as a candidate point if the distance between the target point and the corresponding point is greater than a preset first distance threshold, or the distance between the target point and the corresponding point in the normal direction of the line connecting the target point and the robot is greater than a preset second distance threshold.
Further, the robot mapping device may further include:
a de-distortion processing module, configured to perform de-distortion processing on the first point cloud to obtain a de-distorted first point cloud;
a down-sampling processing module, configured to perform down-sampling processing on the de-distorted first point cloud to obtain a down-sampled first point cloud.
Further, the de-distortion processing module may include:
a rotation angle acquisition submodule, configured to obtain a first rotation angle and a second rotation angle of the lidar of the robot, where the first rotation angle is the rotation angle when the first data point of the first point cloud is collected, and the second rotation angle is the rotation angle when the last data point of the first point cloud is collected;
an angle difference acquisition submodule, configured to obtain the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar;
a de-distortion processing submodule, configured to perform de-distortion processing on the first point cloud according to the first rotation angle, the second rotation angle, and the angle difference to obtain a de-distorted first point cloud.
Further, the de-distortion processing submodule is specifically configured to perform de-distortion processing on the first point cloud according to the following formulas:
px′ = cos((θ′-θ)*angleH/2π)*px + sin((θ′-θ)*angleH/2π)*py
py′ = -sin((θ′-θ)*angleH/2π)*px + cos((θ′-θ)*angleH/2π)*py
pz′ = pz
where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of that point after de-distortion.
Further, the point cloud projection module is specifically configured to project the second point cloud onto the coordinates of the first point cloud according to the following formulas:
px_new = cos(dθ)*px_old + sin(dθ)*py_old - dx
py_new = -sin(dθ)*px_old + cos(dθ)*py_old - dy
pz_new = pz_old
where (dx, dy, dθ) is the pose difference, (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the coordinates of that point after projection.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of any one of the above robot mapping methods.
A fourth aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of any one of the above robot mapping methods.
A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a robot, causes the robot to execute the steps of any one of the above robot mapping methods.
Beneficial Effects
Compared with the prior art, the embodiments of the present application have the following beneficial effects: the embodiments of the present application obtain a first point cloud and a first pose collected by a robot at the current moment, and a second point cloud and a second pose collected at a historical moment; calculate a pose difference between the first pose and the second pose; project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud; filter out a dynamic object point cloud in the first point cloud according to the third point cloud; and remove the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, which is then used for robot mapping. Through the embodiments of the present application, the influence of dynamic object point clouds can be removed from the current point cloud data according to historical point cloud data, thereby greatly improving the mapping accuracy.
Description of Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments or the prior art. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without any creative effort.
FIG. 1 is a flowchart of an embodiment of a robot mapping method in an embodiment of the present application;
FIG. 2 is a schematic flowchart of filtering out the dynamic object point cloud in the first point cloud according to the third point cloud;
FIG. 3 is a structural diagram of an embodiment of a robot mapping device in an embodiment of the present application;
FIG. 4 is a schematic block diagram of a robot in an embodiment of the present application.
Embodiments of the Present Invention
In order to make the objects, features, and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the embodiments described below are only some, but not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terminology used in the specification of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined" or "in response to determining" or "once the [described condition or event] is detected" or "in response to detecting the [described condition or event]".
In addition, in the description of the present application, the terms "first", "second", "third", etc. are used only to distinguish the description, and cannot be understood as indicating or implying relative importance.
Referring to FIG. 1, an embodiment of a robot mapping method in the embodiments of the present application may include:
Step S101: obtain a first point cloud and a first pose collected by the robot at the current moment, and a second point cloud and a second pose collected at a historical moment.
In the embodiments of the present application, point cloud data may be collected by the lidar of the robot, and pose data may be collected by the wheel odometer of the robot. The data collection frequency may be set according to the actual situation; for example, it may be set to 10 Hz, i.e., data is collected every 0.1 seconds. For ease of distinction, the point cloud data and pose data collected at the current moment are denoted here as the first point cloud and the first pose.
Preferably, after the first point cloud is obtained, preprocessing such as de-distortion and down-sampling may also be performed on it, thereby further improving the accuracy of the data.
When performing de-distortion processing, the first rotation angle and the second rotation angle of the lidar may first be obtained, where the first rotation angle is the rotation angle when the first data point of the first point cloud is collected, and the second rotation angle is the rotation angle when the last data point of the first point cloud is collected; then, the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar is obtained; finally, de-distortion processing is performed on the first point cloud according to the first rotation angle, the second rotation angle, and the angle difference to obtain a de-distorted first point cloud.
Specifically, the first point cloud may be de-distorted according to the following formulas:
px′ = cos((θ′-θ)*angleH/2π)*px + sin((θ′-θ)*angleH/2π)*py
py′ = -sin((θ′-θ)*angleH/2π)*px + cos((θ′-θ)*angleH/2π)*py
pz′ = pz
where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of that point after de-distortion. Traversing all points in the first point cloud according to the above formulas yields the de-distorted first point cloud.
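As an illustration only, the de-distortion formulas above can be sketched in Python as follows. This is a minimal sketch, not the patented implementation; the names `theta0`, `theta1`, and `angle_h` are assumptions standing in for θ, θ′, and angleH in the text.

```python
import math

def dedistort_point(p, theta0, theta1, angle_h):
    """Rotate one point (px, py, pz) of the first point cloud by the
    interpolated lidar rotation (theta1 - theta0) * angle_h / (2*pi),
    following the de-distortion formulas in the text."""
    px, py, pz = p
    a = (theta1 - theta0) * angle_h / (2 * math.pi)
    px_new = math.cos(a) * px + math.sin(a) * py
    py_new = -math.sin(a) * px + math.cos(a) * py
    return (px_new, py_new, pz)

def dedistort_cloud(cloud, theta0, theta1, angles_h):
    # angles_h[i] is the angle difference between point i and the lidar's
    # horizontal positive direction
    return [dedistort_point(p, theta0, theta1, a)
            for p, a in zip(cloud, angles_h)]
```

A point whose angle difference is zero is left unchanged, since the interpolated rotation for it is zero.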
When performing down-sampling, a voxel grid down-sampling method may be used: the de-distorted first point cloud is divided into a number of cubic grids of fixed size, and for any cubic grid, the centroid of the coordinates of the points in that grid is taken as the down-sampled point coordinate of the grid, namely:
px″ = (px′(1) + px′(2) + … + px′(N)) / N
py″ = (py′(1) + py′(2) + … + py′(N)) / N
pz″ = (pz′(1) + pz′(2) + … + pz′(N)) / N
where (px′(n), py′(n), pz′(n)) are the coordinates of the n-th point in the grid, 1 ≤ n ≤ N, N is the total number of points in the grid, and (px″, py″, pz″) are the point coordinates obtained after down-sampling the grid, i.e., each grid is down-sampled to one point. Traversing all grids yields the down-sampled first point cloud.
In the embodiments of the present application, the influence of dynamic object point clouds is removed from the current point cloud data according to historical point cloud data; therefore, the point cloud data and pose data collected at a historical moment also need to be obtained. The historical moment may be set according to the actual situation; in the embodiments of the present application, it is preferably set to the moment 1 second before the current moment. For ease of distinction, the point cloud data and pose data collected at the historical moment are denoted here as the second point cloud and the second pose. It is easy to understand that, after the second point cloud is obtained, preprocessing such as de-distortion and down-sampling may also be performed on it; the specific process is similar to the preprocessing of the first point cloud and is not repeated here. For ease of description, the first point cloud and the second point cloud mentioned below are the results obtained after preprocessing.
Step S102: calculate the pose difference between the first pose and the second pose.
Denoting the first pose as (x1, y1, θ1) and the second pose as (x2, y2, θ2), the pose difference between the two can be calculated according to the following formulas:
dx = x1 - x2
dy = y1 - y2
dθ = θ1 - θ2
Then (dx, dy, dθ) is the pose difference.
Step S103: project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud.
Specifically, the second point cloud may be projected onto the coordinates of the first point cloud according to the following formulas:
px_new = cos(dθ)*px_old + sin(dθ)*py_old - dx
py_new = -sin(dθ)*px_old + cos(dθ)*py_old - dy
pz_new = pz_old
where (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the projected coordinates of that point. Traversing all points in the second point cloud according to the above formulas yields the projected third point cloud.
步骤S104、根据所述第三点云筛选出所述第一点云中的动态物体点云。Step S104: Screen out the point cloud of dynamic objects in the first point cloud according to the third point cloud.
如图2所示,步骤S104具体可以包括如下过程:As shown in FIG. 2, step S104 may specifically include the following processes:
步骤S1041、根据所述第三点云从所述第一点云中筛选出各个候选点。Step S1041: Screen out each candidate point from the first point cloud according to the third point cloud.
Taking any point in the first point cloud (denoted as the target point) as an example: first, the point in the third point cloud with the smallest distance to the target point may be determined and used as the corresponding point of the target point; the target point and the corresponding point then constitute a point pair. In a specific implementation of the embodiments of the present application, a KD-tree may be used to determine the point with the smallest distance to the target point.
Then, the distance between the target point and the corresponding point may be calculated, as well as the distance between the target point and the corresponding point along the normal of the line connecting the target point and the robot. If the distance between the target point and the corresponding point is greater than a preset first distance threshold, or the distance along the normal of the line connecting the target point and the robot is greater than a preset second distance threshold, the target point may be determined as a candidate point. The specific values of the first distance threshold and the second distance threshold may be set according to actual conditions, and are not specifically limited in the embodiments of the present application. By traversing all points in the first point cloud in the above manner, the candidate points are screened out.
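Step S1041 can be sketched as follows, assuming 2D laser points stored as (x, y, z) tuples. A brute-force nearest-neighbour search stands in for the KD-tree mentioned above, and the function names and threshold parameters are illustrative:

```python
import math

def is_candidate(p, third_cloud, robot_xy, d1, d2):
    """Decide whether point p of the first cloud is a candidate point:
    its nearest neighbour q in the third cloud is farther than d1, or the
    p-to-q offset along the normal of the robot-to-p line exceeds d2."""
    q = min(third_cloud, key=lambda t: math.dist(p[:2], t[:2]))
    dist = math.dist(p[:2], q[:2])
    # Unit normal of the line connecting the robot and the target point.
    rx, ry = p[0] - robot_xy[0], p[1] - robot_xy[1]
    norm = math.hypot(rx, ry) or 1.0
    nx, ny = -ry / norm, rx / norm
    d_normal = abs((q[0] - p[0]) * nx + (q[1] - p[1]) * ny)
    return dist > d1 or d_normal > d2

def screen_candidates(first_cloud, third_cloud, robot_xy, d1, d2):
    """Traverse the first cloud and keep the candidate points."""
    return [p for p in first_cloud
            if is_candidate(p, third_cloud, robot_xy, d1, d2)]
```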
Step S1042: Cluster the screened-out candidate points to obtain candidate point sets.
In the present application, any clustering method in the prior art may be selected according to the actual situation, which is not specifically limited in the embodiments of the present application.
Preferably, in a specific implementation of the embodiments of the present application, the Euclidean distance may be used as the segmentation metric. During clustering, if the distance between the current point and the previous point is within a preset threshold, the current point is assigned to the previous point's cluster; otherwise, the current point starts a new cluster, and the next point is judged by distance as to whether it belongs to that cluster. This process is repeated until all points have been divided into clusters; the points in each cluster constitute one candidate point set.
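A minimal sketch of this sequential Euclidean clustering (illustrative only; in practice the points would be processed in the lidar's scan order):

```python
import math

def euclidean_cluster(points, threshold):
    """Sequentially cluster points by Euclidean distance: a point joins
    the previous point's cluster if it is within the threshold,
    otherwise it starts a new cluster."""
    clusters = []
    for p in points:
        if clusters and math.dist(p, clusters[-1][-1]) <= threshold:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters
```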
Step S1043: For each candidate point set, calculate the number of points, the first variance of the point coordinates along the main direction, and the second variance of the point coordinates along the normal of the main direction.
The main direction of any candidate point set is the direction corresponding to its mean angle, where the mean angle is the mean of the rotation angles corresponding to the points in that candidate point set.
Step S1044: Screen out preferred point sets from the candidate point sets according to the number of points, the first variance, and the second variance.
Taking any candidate point set as an example, if the number of points in the set is greater than a preset number threshold, and the ratio of the first variance to the second variance is less than a preset ratio threshold, the set may be determined as a preferred point set. The specific values of the number threshold and the ratio threshold may be set according to actual conditions, and are not specifically limited in the embodiments of the present application. By traversing the candidate point sets in the above manner, the preferred point sets are screened out.
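Steps S1043 and S1044 can be sketched as follows. The projection of the point coordinates onto the main direction and its normal is an assumption about how the two variances are measured, and the function names and thresholds are illustrative:

```python
import math

def variance(vals):
    """Population variance of a list of scalars."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def cluster_stats(points_xy, rotation_angles):
    """Point count plus the variances of the coordinates projected onto
    the main direction (the mean rotation angle) and onto its normal."""
    mean_angle = sum(rotation_angles) / len(rotation_angles)
    ux, uy = math.cos(mean_angle), math.sin(mean_angle)
    first_var = variance([x * ux + y * uy for x, y in points_xy])
    second_var = variance([-x * uy + y * ux for x, y in points_xy])
    return len(points_xy), first_var, second_var

def is_preferred(count, first_var, second_var, count_thresh, ratio_thresh):
    """A candidate set is preferred when it has enough points and the
    first/second variance ratio is below the ratio threshold."""
    return (count > count_thresh and second_var > 0
            and first_var / second_var < ratio_thresh)
```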
Step S1045: Screen out, from the preferred point sets, the dynamic object point cloud that meets a preset condition.
Specifically, a KD-tree may be used to find the points in the current-moment point cloud whose distance to any point in a preferred point set is less than a preset third distance threshold, and the found points are marked as the dynamic object point cloud. The specific value of the third distance threshold may be set according to the actual situation, and is not specifically limited in the embodiments of the present application. As a special case, all points in the preferred point sets may also be marked as the dynamic object point cloud.
Step S105: Remove the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, and use the fourth point cloud for robot mapping.
To sum up, the embodiments of the present application acquire the first point cloud and the first pose collected by the robot at the current moment, as well as the second point cloud and the second pose collected at a historical moment; calculate the pose difference between the first pose and the second pose; project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud; screen out the dynamic object point cloud in the first point cloud according to the third point cloud; and remove the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, which is used for robot mapping. Through the embodiments of the present application, the influence of dynamic object point clouds can be removed from the current point cloud data according to historical point cloud data, thereby greatly improving the mapping accuracy.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Corresponding to the robot mapping method described in the above embodiments, FIG. 3 shows a structural diagram of an embodiment of a robot mapping apparatus provided by an embodiment of the present application.
In this embodiment, a robot mapping apparatus may include:
a data acquisition module 301, configured to acquire a first point cloud and a first pose collected by a robot at a current moment, and a second point cloud and a second pose collected at a historical moment;
a pose difference calculation module 302, configured to calculate a pose difference between the first pose and the second pose;
a point cloud projection module 303, configured to project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud;
a dynamic object point cloud screening module 304, configured to screen out the dynamic object point cloud in the first point cloud according to the third point cloud; and
a mapping module 305, configured to remove the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, and use the fourth point cloud for robot mapping.
Further, the dynamic object point cloud screening module may include:
a candidate point screening sub-module, configured to screen out candidate points from the first point cloud according to the third point cloud;
a clustering sub-module, configured to cluster the screened-out candidate points to obtain candidate point sets;
a calculation sub-module, configured to calculate, for each candidate point set, the number of points, the first variance of the point coordinates along the main direction, and the second variance of the point coordinates along the normal of the main direction;
a preferred point set screening sub-module, configured to screen out preferred point sets from the candidate point sets according to the number of points, the first variance, and the second variance; and
a dynamic object point cloud screening sub-module, configured to screen out, from the preferred point sets, the dynamic object point cloud that meets a preset condition.
Further, the candidate point screening sub-module may include:
a corresponding point determination unit, configured to determine, in the third point cloud, the corresponding point of a target point, where the target point is any point in the first point cloud and the corresponding point is the point with the smallest distance to the target point;
a distance calculation unit, configured to calculate the distance between the target point and the corresponding point, and calculate the distance between the target point and the corresponding point along the normal of the line connecting the target point and the robot; and
a candidate point screening unit, configured to determine the target point as a candidate point if the distance between the target point and the corresponding point is greater than a preset first distance threshold, or the distance between the target point and the corresponding point along the normal of the line connecting the target point and the robot is greater than a preset second distance threshold.
Further, the robot mapping apparatus may also include:
a de-distortion processing module, configured to de-distort the first point cloud to obtain a de-distorted first point cloud; and
a down-sampling processing module, configured to down-sample the de-distorted first point cloud to obtain a down-sampled first point cloud.
Further, the de-distortion processing module may include:
a rotation angle acquisition sub-module, configured to acquire a first rotation angle and a second rotation angle of the lidar of the robot, where the first rotation angle is the rotation angle when the first data point of the first point cloud is collected, and the second rotation angle is the rotation angle when the last data point of the first point cloud is collected;
an angle difference acquisition sub-module, configured to acquire the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar; and
a de-distortion processing sub-module, configured to de-distort the first point cloud according to the first rotation angle, the second rotation angle, and the angle difference to obtain the de-distorted first point cloud.
Further, the de-distortion processing sub-module is specifically configured to de-distort the first point cloud according to the following formulas:
px′ = cos((θ′-θ)*angleH/2π)*px + sin((θ′-θ)*angleH/2π)*py
py′ = -sin((θ′-θ)*angleH/2π)*px + cos((θ′-θ)*angleH/2π)*py
pz′ = pz
where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of that point after de-distortion.
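Per the formulas above, the per-point de-distortion can be sketched as follows (an illustrative sketch; the function name is not from the disclosure):

```python
import math

def undistort_point(px, py, pz, theta, theta_prime, angle_h):
    """De-skew one lidar point by rotating it through the fraction of the
    scan's total rotation (theta' - theta) corresponding to the point's
    horizontal angle difference angle_h."""
    a = (theta_prime - theta) * angle_h / (2 * math.pi)
    c, s = math.cos(a), math.sin(a)
    return (c * px + s * py, -s * px + c * py, pz)
```

When the scan start and end angles coincide (theta′ = theta), the correction reduces to the identity, as expected for a stationary sensor.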
Further, the point cloud projection module is specifically configured to project the second point cloud onto the coordinates of the first point cloud according to the following formulas:
px_new = cos(dθ)*px_old + sin(dθ)*py_old - dx
py_new = -sin(dθ)*px_old + cos(dθ)*py_old - dy
pz_new = pz_old
where (dx, dy, dθ) is the pose difference, (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the coordinates of that point after projection.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the apparatus, modules, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
FIG. 4 shows a schematic block diagram of a robot provided by an embodiment of the present application. For convenience of description, only the parts related to the embodiment of the present application are shown.
As shown in FIG. 4, the robot 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40. When executing the computer program 42, the processor 40 implements the steps in each of the above robot mapping method embodiments, for example, steps S101 to S105 shown in FIG. 1. Alternatively, when executing the computer program 42, the processor 40 implements the functions of the modules/units in each of the above apparatus embodiments, for example, the functions of modules 301 to 305 shown in FIG. 3.
Exemplarily, the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 42 in the robot 4.
Those skilled in the art can understand that FIG. 4 is only an example of the robot 4 and does not constitute a limitation on the robot 4; the robot 4 may include more or fewer components than shown, or combine certain components, or use different components. For example, the robot 4 may also include input/output devices, network access devices, buses, and the like.
The processor 40 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the robot 4, such as a hard disk or memory of the robot 4. The memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the robot 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the robot 4. The memory 41 is used to store the computer program and other programs and data required by the robot 4, and may also be used to temporarily store data that has been output or will be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is used only as an example. In practical applications, the above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art can realize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the apparatus/robot embodiments described above are only illustrative; the division into modules or units is only a logical functional division, and in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable storage media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (10)

  1. A robot mapping method, comprising:
    acquiring a first point cloud and a first pose collected by a robot at a current moment, and a second point cloud and a second pose collected at a historical moment;
    calculating a pose difference between the first pose and the second pose;
    projecting the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud;
    screening out a dynamic object point cloud in the first point cloud according to the third point cloud; and
    removing the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, and using the fourth point cloud for robot mapping.
  2. The robot mapping method according to claim 1, wherein the screening out the dynamic object point cloud in the first point cloud according to the third point cloud comprises:
    screening out candidate points from the first point cloud according to the third point cloud;
    clustering the screened-out candidate points to obtain candidate point sets;
    calculating, for each candidate point set, the number of points, a first variance of the point coordinates along a main direction, and a second variance of the point coordinates along the normal of the main direction;
    screening out preferred point sets from the candidate point sets according to the number of points, the first variance, and the second variance; and
    screening out, from the preferred point sets, the dynamic object point cloud that meets a preset condition.
  3. The robot mapping method according to claim 2, wherein the screening out candidate points from the first point cloud according to the third point cloud comprises:
    determining, in the third point cloud, a corresponding point of a target point, the target point being any point in the first point cloud and the corresponding point being the point with the smallest distance to the target point;
    calculating the distance between the target point and the corresponding point, and calculating the distance between the target point and the corresponding point along the normal of the line connecting the target point and the robot; and
    determining the target point as a candidate point if the distance between the target point and the corresponding point is greater than a preset first distance threshold, or the distance between the target point and the corresponding point along the normal of the line connecting the target point and the robot is greater than a preset second distance threshold.
  4. The robot mapping method according to claim 1, further comprising, after acquiring the first point cloud and the first pose collected by the robot at the current moment:
    de-distorting the first point cloud to obtain a de-distorted first point cloud; and
    down-sampling the de-distorted first point cloud to obtain a down-sampled first point cloud.
  5. The robot mapping method according to claim 4, wherein the de-distorting the first point cloud comprises:
    acquiring a first rotation angle and a second rotation angle of a lidar of the robot, the first rotation angle being the rotation angle when the first data point of the first point cloud is collected, and the second rotation angle being the rotation angle when the last data point of the first point cloud is collected;
    acquiring the angle difference between each point of the first point cloud and the horizontal positive direction of the lidar; and
    de-distorting the first point cloud according to the first rotation angle, the second rotation angle, and the angle difference to obtain the de-distorted first point cloud.
  6. The robot mapping method according to claim 5, wherein the de-distorting the first point cloud according to the first rotation angle, the second rotation angle, and the angle difference comprises:
    de-distorting the first point cloud according to the following formulas:
    px′ = cos((θ′-θ)*angleH/2π)*px + sin((θ′-θ)*angleH/2π)*py
    py′ = -sin((θ′-θ)*angleH/2π)*px + cos((θ′-θ)*angleH/2π)*py
    pz′ = pz
    where θ is the first rotation angle, θ′ is the second rotation angle, angleH is the angle difference, (px, py, pz) are the coordinates of any point in the first point cloud, and (px′, py′, pz′) are the coordinates of that point after de-distortion.
  7. 根据权利要求1至6中任一项所述的机器人建图方法,其特征在于,所述根据所述位姿差将所述第二点云投影至所述第一点云的坐标上,包括:The robot mapping method according to any one of claims 1 to 6, wherein the projecting the second point cloud onto the coordinates of the first point cloud according to the pose difference includes the following steps: :
    根据下式将所述第二点云投影至所述第一点云的坐标上:The second point cloud is projected onto the coordinates of the first point cloud according to the following formula:
    px_new = cos(dθ)*px_old + sin(dθ)*py_old - dx
    py_new = -sin(dθ)*px_old + cos(dθ)*py_old - dy
    pz_new = pz_old
    where (dx, dy, dθ) is the pose difference, (px_old, py_old, pz_old) are the coordinates of any point in the second point cloud, and (px_new, py_new, pz_new) are the coordinates of that point after projection.
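A minimal Python sketch of the projection formulas of claim 7 (function name assumed; the formulas themselves are taken verbatim from the claim):

```python
import math

def project_point(px, py, pz, dx, dy, dtheta):
    # Rotate the historical point by the heading difference dtheta,
    # then shift by the translational part (dx, dy) of the pose
    # difference; the height coordinate is carried over unchanged.
    px_new = math.cos(dtheta) * px + math.sin(dtheta) * py - dx
    py_new = -math.sin(dtheta) * px + math.cos(dtheta) * py - dy
    return px_new, py_new, pz
```

With a zero heading difference the projection reduces to a pure translation by (-dx, -dy), as the formulas imply.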
  8. A robot mapping device, comprising:
    a data acquisition module, configured to acquire a first point cloud and a first pose collected by a robot at a current moment, and a second point cloud and a second pose collected at a historical moment;
    a pose difference calculation module, configured to calculate a pose difference between the first pose and the second pose;
    a point cloud projection module, configured to project the second point cloud onto the coordinates of the first point cloud according to the pose difference to obtain a third point cloud;
    a dynamic object point cloud screening module, configured to screen out a dynamic object point cloud in the first point cloud according to the third point cloud; and
    a mapping module, configured to remove the dynamic object point cloud from the first point cloud to obtain a fourth point cloud, and to use the fourth point cloud for robot mapping.
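The five modules of claim 8 can be chained as in the following sketch (illustrative only: the componentwise pose subtraction and the nearest-neighbour criterion for flagging dynamic points are assumptions; the claim itself only requires screening the first point cloud against the third):

```python
import math

def build_static_cloud(cloud1, pose1, cloud2, pose2, dist_thresh=0.2):
    # Pose difference calculation module (componentwise, a simplification):
    dx = pose1[0] - pose2[0]
    dy = pose1[1] - pose2[1]
    dtheta = pose1[2] - pose2[2]
    # Point cloud projection module: second cloud -> third cloud,
    # using the projection formulas of claim 7.
    cloud3 = [(math.cos(dtheta) * px + math.sin(dtheta) * py - dx,
               -math.sin(dtheta) * px + math.cos(dtheta) * py - dy,
               pz)
              for (px, py, pz) in cloud2]
    # Dynamic object point cloud screening module (assumed criterion):
    # a current point with no projected historical counterpart within
    # dist_thresh is treated as part of a moving object and dropped.
    fourth = [p for p in cloud1
              if any(math.hypot(p[0] - q[0], p[1] - q[1]) < dist_thresh
                     for q in cloud3)]
    return fourth  # fourth point cloud, handed to the mapping module
```

Points of cloud1 consistent with the re-projected history survive into the fourth cloud; everything else is filtered out before mapping.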
  9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the robot mapping method according to any one of claims 1 to 7.
  10. A robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the robot mapping method according to any one of claims 1 to 7.
PCT/CN2020/140430 2020-11-24 2020-12-28 Robot mapping method and device, computer readable storage medium, and robot WO2022110473A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011330699.X 2020-11-24
CN202011330699.XA CN112484738B (en) 2020-11-24 2020-11-24 Robot mapping method and device, computer readable storage medium and robot

Publications (1)

Publication Number Publication Date
WO2022110473A1 true WO2022110473A1 (en) 2022-06-02

Family

ID=74933820

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140430 WO2022110473A1 (en) 2020-11-24 2020-12-28 Robot mapping method and device, computer readable storage medium, and robot

Country Status (2)

Country Link
CN (1) CN112484738B (en)
WO (1) WO2022110473A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704035A (en) * 2023-06-28 2023-09-05 北京迁移科技有限公司 Workpiece pose recognition method, electronic equipment, storage medium and grabbing system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN114266871B (en) * 2022-03-01 2022-07-15 深圳市普渡科技有限公司 Robot, map quality evaluation method, and storage medium
CN115060276B (en) * 2022-06-10 2023-05-12 江苏集萃清联智控科技有限公司 Multi-environment adaptive automatic driving vehicle positioning equipment, system and method

Citations (8)

Publication number Priority date Publication date Assignee Title
CN107610177A (en) * 2017-09-29 2018-01-19 联想(北京)有限公司 A kind of method and apparatus that characteristic point is determined in synchronous superposition
CN108460791A (en) * 2017-12-29 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for handling point cloud data
CN108664841A (en) * 2017-03-27 2018-10-16 郑州宇通客车股份有限公司 A kind of sound state object recognition methods and device based on laser point cloud
CN109285220A (en) * 2018-08-30 2019-01-29 百度在线网络技术(北京)有限公司 A kind of generation method, device, equipment and the storage medium of three-dimensional scenic map
CN110009718A (en) * 2019-03-07 2019-07-12 深兰科技(上海)有限公司 A kind of three-dimensional high-precision ground drawing generating method and device
CN110019570A (en) * 2017-07-21 2019-07-16 百度在线网络技术(北京)有限公司 For constructing the method, apparatus and terminal device of map
CN110197615A (en) * 2018-02-26 2019-09-03 北京京东尚科信息技术有限公司 For generating the method and device of map
US20190286915A1 (en) * 2018-03-13 2019-09-19 Honda Motor Co., Ltd. Robust simultaneous localization and mapping via removal of dynamic traffic participants

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN111417871A (en) * 2017-11-17 2020-07-14 迪普迈普有限公司 Iterative closest point processing for integrated motion estimation using high definition maps based on lidar
US10921455B2 (en) * 2018-04-05 2021-02-16 Apex.AI, Inc. Efficient and scalable three-dimensional point cloud segmentation for navigation in autonomous vehicles
CN111443359B (en) * 2020-03-26 2022-06-07 达闼机器人股份有限公司 Positioning method, device and equipment
CN111429528A (en) * 2020-04-07 2020-07-17 高深智图(广州)科技有限公司 Large-scale distributed high-precision map data processing system
CN111665522B (en) * 2020-05-19 2022-12-16 上海有个机器人有限公司 Method, medium, terminal and device for filtering static object in laser scanning pattern
CN111695497B (en) * 2020-06-10 2024-04-09 上海有个机器人有限公司 Pedestrian recognition method, medium, terminal and device based on motion information

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN108664841A (en) * 2017-03-27 2018-10-16 郑州宇通客车股份有限公司 A kind of sound state object recognition methods and device based on laser point cloud
CN110019570A (en) * 2017-07-21 2019-07-16 百度在线网络技术(北京)有限公司 For constructing the method, apparatus and terminal device of map
CN107610177A (en) * 2017-09-29 2018-01-19 联想(北京)有限公司 A kind of method and apparatus that characteristic point is determined in synchronous superposition
CN108460791A (en) * 2017-12-29 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for handling point cloud data
CN110197615A (en) * 2018-02-26 2019-09-03 北京京东尚科信息技术有限公司 For generating the method and device of map
US20190286915A1 (en) * 2018-03-13 2019-09-19 Honda Motor Co., Ltd. Robust simultaneous localization and mapping via removal of dynamic traffic participants
CN109285220A (en) * 2018-08-30 2019-01-29 百度在线网络技术(北京)有限公司 A kind of generation method, device, equipment and the storage medium of three-dimensional scenic map
CN110009718A (en) * 2019-03-07 2019-07-12 深兰科技(上海)有限公司 A kind of three-dimensional high-precision ground drawing generating method and device

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116704035A (en) * 2023-06-28 2023-09-05 北京迁移科技有限公司 Workpiece pose recognition method, electronic equipment, storage medium and grabbing system
CN116704035B (en) * 2023-06-28 2023-11-07 北京迁移科技有限公司 Workpiece pose recognition method, electronic equipment, storage medium and grabbing system

Also Published As

Publication number Publication date
CN112484738A (en) 2021-03-12
CN112484738B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
WO2022110473A1 (en) Robot mapping method and device, computer readable storage medium, and robot
TWI491848B (en) System and method for extracting point cloud
WO2021017471A1 (en) Point cloud filtering method based on image processing, apparatus, and storage medium
CN109658454B (en) Pose information determination method, related device and storage medium
CN111695429B (en) Video image target association method and device and terminal equipment
JP2017534046A (en) Building height calculation method, apparatus and storage medium
CN112348765A (en) Data enhancement method and device, computer readable storage medium and terminal equipment
WO2021196698A1 (en) Method, apparatus and device for determining reserve of object to be detected, and medium
CN109685764B (en) Product positioning method and device and terminal equipment
CN112198878B (en) Instant map construction method and device, robot and storage medium
WO2022222291A1 (en) Optical axis calibration method and apparatus of optical axis detection system, terminal, system, and medium
CN110390668A (en) Bolt looseness detection method, terminal device and storage medium
CN115512316A (en) Static obstacle grounding contour line multi-frame fusion method, device and medium
CN110580325B (en) Ubiquitous positioning signal multi-source fusion method and system
JP7351892B2 (en) Obstacle detection method, electronic equipment, roadside equipment, and cloud control platform
CN110793437A (en) Positioning method and device of manual operator, storage medium and electronic equipment
WO2021129142A1 (en) Building based outdoor positioning method, device and mobile equipment
WO2020143499A1 (en) Corner detection method based on dynamic vision sensor
WO2022077660A1 (en) Vehicle positioning method and apparatus
CN112800806B (en) Object pose detection tracking method and device, electronic equipment and storage medium
CN112629828A (en) Optical information detection method, device and equipment
CN116579960B (en) Geospatial data fusion method
CN111950659B (en) Double-layer license plate image processing method and device, electronic equipment and storage medium
WO2022204953A1 (en) Method and apparatus for determining pitch angle, and terminal device
CN111814869B (en) Method and device for synchronous positioning and mapping, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20963361

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20963361

Country of ref document: EP

Kind code of ref document: A1