WO2021189468A1 - Lidar attitude correction method, device and system - Google Patents

Lidar attitude correction method, device and system

Info

Publication number
WO2021189468A1
Authority
WO
WIPO (PCT)
Prior art keywords
ground
lidar
attitude
point cloud
coordinate system
Prior art date
Application number
PCT/CN2020/081826
Other languages
English (en)
French (fr)
Inventor
杨林
Original Assignee
深圳市速腾聚创科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市速腾聚创科技有限公司
Priority to PCT/CN2020/081826 priority Critical patent/WO2021189468A1/zh
Priority to CN202310645293.8A priority patent/CN116930933A/zh
Priority to CN202080005491.2A priority patent/CN113748357B/zh
Publication of WO2021189468A1 publication Critical patent/WO2021189468A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • This application relates to the field of automatic driving, and in particular to a method, device and system for attitude correction of lidar.
  • the camera pose refers to the position of the camera in the three-dimensional space and the orientation of the camera.
  • the camera pose combined with the camera's viewing angle and visible distance, determines the exact range that the camera can perceive.
  • the accuracy of the camera attitude directly affects the performance of related functions and the safety of pedestrians in specific scenarios.
  • an image-based camera can obtain accurate camera pose parameters through Zhang's calibration method, based on the texture information predefined on a calibration board and the camera's imaging principle.
  • in the lidar attitude correction process, however, lidar measures spatial position rather than texture information, so attitude correction is more difficult; how to correct the lidar attitude is a problem that urgently needs to be solved.
  • the technical problem to be solved by the embodiments of the present application is to provide a method, device, and system for lidar attitude correction, which can estimate the current attitude of the lidar based on a point cloud and correct the current attitude to the target attitude to improve scanning efficiency.
  • this application provides a lidar attitude correction method, including:
  • the number of point clouds is one or more, when the number of point clouds is multiple, each point cloud corresponds to a lidar scan frame;
  • the bearing device is controlled to adjust the lidar from the current attitude to the target attitude.
  • the detecting ground points in the point cloud to obtain a ground point set includes:
  • when the fitted line satisfies the slope threshold condition, the representative points selected in each container that satisfy the height threshold condition are taken as ground points.
  • the detection of ground points in the point cloud to obtain a ground point set includes:
  • the point cloud is a first point cloud; wherein, detecting ground points in the point cloud to obtain a ground point set includes:
  • the points in the first point cloud that are parallel to the first direction and the second direction are detected as ground points.
  • the establishing a ground coordinate system according to the ground point set includes:
  • the ground coordinate system is established based on the normal vector; wherein the x-axis and y-axis of the ground coordinate system constitute the ground.
  • the present application provides a laser radar attitude correction device, including:
  • the acquisition unit is used to acquire the point cloud generated by lidar scanning
  • a detection unit for detecting ground points in the point cloud to obtain a ground point set
  • a calculation unit configured to establish a ground coordinate system according to the ground point set
  • the adjustment unit is configured to control the carrier device to adjust the lidar from the current posture to the target posture according to the posture correction parameters.
  • an attitude correction device includes: a receiver, a transmitter, a memory, and a processor; the memory stores a set of program codes, and the processor is used to call the program codes stored in the memory to execute the lidar attitude correction method described in the above aspects.
  • the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
  • Another aspect of the present application provides a computer-readable storage medium having instructions stored in the computer-readable storage medium, which when run on a computer, cause the computer to execute the methods described in the above aspects.
  • Another aspect of the present application provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the methods described in the above aspects.
  • the ground points in the point cloud are detected, a ground coordinate system is established from them, and the attitude correction parameters between the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system are calculated;
  • the attitude correction parameters control the rotation and/or translation of the lidar's carrying device, so that the lidar is adjusted from the current attitude to the target attitude, automatically correcting the attitude of the lidar and solving the inefficiency and inaccuracy of manual attitude correction.
  • FIG. 1 is a schematic diagram of the architecture of an attitude correction system provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a lidar attitude correction method provided by an embodiment of the present application
  • FIG. 3 to FIG. 7 are schematic diagrams of the principle of attitude correction provided by this embodiment.
  • FIG. 8 is a schematic structural diagram of an attitude correction device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another structure of an attitude correction device provided by an embodiment of the present application.
  • the attitude correction system includes: a lidar, an attitude correction device, and a bearing device.
  • the carrying device is used to carry the lidar, and the carrying device includes, but is not limited to, drones, vehicles, or robotic arms.
  • the carrying device can adjust the attitude of the lidar through rotation and/or translation.
  • the carrying device is an unmanned aerial vehicle.
  • the unmanned aerial vehicle can drive the lidar to translate along the x-axis, y-axis and z-axis, and to rotate around the x-axis, y-axis and z-axis;
  • that is, the unmanned aerial vehicle can drive the lidar through attitude adjustments in six degrees of freedom.
  • the attitude correction device may also be provided on the carrying device.
  • the attitude correction device is used to calculate attitude correction parameters and instruct the carrying device to adjust the attitude according to the attitude correction parameters, so that the current attitude of the lidar is adjusted to the target attitude.
  • Lidar is used to emit detection laser signals. A detection laser signal is reflected after encountering an obstacle, generating an echo laser signal; obstacles include the ground and non-ground obstacles. The lidar generates the point cloud from parameters such as the strength of the echo signal and the distance from the obstacle to the lidar;
  • the point cloud includes ground points and non-ground points.
  • FIG. 2 is an attitude correction method provided by an embodiment of the present application. The method includes but is not limited to the following steps:
  • the lidar can be scanned periodically, and a point cloud (also called a data frame) is generated after each scan.
  • the number of point clouds to be processed obtained by the attitude correction device can be one or more; that is, the attitude correction device can perform lidar attitude correction based on one data frame or multiple data frames.
  • the point cloud in this embodiment may be a 3D point cloud, that is, the point cloud includes three-dimensional space coordinates and parameter values of echo intensity.
  • the points of the point cloud can be divided into two types: ground points and non-ground points.
  • Ground points are generated by lidar scanning the ground
  • non-ground points are generated by lidar scanning non-ground obstacles.
  • a geometric analysis method or a machine learning algorithm can be used to detect the ground points in the point cloud, and after traversing all the ground points in the point cloud, the ground point set is obtained.
  • the process of detecting ground points in the point cloud includes:
  • when the fitted line satisfies the slope threshold condition, the points in the containers participating in the line fitting that satisfy the height threshold condition are marked as ground points.
  • the point cloud is fitted into a circle according to its distribution range; the radius of the circle is r, the preset angle interval is Δθ radians, the number of sectors the point cloud is divided into is 2π/Δθ, and the sectors are: P1, P2, ..., P(2π/Δθ).
  • the sector is divided into multiple containers according to the preset distance interval.
  • the sector P1 is divided into C containers: a1, a2, ..., aC, where C is an integer greater than 1.
  • the representative points are determined in each container.
  • the representative point is the lowest point in the container, that is, the point with the smallest height.
  • the representative points of the containers in the same sector are fitted to a straight line, and the slope of the fitted line is then calculated.
  • if the slope of the fitted line is less than the preset slope threshold, the points in the containers participating in the fitting that are below the preset height threshold are marked as ground points; if the slope is greater than or equal to the slope threshold, the fitting is stopped.
  • the ground points in each sector are detected.
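The sector/container procedure above can be sketched as follows. This is a hedged illustration rather than the reference implementation: the function name, the parameter defaults (angle interval, container size, slope and height thresholds), and the use of an absolute z bound for the height condition are my own choices for the example.

```python
import numpy as np

def detect_ground_points(points, dtheta=np.pi / 32, bin_size=2.0,
                         slope_thresh=0.2, height_thresh=0.3):
    """Sector-based ground detection sketch.

    points: (N, 3) array of x, y, z coordinates in the lidar frame.
    Returns a boolean mask marking the detected ground points.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    angles = np.arctan2(y, x) + np.pi            # angles in (0, 2*pi]
    ranges = np.hypot(x, y)
    sector_ids = (angles // dtheta).astype(int)  # about 2*pi/dtheta sectors
    bin_ids = (ranges // bin_size).astype(int)   # containers along range

    ground = np.zeros(len(points), dtype=bool)
    for s in np.unique(sector_ids):
        in_sector = sector_ids == s
        # Representative point of each container: the lowest point.
        reps_r, reps_z, rep_bins = [], [], []
        for b in np.unique(bin_ids[in_sector]):
            in_bin = in_sector & (bin_ids == b)
            lowest = np.argmin(np.where(in_bin, z, np.inf))
            reps_r.append(ranges[lowest])
            reps_z.append(z[lowest])
            rep_bins.append(b)
        if len(reps_r) < 2:
            continue
        # Fit a line z = k*r + c through the representative points.
        k, c = np.polyfit(reps_r, reps_z, 1)
        if abs(k) < slope_thresh:
            # Low points of the participating containers become ground.
            for b in rep_bins:
                in_bin = in_sector & (bin_ids == b)
                ground |= in_bin & (z < height_thresh)
    return ground
```

On a synthetic scene (a near-flat plane plus a raised obstacle), the mask keeps the plane points and rejects the obstacle, since obstacle points exceed the height threshold even when their sector's fitted slope is small.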
  • the method for detecting ground points in the point cloud is as follows:
  • Embodiment A: determine a pre-trained deep learning network;
  • the ground point in the point cloud is detected according to the deep learning network.
  • a training sample is generated.
  • the training sample is a labeled point.
  • the label indicates that the point is a ground point or a non-ground point.
  • the deep learning network can identify whether the sample data is a ground point or a non-ground point according to the label.
  • when the training phase is complete, the trained deep learning network is used in the test phase.
  • in the test phase, the points of the point cloud generated in S201 are input into the trained deep learning network, which detects whether each point is a ground point or a non-ground point.
  • the deep learning network may be a pointnet++ network.
  • Embodiment B: the process of detecting ground points in the point cloud includes:
  • the lidar is controlled by the carrying device to scan in a first direction parallel to the ground to generate a second point cloud. It should be noted that if the carrying device is a vehicle, the forward direction of the vehicle is always parallel to the ground, and the vehicle only needs to be driven along different forward directions; if the carrying device is a drone, the drone needs to be controlled to fly in a direction parallel to the ground.
  • the lidar is controlled by the carrying device to scan in a second direction parallel to the ground to generate a third point cloud; wherein the first direction and the second direction are perpendicular to each other.
  • optionally, the first direction and the second direction need not be perpendicular to each other: it is sometimes difficult to move the lidar's carrying device exactly along the ground, so after the normal vector of the ground coordinate system is found, the direction of motion at some moment is taken as the x direction, and the cross product of the normal vector and the x direction gives the third axis of the ground coordinate system.
  • the points in the first point cloud that are parallel to the first direction and the second direction are detected as ground points.
  • embodiment A and embodiment B are applicable to single-frame ground point detection.
  • when the included angle between the horizontal plane of the lidar and the ground is greater than a preset angle, for example 20°, this embodiment can detect ground points based on multiple frames to improve the robustness of ground point detection: obtain the point cloud p1 generated by a lidar scan at time t1 and compute the normal vector of p1 based on a K-neighborhood or PCA (Principal Component Analysis); at time t2, the attitude correction device controls the lidar through the carrying device to move a distance (for example: 5 m) in a first direction parallel to the ground, the lidar scans to obtain the point cloud p2, and the attitude correction device estimates the first movement trajectory of the lidar with the ICP (Iterative Closest Point) algorithm or the NDT (Normal Distributions Transform) algorithm to obtain the x-axis of the ground coordinate system; the attitude correction device then controls the lidar through the carrying device to move a distance (for example: 5 m) in a second direction parallel to the ground and perpendicular to the first, the lidar scans to obtain the point cloud p3, and the second movement trajectory estimated with ICP or NDT gives the y-axis of the ground coordinate system. The normal vector of the x-axis and y-axis is the normal vector of the ground points, and the ground points in point cloud p1 are thereby detected.
  • the specific value of the preset angle may be related to the ground point detection method. For example, when the preset angle is 5°, the ground point detection method of Embodiment A is no longer applicable; when the preset angle is 8°, The ground point detection algorithm of embodiment B is no longer applicable.
  • the specific value of the preset angle can also be related to the performance of the lidar. For example, the number of scan lines of the lidar and the preset angle are positively correlated. The more the number of scan lines, the greater the value of the preset angle, and vice versa. The smaller the value.
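The multi-frame procedure above ultimately builds an orthonormal ground frame from two estimated motion directions. A minimal sketch, with the function name my own; in practice the two displacement vectors would come from ICP/NDT trajectory estimation as described, and they need not be perpendicular:

```python
import numpy as np

def ground_axes_from_motion(dir1, dir2):
    """Orthonormal ground frame from two ground-parallel motion directions.

    dir1, dir2: displacement vectors of the lidar, both parallel to the
    ground but not necessarily perpendicular to each other.
    Returns unit x-, y-, and z-axes of the ground coordinate system.
    """
    x = np.asarray(dir1, dtype=float)
    x /= np.linalg.norm(x)
    d2 = np.asarray(dir2, dtype=float)
    # Ground normal (z-axis): perpendicular to both motion directions.
    z = np.cross(x, d2)
    z /= np.linalg.norm(z)
    # Third axis: cross product of the normal and the x direction.
    y = np.cross(z, x)
    return x, y, z
```

With dir1 = (1, 0, 0) and a non-perpendicular dir2 = (1, 1, 0), this returns the orthonormal frame x = (1, 0, 0), y = (0, 1, 0), z = (0, 0, 1), matching the cross-product remark above.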
  • the ground coordinate system indicates the posture of the ground in the current radar coordinate system, and the ground coordinate system is established using this posture.
  • the z-axis is perpendicular to the ground, and the plane formed by the x-axis and the y-axis is the ground.
  • the ground coordinate system can be established through a PCA algorithm or a deep learning network.
  • the established ground coordinate system is shown in Figure 6.
  • as shown in Figure 5, the ground point set is projected onto the x-y plane, and two perpendicular direction vectors, a first direction vector and a second direction vector, are determined in the set according to PCA.
  • the first direction vector and the second direction vector correspond to the longest and widest directions of the set, respectively; the normal vector perpendicular to this x-axis and y-axis is then determined, and the normal vector is the z-axis.
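The PCA construction above (longest spread as the x-axis, widest spread as the y-axis, smallest spread as the normal/z-axis) can be sketched as follows; the function name is illustrative, and the sketch assumes a roughly planar ground point set:

```python
import numpy as np

def ground_frame_from_points(ground_points):
    """Estimate the ground coordinate system from a ground point set by PCA.

    ground_points: (N, 3) array. The two largest principal components
    span the ground plane; the smallest one is the ground normal.
    """
    pts = np.asarray(ground_points, dtype=float)
    origin = pts.mean(axis=0)
    centered = pts - origin
    # Eigendecomposition of the covariance matrix (PCA).
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    x_axis = eigvecs[:, 2]                  # direction of longest spread
    y_axis = eigvecs[:, 1]                  # direction of widest spread
    normal = eigvecs[:, 0]                  # smallest spread: the z-axis
    return origin, x_axis, y_axis, normal
```

For points lying near the plane z = 0, the recovered normal is (0, 0, ±1) up to sign, and the x-axis aligns with the direction in which the set extends furthest.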
  • a ground coordinate system is established based on a deep learning network.
  • a training sample is generated.
  • the training sample is a labeled point, and the label represents the normal vector of the point.
  • the deep learning network can identify the normal vector of the sample data according to the label; when the training phase is complete, the trained deep learning network is used in the test phase.
  • in the test phase, the points in the ground point set generated in S202 are input into the trained deep learning network to recognize the normal vectors.
  • the deep learning network may be a pointnet++ network.
  • S204 Calculate an attitude correction parameter according to the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system.
  • the lidar is located at the origin of the radar coordinate system, the plane formed by the x-axis and y-axis of the radar coordinate system is the horizontal plane of the lidar, and the z-axis is perpendicular to the horizontal plane.
  • the radar coordinate system is shown in Figure 7.
  • the attitude correction device is pre-configured with the target attitude based on the ground coordinate system.
  • the attitude correction device can calculate the attitude correction parameters between the current attitude in the radar coordinate system and the target attitude in the ground coordinate system according to the spatial geometric relationship.
  • the attitude correction parameters include rotation (rx, ry, rz) and translation (dx, dy, dz); the rotation amounts represent the angles of rotation around the x-axis, y-axis and z-axis, and the translation amounts represent the displacements along them.
  • for example, take the radar coordinate system as the original coordinate system, in which there are two objects O1 and O2 at two positions and two orientations; the pose of O1 is T1 (expressing O1 in the original coordinate system) and the pose of O2 is T2.
  • let M be the pose transformation [dx, dy, dz, rx, ry, rz] (in matrix form); T1 can coincide with T2 through M:
  • T2 = M * T1
  • mathematically, M = Inv(M_r1) * M_r2 * (M_t2 - M_t1), where Inv(·) represents the operation of matrix inversion.
  • T1 can be regarded as the current radar attitude (it coincides with the radar coordinate system by default, that is, T1 is the identity matrix);
  • T2 can be regarded as the ground attitude or the given target attitude;
  • M is the attitude correction parameter.
  • the attitude correction parameters can be represented by a conversion matrix that encodes the amounts of rotation and translation needed to make the radar coordinate system coincide with the ground coordinate system. Suppose the radar coordinate system has origin (0, 0, 0), z-axis (0, 0, 1), x-axis (1, 0, 0), and y-axis (0, 1, 0), while the ground coordinate system has origin (1, 0, 0), z-axis/normal vector (0, 0, 1), x-axis (0, 1, 0), and y-axis (-1, 0, 0).
  • the amount of rotation and the amount of translation are: translate the origin of the radar coordinate system by 1 unit in the x-axis direction, and then rotate the x-axis and y-axis counterclockwise by 90 degrees to make the radar coordinate system coincide with the ground coordinate system.
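The worked example above can be checked numerically. A sketch assuming homogeneous 4x4 transforms; the function name is illustrative, and the radar frame is taken as the identity, as in the example:

```python
import numpy as np

def correction_transform(ground_origin, ground_x, ground_y, ground_z):
    """4x4 homogeneous transform M with T2 = M @ T1 (T1 = identity).

    The columns of the rotation block are the ground-frame axes
    expressed in the radar frame; the last column is the ground origin.
    """
    M = np.eye(4)
    M[:3, 0] = ground_x
    M[:3, 1] = ground_y
    M[:3, 2] = ground_z
    M[:3, 3] = ground_origin
    return M

# Numbers from the example: ground origin (1, 0, 0), x-axis (0, 1, 0),
# y-axis (-1, 0, 0), z-axis/normal (0, 0, 1).
M = correction_transform([1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, 0, 1])
# Its rotation block is a 90-degree counterclockwise rotation about z,
# and it shifts the origin by one unit along x, matching the text.
```

Applying M to the radar origin yields the ground origin, and applying it to the radar x-axis yields the ground x-axis, which is exactly the translate-then-rotate correction described above.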
  • S205 Control the bearing device according to the attitude correction parameter to adjust the lidar from the current attitude to the target attitude.
  • the carrying device may be a six-degree-of-freedom movable device, and the attitude correction device drives the carrying device to adjust the attitude according to the attitude correction parameters calculated in S204, so that the current attitude of the lidar is adjusted to the target attitude. As shown in Fig. 7, the attitude correction device adjusts the attitude of the lidar according to the attitude correction parameters so that the horizontal plane of the lidar is parallel to the ground.
  • the ground points in the point cloud are detected, the ground coordinate system is established according to the ground points, and the attitude correction parameters between the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system are calculated, based on the attitude
  • the correction parameters control the rotation and/or translation of the lidar carrying device, so that the lidar is adjusted from the current attitude to the target attitude, which realizes the automatic correction of the lidar attitude, and solves the problems of low efficiency and inaccuracy caused by manual attitude correction.
  • the above describes the attitude correction method of a lidar according to an embodiment of the present application.
  • the following provides an attitude correction device (hereinafter referred to as device 3) according to an embodiment of the present application.
  • the device 3 shown in FIG. 8 can implement the lidar attitude correction method of the embodiment shown in FIG. 2.
  • the device 3 includes an acquisition unit 301, a detection unit 302, a calculation unit 303 and an adjustment unit 304.
  • the acquisition unit 301 is used to acquire the point cloud generated by lidar scanning;
  • the detecting unit 302 is configured to detect ground points in the point cloud to obtain a ground point set
  • the calculation unit 303 is configured to establish a ground coordinate system according to the ground point set, and to calculate the attitude correction parameters according to the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system;
  • the adjustment unit 304 is configured to control the carrier device to adjust the lidar from the current attitude to the target attitude according to the attitude correction parameter.
  • the detecting ground points in the point cloud to obtain a ground point set includes:
  • when the fitted line satisfies the slope threshold condition, the points in the containers participating in the line fitting that satisfy the height threshold condition are marked as ground points.
  • the detecting ground points in the point cloud to obtain a ground point set includes:
  • the included angle between the horizontal plane of the lidar and the ground is less than or equal to a preset angle
  • the point cloud is a first point cloud
  • detecting the ground points in the point cloud to obtain a ground point set includes:
  • the points in the first point cloud that are parallel to the first direction and the second direction are detected as ground points.
  • the establishing a ground coordinate system according to the ground point set includes:
  • the ground coordinate system is established based on the normal vector; wherein the x-axis and y-axis of the ground coordinate system constitute the ground.
  • the device 3 may be implemented by a field-programmable gate array (FPGA), an application-specific integrated chip, a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a microcontroller unit (MCU), or a programmable logic device (PLD) that implements the related functions.
  • the above describes the attitude correction method of a lidar according to an embodiment of the present application.
  • the following provides an attitude correction device based on an embodiment of the present application (hereinafter referred to as device 4).
  • FIG. 9 is a schematic diagram of the structure of a device provided by an embodiment of the application, hereinafter referred to as device 4, which can be integrated into the lidar or carrying device of the above-mentioned embodiment.
  • the device includes: a memory 402, a processor 401, a transmitter 404, and a receiver 403.
  • the memory 402 may be an independent physical unit, and may be connected to the processor 401, the transmitter 404, and the receiver 403 through a bus.
  • the memory 402, the processor 401, the transmitter 404, and the receiver 403 can also be integrated together and implemented by hardware.
  • the transmitter 404 is used for transmitting signals, and the receiver 403 is used for receiving signals.
  • the memory 402 is used to store a program that implements the above method embodiment or each module of the device embodiment, and the processor 401 calls the program to execute the operation of the above method embodiment.
  • the device may also only include a processor.
  • the memory for storing the program is located outside the device, and the processor is connected to the memory through a circuit/wire for reading and executing the program stored in the memory.
  • the processor may be a central processing unit (CPU), a network processor (NP), or a combination of CPU and NP.
  • the processor may further include a hardware chip.
  • the above-mentioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof.
  • the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.
  • the memory may include volatile memory, such as random-access memory (RAM); the memory may also include non-volatile memory, such as flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory may also include a combination of the foregoing types of memory.
  • in a hardware implementation, the sending unit or transmitter performs the sending steps of the foregoing method embodiments, the receiving unit or receiver performs the receiving steps, and the other steps are performed by other units or the processor;
  • the sending unit and the receiving unit can form a transceiver unit, and the receiver and transmitter can form a transceiver.
  • An embodiment of the present application also provides a computer storage medium storing a computer program, and the computer program is used to execute the lidar attitude correction method provided in the foregoing embodiment.
  • the embodiments of the present application also provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the lidar attitude correction method provided in the above-mentioned embodiments.
  • this application can be provided as methods, systems, or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device, which implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

A lidar attitude correction method, device and system. The attitude correction method includes: detecting the ground points in a point cloud (S202), establishing a ground coordinate system from the ground points (S203), calculating the attitude correction parameters between the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system (S204), and controlling the carrying device of the lidar to rotate and/or translate based on the attitude correction parameters so that the lidar is adjusted from the current attitude to the target attitude (S205), thereby automatically correcting the attitude of the lidar and improving the efficiency and accuracy of lidar attitude correction.

Description

Lidar attitude correction method, device and system
Technical Field
This application relates to the field of automatic driving, and in particular to a lidar attitude correction method, device and system.
Background
In the related art, the camera pose refers to the position of the camera in three-dimensional space and the orientation of the camera. The camera pose, combined with the camera's viewing angle and visible distance, determines the exact range that the camera can perceive. In the security field or the automatic driving field, the accuracy of the camera pose directly affects the performance of related functions and the safety of pedestrians in specific scenarios. An image-based camera can obtain accurate pose parameters through methods such as Zhang's calibration method, based on the texture information predefined on a calibration board and the camera's imaging principle. In the lidar attitude correction process, however, lidar measures spatial position rather than texture information, which makes lidar attitude correction difficult; how to correct the lidar attitude is a problem that urgently needs to be solved.
Summary
The technical problem to be solved by the embodiments of this application is to provide a lidar attitude correction method, device and system, which can estimate the current attitude of the lidar based on a point cloud and correct the current attitude to a target attitude, improving scanning efficiency.
In a first aspect, this application provides a lidar attitude correction method, including:
acquiring a point cloud generated by lidar scanning; the number of point clouds is one or more, and when there are multiple point clouds, each point cloud corresponds to one lidar scan frame;
detecting ground points in the point cloud to obtain a ground point set;
establishing a ground coordinate system according to the ground point set;
calculating attitude correction parameters according to the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system;
controlling a carrying device according to the attitude correction parameters to adjust the lidar from the current attitude to the target attitude.
In a possible design, detecting ground points in the point cloud to obtain a ground point set includes:
dividing the point cloud into multiple sectors according to a preset angle interval, where the number of sectors equals 2π/Δθ and Δθ is the angle interval;
dividing each sector into multiple containers according to a preset distance interval;
determining at least one representative point in the multiple containers;
fitting a straight line to the at least one representative point selected in each of the multiple containers;
when the fitted line satisfies a slope threshold condition, taking the representative points selected in each container that satisfy a height threshold condition as ground points.
In a possible design, detecting ground points in the point cloud to obtain a ground point set includes:
acquiring a pre-trained deep learning network;
detecting ground points in the point cloud based on the deep learning network to obtain the ground point set.
In a possible design, the point cloud is a first point cloud, and detecting ground points in the point cloud to obtain a ground point set includes:
determining the normal vector of the first point cloud when the included angle between the horizontal plane of the lidar and the ground is greater than a preset angle;
controlling, through the carrying device, the lidar to scan in a first direction parallel to the ground to generate a second point cloud;
controlling, through the carrying device, the lidar to scan in a second direction parallel to the ground to generate a third point cloud, where the first direction and the second direction are perpendicular to each other;
detecting the points in the first point cloud that are parallel to the first direction and the second direction as ground points.
In a possible design, establishing a ground coordinate system according to the ground point set includes:
acquiring a pre-trained deep learning network;
calculating the normal vector of the ground point set according to the deep learning network;
establishing the ground coordinate system based on the normal vector, where the x-axis and y-axis of the ground coordinate system constitute the ground.
In a second aspect, this application provides a lidar attitude correction device, including:
an acquisition unit, configured to acquire the point cloud generated by lidar scanning;
a detection unit, configured to detect ground points in the point cloud to obtain a ground point set;
a calculation unit, configured to establish a ground coordinate system according to the ground point set, and to calculate attitude correction parameters according to the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system;
an adjustment unit, configured to control the carrying device according to the attitude correction parameters to adjust the lidar from the current attitude to the target attitude.
Another aspect of this application discloses an attitude correction device, including: a receiver, a transmitter, a memory, and a processor; the memory stores a set of program codes, and the processor is configured to call the program codes stored in the memory to execute the lidar attitude correction method described in the above aspects.
Based on the same application concept, since the principle by which the device solves the problem and its beneficial effects can be found in the foregoing method implementations and their beneficial effects, the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
Another aspect of this application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the methods described in the above aspects.
Another aspect of this application provides a computer program product containing instructions that, when run on a computer, causes the computer to execute the methods described in the above aspects.
In the embodiments of this application, the ground points in the point cloud are detected, a ground coordinate system is established from the ground points, the attitude correction parameters between the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system are calculated, and the carrying device of the lidar is controlled to rotate and/or translate based on the attitude correction parameters, so that the lidar is adjusted from the current attitude to the target attitude; this automatically corrects the attitude of the lidar and solves the inefficiency and inaccuracy of manual attitude correction.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or the background art more clearly, the drawings needed in the embodiments or the background art are described below.
FIG. 1 is a schematic diagram of the architecture of an attitude correction system provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a lidar attitude correction method provided by an embodiment of this application;
FIG. 3 to FIG. 7 are schematic diagrams of the principle of attitude correction provided by this embodiment;
FIG. 8 is a schematic structural diagram of an attitude correction device provided by an embodiment of this application;
FIG. 9 is a schematic diagram of another structure of an attitude correction device provided by an embodiment of this application.
Detailed Description
To make the objectives, features, and advantages of the embodiments of this application more obvious and understandable, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them; all other embodiments obtained by those skilled in the art based on the embodiments of this application without creative work fall within the protection scope of this application.
When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings represent the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of devices and methods consistent with some aspects of this application as detailed in the appended claims.
In the description of this application, it should be understood that terms such as "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance. Those of ordinary skill in the art can understand the specific meanings of the above terms in this application according to the specific situation.
Referring to FIG. 1, a schematic structural diagram of the attitude correction system provided by an embodiment of this application, the attitude correction system includes: a lidar, an attitude correction device, and a carrying device. The carrying device is used to carry the lidar and includes, but is not limited to, drones, vehicles, or robotic arms; the carrying device can adjust the attitude of the lidar through rotation and/or translation. As shown in FIG. 1, the carrying device is an unmanned aerial vehicle, which can drive the lidar to translate along the x-axis, y-axis and z-axis and to rotate around the x-axis, y-axis and z-axis; that is, the unmanned aerial vehicle can drive the lidar through attitude adjustments in six degrees of freedom. The attitude correction device may also be provided on the carrying device; it is used to calculate attitude correction parameters and instruct the carrying device to adjust the attitude according to those parameters, so that the current attitude of the lidar is adjusted to the target attitude. The lidar is used to emit detection laser signals; a detection laser signal is reflected after encountering an obstacle, generating an echo laser signal. Obstacles include the ground and non-ground obstacles. The lidar generates the point cloud from parameters such as the strength of the echo signal and the distance from the obstacle to the lidar; the point cloud includes ground points and non-ground points.
Referring to FIG. 2, FIG. 2 shows an attitude correction method provided by an embodiment of the present application. The method includes, but is not limited to, the following steps:
S201: Acquire a point cloud generated by a lidar scan.
The lidar may scan periodically, generating one point cloud (also called a data frame) per scan. The attitude correction apparatus may acquire one or more point clouds to process, i.e. it may correct the attitude of the lidar from a single data frame or from multiple data frames. The point cloud in this embodiment may be a 3D point cloud, i.e. each point carries three-dimensional spatial coordinates and an echo intensity value.
S202: Detect ground points in the point cloud to obtain a ground point set.
The points of a point cloud fall into two types, ground points and non-ground points: ground points are generated when the lidar scans the ground, and non-ground points when it scans non-ground obstacles. This embodiment may detect the ground points in the point cloud by geometric analysis or by a machine-learning algorithm; the ground point set is obtained after traversing all ground points in the point cloud.
In a possible implementation, detecting the ground points in the point cloud includes:
dividing the point cloud into multiple sectors at a preset angular interval;
dividing each sector into multiple bins at a preset distance interval;
determining a representative point in each bin, and fitting a line through the representative points of the bins in the same sector;
when the fitted line satisfies a slope threshold condition, marking the points in the bins involved in the fitting that satisfy a height threshold condition as ground points.
For example, referring to FIG. 3, the point cloud is fitted to a circle of radius r according to its spatial extent. With a preset angular interval of Δθ radians, the point cloud is divided into 2π/Δθ sectors: P_1, P_2, ..., P_{2π/Δθ}. Each sector is then divided into multiple bins at a preset distance interval; for example, sector P_1 is divided into C bins a_1, a_2, ..., a_C, where C is an integer greater than 1. Referring to FIG. 4, a representative point is determined in each bin, e.g. the lowest point in the bin (the point of minimum height). For sector P_1, a line is fitted through the representative points of its bins and the slope of the line is computed. If the slope is smaller than a preset slope threshold, the points in the participating bins that lie below a preset height threshold are marked as ground points; if the slope is greater than or equal to the slope threshold, the fitting stops. The ground points of every sector are detected in this way.
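The sector/bin procedure above can be sketched in NumPy. This is a minimal illustration, not the patented implementation: the parameter defaults (angular interval, bin size, thresholds) and the function name are assumptions chosen for the example.

```python
import numpy as np

def detect_ground_points(points, delta_theta=np.deg2rad(5.0), bin_size=2.0,
                         slope_thresh=0.1, height_thresh=0.3):
    """Sector/bin ground detection sketch.

    points: (N, 3) array of x, y, z coordinates.
    Returns a boolean mask marking the detected ground points.
    """
    angles = np.arctan2(points[:, 1], points[:, 0]) % (2 * np.pi)
    ranges = np.hypot(points[:, 0], points[:, 1])
    sector_idx = (angles // delta_theta).astype(int)   # split into 2*pi/dtheta sectors
    bin_idx = (ranges // bin_size).astype(int)          # split each sector into bins
    ground = np.zeros(len(points), dtype=bool)
    for s in np.unique(sector_idx):
        in_sector = sector_idx == s
        reps_r, reps_z, bins = [], [], []
        for b in np.unique(bin_idx[in_sector]):
            in_bin = in_sector & (bin_idx == b)
            lowest = np.argmin(points[in_bin, 2])       # representative: lowest point
            reps_r.append(ranges[in_bin][lowest])
            reps_z.append(points[in_bin, 2][lowest])
            bins.append(b)
        if len(bins) < 2:
            continue                                    # not enough bins to fit a line
        slope, intercept = np.polyfit(reps_r, reps_z, 1)
        if abs(slope) < slope_thresh:                   # slope threshold condition
            for b in bins:
                in_bin = in_sector & (bin_idx == b)
                # height threshold condition: close to the fitted ground line
                ground[in_bin] = points[in_bin, 2] < (slope * ranges[in_bin]
                                                      + intercept + height_thresh)
    return ground
```

On a flat scene this marks most points as ground while leaving tall obstacle returns unmarked, since they sit far above the fitted per-sector line.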
In this embodiment, the ground points in the point cloud may also be detected as follows:
Embodiment A: determining a pre-trained deep learning network;
detecting the ground points in the point cloud with the deep learning network.
In the training phase, training samples are generated: labeled points whose labels mark each point as ground or non-ground, from which the deep learning network learns to classify sample data as ground or non-ground. Once the training phase is complete, the trained network is used in the test phase: the points of the point cloud generated in S201 are fed into the trained network, which classifies each point as ground or non-ground. In this embodiment, the deep learning network may be a PointNet++ network.
Embodiment B: detecting the ground points in the point cloud includes:
when the angle between the horizontal plane of the lidar and the ground is greater than a preset angle, determining the normal vector of the first point cloud;
controlling, by the carrier device, the lidar to scan along a first direction parallel to the ground to generate a second point cloud. Note that if the carrier device is a vehicle, its direction of travel is always parallel to the ground and it only needs to be driven along different headings; if the carrier device is a UAV, it must be controlled to fly parallel to the ground.
controlling, by the carrier device, the lidar to scan along a second direction parallel to the ground to generate a third point cloud, where the first direction and the second direction are perpendicular to each other. Optionally, the two directions need not be perpendicular: it is sometimes difficult to move the carrier device of the lidar exactly along the ground, so once the normal vector of the ground coordinate system is found, the direction of motion at some instant can be taken as the x direction, and the cross product of the normal vector with the x direction yields the third axis of the ground coordinate system.
detecting points in the first point cloud that are parallel to the first direction and the second direction as ground points.
Embodiments A and B are applicable to single-frame ground point detection. When the angle between the horizontal plane of the lidar and the ground is greater than a preset angle, e.g. 20°, ground points may be detected over multiple frames to improve robustness: at time t1 the point cloud p_1 generated by the lidar scan is acquired, and its normal vector is computed from a K-neighborhood or by PCA (Principal Component Analysis); at time t2 the attitude correction apparatus controls the lidar, via the carrier device, to move a certain distance (e.g. 5 m) along the first direction parallel to the ground, the lidar scans to obtain point cloud p_2, and the apparatus estimates the first motion trajectory of the lidar with the ICP (Iterative Closest Point) algorithm or the NDT (Normal Distributions Transform) algorithm, yielding the x-axis of the ground coordinate system; the apparatus then controls the lidar, via the carrier device, to move a certain distance (e.g. 5 m) along the second direction parallel to the ground, perpendicular to the first, the lidar scans to obtain point cloud p_3, and the apparatus estimates the second motion trajectory with ICP or NDT, yielding the y-axis of the ground coordinate system. The normal of the x-axis and the y-axis is the normal of the ground points, from which the ground points in p_1 are detected.
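The two geometric ingredients of the multi-frame scheme, a PCA plane normal and a third axis obtained by a cross product with an in-plane motion direction, can be sketched as follows. The ICP/NDT trajectory estimation itself is out of scope here; the function names are illustrative, and the motion direction is simply assumed to be supplied.

```python
import numpy as np

def pca_normal(points):
    """Estimate the normal of a roughly planar (N, 3) point set as the
    eigenvector of the covariance matrix with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction of least variance

def ground_frame(normal, x_dir):
    """Build an orthonormal ground frame from the plane normal and one
    in-plane motion direction (e.g. an ICP/NDT-estimated trajectory)."""
    z = normal / np.linalg.norm(normal)
    x = x_dir - np.dot(x_dir, z) * z        # project the motion onto the plane
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                      # third axis by cross product
    return np.stack([x, y, z], axis=1)      # columns are the frame axes
```

Note that `ground_frame` does not require the two scan directions to be perpendicular, matching the optional variant above: any non-degenerate in-plane direction together with the normal determines the frame.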
Note that the specific value of the preset angle may depend on the ground detection method: for example, with a preset angle of 5° the ground detection method of Embodiment A no longer applies, and with a preset angle of 8° the algorithm of Embodiment B no longer applies. The value may also depend on the performance of the lidar: the number of scan lines is positively correlated with the preset angle, i.e. the more scan lines, the larger the preset angle, and vice versa.
S203: Establish a ground coordinate system from the ground point set.
The position and orientation of the ground coordinate system express the attitude of the ground relative to the current radar coordinate system, and the ground coordinate system is built from this attitude. Its z-axis is perpendicular to the ground, and the plane spanned by its x-axis and y-axis is the ground. This embodiment may build the ground coordinate system with the PCA algorithm or with a deep learning network; the resulting ground coordinate system is shown, for example, in FIG. 6.
In a possible implementation, referring to FIG. 5, which shows the ground point set projected onto the x-axis and y-axis, two perpendicular direction vectors, a first and a second, are determined in the set by PCA; they are, respectively, the longest and the widest directions of the set. A normal vector perpendicular to the x-axis and the y-axis is then determined; this normal is the z-axis.
In a possible implementation, the ground coordinate system is built with a deep learning network. In the training phase, training samples are generated: labeled points whose labels give each point's normal vector, from which the deep learning network learns to predict the normal of sample data. Once the training phase is complete, the trained network is used in the test phase: the points of the ground point set obtained in S202 are fed into the trained network, which predicts the normal vector. In this embodiment, the deep learning network may be a PointNet++ network.
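The PCA construction described with FIG. 5 can be sketched directly: the two largest-variance eigenvectors of the ground point set give the in-plane x- and y-axes (the longest and widest directions), and the smallest-variance eigenvector is the normal/z-axis. The function name and the choice of the centroid as origin are assumptions for the example.

```python
import numpy as np

def ground_coordinate_system(ground_points):
    """PCA over an (N, 3) ground point set: the two largest-variance
    eigenvectors span the ground (x- and y-axes), the smallest is z."""
    centered = ground_points - ground_points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))   # ascending eigenvalues
    x_axis, y_axis, z_axis = eigvecs[:, 2], eigvecs[:, 1], eigvecs[:, 0]
    if np.dot(np.cross(x_axis, y_axis), z_axis) < 0:
        z_axis = -z_axis                              # keep the frame right-handed
    origin = ground_points.mean(axis=0)
    return origin, np.stack([x_axis, y_axis, z_axis], axis=1)
```

For an elongated flat patch this recovers the long direction as x, the wide direction as y, and the plane normal as z.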
S204: Compute attitude correction parameters from the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system.
The lidar sits at the origin of the radar coordinate system; the plane spanned by the x-axis and y-axis of the radar coordinate system is the horizontal plane of the lidar, and the z-axis is perpendicular to this plane; see, for example, FIG. 7. The attitude correction apparatus is preconfigured with a target attitude in the ground coordinate system. From the spatial geometric relations, it can compute the attitude correction parameters between the current attitude in the radar coordinate system and the target attitude in the ground coordinate system. The attitude correction parameters include a rotation (rx, ry, rz), i.e. the angles of rotation about the x-, y- and z-axes, and a translation (dx, dy, dz), i.e. the displacements along the x-, y- and z-axes.
Taking the radar coordinate system as the reference frame, consider two objects O_1 and O_2 at two positions and two orientations in this frame. O_1 has attitude T_1 (a matrix expressing the spatial position and orientation of O_1 in the reference frame, decomposable as T_1 = M_r1·M_t1, i.e. a translation offset M_t1 and a rotation offset M_r1 relative to the reference frame). O_2 has attitude T_2 (similarly expressing another spatial position and orientation of O_2 in the reference frame, T_2 = M_r2·M_t2, with translation offset M_t2 and rotation offset M_r2). Suppose there is a transformation matrix M (the matrix form of [dx, dy, dz, rx, ry, rz]) that brings T_1 into coincidence with T_2. Mathematically, T_2 = M·T_1, so M = T_2·Inv(T_1), where Inv(·) denotes matrix inversion. Here T_1 can be taken as the current attitude of the radar (by default coinciding with the radar coordinate system, i.e. T_1 is the identity matrix), T_2 as the attitude of the ground or the given target attitude, and M is the attitude correction parameter.
For example, the attitude correction parameters can be expressed as a transformation matrix encoding the rotation and translation that bring the radar coordinate system into coincidence with the ground coordinate system. Suppose the radar coordinate system has origin (0, 0, 0), z-axis (0, 0, 1), x-axis (1, 0, 0) and y-axis (0, 1, 0), and the ground coordinate system has origin (1, 0, 0), z-axis/normal (0, 0, 1), x-axis (0, 1, 0) and y-axis (-1, 0, 0). The rotation and translation are then: translate the origin of the radar coordinate system by 1 unit along the x-axis, then rotate the x-axis and y-axis counterclockwise by 90 degrees, so that the radar coordinate system coincides with the ground coordinate system.
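The relation T_2 = M·T_1 and the numeric example above can be checked with 4x4 homogeneous matrices. The helper names (`pose_matrix`, `correction_parameters`) are illustrative, not the patent's terminology; the sketch assumes poses are given as orthonormal axes plus an origin.

```python
import numpy as np

def pose_matrix(origin, x_axis, y_axis, z_axis):
    """4x4 homogeneous pose: the rotation columns are the frame axes,
    the translation is the frame origin."""
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x_axis, y_axis, z_axis
    T[:3, 3] = origin
    return T

def correction_parameters(T_current, T_target):
    """Correction transform M with T_target = M @ T_current,
    i.e. M = T_target @ inv(T_current)."""
    return T_target @ np.linalg.inv(T_current)
```

With the radar frame as the identity and the ground frame from the example, M comes out as a translation of 1 unit along x combined with a 90-degree counterclockwise rotation about z, as stated above.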
S205: Control the carrier device, according to the attitude correction parameters, to adjust the lidar from the current attitude to the target attitude.
The carrier device may be a device with six degrees of freedom of motion. The attitude correction apparatus drives the carrier device with the attitude correction parameters computed in S204, so that the lidar is adjusted from the current attitude to the target attitude. Referring to FIG. 7, the correction apparatus adjusts the attitude of the lidar according to the attitude correction parameters so that the horizontal plane of the lidar is parallel to the ground.
As described with FIG. 2, ground points are detected in the point cloud, a ground coordinate system is established from the ground points, the attitude correction parameters between the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system are computed, and the carrier device of the lidar is rotated and/or translated according to these parameters so that the lidar is adjusted from the current attitude to the target attitude. The attitude of the lidar is thus corrected automatically, solving the inefficiency and inaccuracy of manual attitude correction.
The above describes a lidar attitude correction method of an embodiment of the present application in detail; an attitude correction apparatus of an embodiment of the present application (hereinafter apparatus 3) is provided below.
Apparatus 3 shown in FIG. 8 can implement the lidar attitude correction method of the embodiment shown in FIG. 2. Apparatus 3 includes an acquisition unit 301, a detection unit 302, a computation unit 303 and an adjustment unit 304.
The acquisition unit 301 is configured to acquire a point cloud generated by a lidar scan;
the detection unit 302 is configured to detect ground points in the point cloud to obtain a ground point set;
the computation unit 303 is configured to establish a ground coordinate system from the ground point set,
and to compute attitude correction parameters from the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system;
the adjustment unit 304 is configured to control the carrier device, according to the attitude correction parameters, to adjust the lidar from the current attitude to the target attitude.
Optionally, detecting the ground points in the point cloud to obtain a ground point set includes:
dividing the point cloud into multiple sectors at a preset angular interval;
dividing each sector into multiple bins at a preset distance interval;
determining a representative point in each bin, and fitting a line through the representative points of the bins in the same sector;
when the fitted line satisfies a slope threshold condition, marking the points in the bins involved in the fitting that satisfy a height threshold condition as ground points.
Optionally, detecting the ground points in the point cloud to obtain a ground point set includes:
obtaining a pre-trained deep learning network;
detecting the ground points in the point cloud with the deep learning network to obtain the ground point set.
Optionally, the angle between the horizontal plane of the lidar and the ground is less than or equal to a preset angle.
Optionally, the point cloud is a first point cloud;
wherein detecting the ground points in the point cloud to obtain a ground point set includes:
when the angle between the horizontal plane of the lidar and the ground is greater than a preset angle, determining a normal vector of the first point cloud;
controlling, by the carrier device, the lidar to scan along a first direction parallel to the ground to generate a second point cloud;
controlling, by the carrier device, the lidar to scan along a second direction parallel to the ground to generate a third point cloud, where the first direction and the second direction are perpendicular to each other;
detecting points in the first point cloud that are parallel to the first direction and the second direction as ground points.
Optionally, establishing the ground coordinate system from the ground point set includes:
obtaining a pre-trained deep learning network;
computing a normal vector of the ground point set with the deep learning network;
establishing the ground coordinate system based on the normal vector, where the x-axis and the y-axis of the ground coordinate system span the ground.
This apparatus embodiment is based on the same concept as the method embodiments of FIG. 1 to FIG. 7 and brings the same technical effects; for the specific process, refer to the description of the method embodiments of FIG. 1 to FIG. 7, which is not repeated here.
Apparatus 3 may be a field-programmable gate array (FPGA), an application-specific integrated chip, a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit or a microcontroller unit (MCU) implementing the relevant functions, or may adopt a programmable logic device (PLD) or another integrated chip.
The above describes a lidar attitude correction method of an embodiment of the present application in detail; another attitude correction apparatus of an embodiment of the present application (hereinafter apparatus 4) is provided below.
FIG. 9 is a schematic structural diagram of an apparatus provided by an embodiment of the present application, hereinafter apparatus 4. Apparatus 4 may be integrated in the lidar or the carrier device of the foregoing embodiments. As shown in FIG. 9, the apparatus includes a memory 402, a processor 401, a transmitter 404 and a receiver 403.
The memory 402 may be an independent physical unit connected to the processor 401, the transmitter 404 and the receiver 403 by a bus. The memory 402, processor 401, transmitter 404 and receiver 403 may also be integrated together and implemented in hardware, etc.
The transmitter 404 is configured to transmit signals, and the receiver 403 is configured to receive signals.
The memory 402 is configured to store the program of the above method embodiments, or of each module of the apparatus embodiments; the processor 401 calls the program to perform the operations of the above method embodiments.
Optionally, when part or all of the lidar attitude correction method of the above embodiments is implemented in software, the apparatus may contain only a processor. The memory storing the program is located outside the apparatus, and the processor is connected to the memory by circuits/wires to read and execute the program stored in the memory.
The processor may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
The processor may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL) or any combination thereof.
The memory may include volatile memory, such as random-access memory (RAM); it may also include non-volatile memory, such as flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); it may also include a combination of the above kinds of memory.
In the above embodiments, the sending unit or transmitter performs the sending steps of the method embodiments, the receiving unit or receiver performs the receiving steps, and the other steps are performed by other units or by the processor. The sending unit and the receiving unit may form a transceiver unit, and the receiver and the transmitter may form a transceiver.
An embodiment of the present application further provides a computer storage medium storing a computer program for performing the lidar attitude correction method provided by the above embodiments.
An embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the lidar attitude correction method provided by the above embodiments.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to embodiments of the present application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Claims (10)

  1. An attitude correction method for a lidar, comprising:
    acquiring a point cloud generated by a lidar scan;
    detecting ground points in the point cloud to obtain a ground point set;
    establishing a ground coordinate system from the ground point set;
    computing attitude correction parameters from a current attitude of the lidar in a radar coordinate system and a target attitude in the ground coordinate system;
    controlling a carrier device, according to the attitude correction parameters, to adjust the lidar from the current attitude to the target attitude.
  2. The method according to claim 1, wherein detecting the ground points in the point cloud to obtain a ground point set comprises:
    dividing the point cloud into multiple sectors at a preset angular interval;
    dividing each sector into multiple bins at a preset distance interval;
    determining a representative point in each bin, and fitting a line through the representative points of the bins in the same sector;
    when the fitted line satisfies a slope threshold condition, marking the points in the bins involved in the fitting that satisfy a height threshold condition as ground points.
  3. The method according to claim 1, wherein detecting the ground points in the point cloud to obtain a ground point set comprises:
    obtaining a pre-trained deep learning network;
    detecting the ground points in the point cloud with the deep learning network to obtain the ground point set.
  4. The method according to claim 2 or 3, wherein the angle between the horizontal plane of the lidar and the ground is less than or equal to a preset angle.
  5. The method according to claim 1, wherein the point cloud is a first point cloud;
    wherein detecting the ground points in the point cloud to obtain a ground point set comprises:
    when the angle between the horizontal plane of the lidar and the ground is greater than a preset angle, determining a normal vector of the first point cloud;
    controlling, by the carrier device, the lidar to scan along a first direction parallel to the ground to generate a second point cloud;
    controlling, by the carrier device, the lidar to scan along a second direction parallel to the ground to generate a third point cloud, wherein the first direction and the second direction are perpendicular to each other;
    detecting points in the first point cloud that are parallel to the first direction and the second direction as ground points.
  6. The method according to claim 1, wherein establishing the ground coordinate system from the ground point set comprises:
    obtaining a pre-trained deep learning network;
    computing a normal vector of the ground point set with the deep learning network;
    establishing the ground coordinate system based on the normal vector, wherein the x-axis and the y-axis of the ground coordinate system span the ground.
  7. An attitude correction apparatus, comprising:
    an acquisition unit configured to acquire a point cloud generated by a lidar scan;
    a detection unit configured to detect ground points in the point cloud to obtain a ground point set;
    a computation unit configured to establish a ground coordinate system from the ground point set,
    and to compute attitude correction parameters from the current attitude of the lidar in the radar coordinate system and the target attitude in the ground coordinate system;
    an adjustment unit configured to control a carrier device, according to the attitude correction parameters, to adjust the lidar from the current attitude to the target attitude.
  8. A computer program product, comprising instructions which, when the computer program product runs on a computer, cause the computer to perform the method according to any one of claims 1 to 6.
  9. An attitude correction apparatus, comprising a processor and a memory, wherein the memory is configured to store a computer program or instructions, and the processor is configured to execute the computer program or instructions in the memory to implement the method according to any one of claims 1 to 6.
  10. An attitude correction system, comprising: the attitude correction apparatus according to claim 7 or 9, a lidar and a carrier device, wherein the carrier device is configured to carry the lidar and comprises an unmanned aerial vehicle, a vehicle or a robotic arm.
PCT/CN2020/081826 2020-03-27 2020-03-27 激光雷达的姿态校正方法、装置和系统 WO2021189468A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/081826 WO2021189468A1 (zh) 2020-03-27 2020-03-27 激光雷达的姿态校正方法、装置和系统
CN202310645293.8A CN116930933A (zh) 2020-03-27 2020-03-27 激光雷达的姿态校正方法和装置
CN202080005491.2A CN113748357B (zh) 2020-03-27 2020-03-27 激光雷达的姿态校正方法、装置和系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/081826 WO2021189468A1 (zh) 2020-03-27 2020-03-27 激光雷达的姿态校正方法、装置和系统

Publications (1)

Publication Number Publication Date
WO2021189468A1 true WO2021189468A1 (zh) 2021-09-30

Family

ID=77891513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/081826 WO2021189468A1 (zh) 2020-03-27 2020-03-27 激光雷达的姿态校正方法、装置和系统

Country Status (2)

Country Link
CN (2) CN113748357B (zh)
WO (1) WO2021189468A1 (zh)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114136194A (zh) * 2021-10-12 2022-03-04 江苏丰尚智能科技有限公司 仓内物料体积监测方法、装置、监测设备和存储介质
CN114353780A (zh) * 2021-12-31 2022-04-15 高德软件有限公司 姿态优化方法及设备
CN114812408A (zh) * 2022-04-07 2022-07-29 中车青岛四方车辆研究所有限公司 扫石器距离轨面高度的测量方法及测量系统
CN114866685A (zh) * 2022-03-16 2022-08-05 金钱猫科技股份有限公司 一种激光摄像装置的姿态矫正方法和系统
CN114994700A (zh) * 2022-05-19 2022-09-02 瑞诺(济南)动力科技有限公司 一种流动机械的定位方法、设备及介质
CN115015889A (zh) * 2022-05-31 2022-09-06 襄阳达安汽车检测中心有限公司 激光雷达位姿调整方法、装置、设备及可读存储介质
CN115077385A (zh) * 2022-07-05 2022-09-20 北京斯年智驾科技有限公司 无人集卡集装箱位姿测量方法及其测量系统
CN115079128A (zh) * 2022-08-23 2022-09-20 深圳市欢创科技有限公司 一种激光雷达点云数据去畸变的方法、装置及机器人
CN115272248A (zh) * 2022-08-01 2022-11-01 无锡海纳智能科技有限公司 一种风机姿态的智能检测方法以及电子设备
CN115267751A (zh) * 2022-08-19 2022-11-01 广州小鹏自动驾驶科技有限公司 传感器标定方法、装置、车辆及存储介质
CN116125446A (zh) * 2023-01-31 2023-05-16 清华大学 旋转驱动式多线激光雷达三维重建装置的标定方法及装置
CN116151628A (zh) * 2023-04-19 2023-05-23 深圳市岩土综合勘察设计有限公司 隧道施工中地面沉降的监测与预警系统
CN114994700B (zh) * 2022-05-19 2024-06-11 瑞诺(济南)动力科技有限公司 一种流动机械的定位方法、设备及介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114870364B (zh) * 2022-04-19 2023-12-19 深圳市华屹医疗科技有限公司 健身器械控制方法、健身器械及存储介质
CN115079126B (zh) * 2022-05-12 2024-05-14 探维科技(北京)有限公司 点云处理方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596860A (zh) * 2018-05-10 2018-09-28 芜湖航飞科技股份有限公司 一种基于三维激光雷达的地面点云分割方法
CN109001711A (zh) * 2018-06-05 2018-12-14 北京智行者科技有限公司 多线激光雷达标定方法
CN109425365A (zh) * 2017-08-23 2019-03-05 腾讯科技(深圳)有限公司 激光扫描设备标定的方法、装置、设备及存储介质
CN109696663A (zh) * 2019-02-21 2019-04-30 北京大学 一种车载三维激光雷达标定方法和系统
EP3550326A1 (en) * 2018-04-03 2019-10-09 Continental Automotive GmbH Calibration of a sensor arrangement
CN110796128A (zh) * 2020-01-06 2020-02-14 中智行科技有限公司 一种地面点识别方法、装置及存储介质和终端设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105223583B (zh) * 2015-09-10 2017-06-13 清华大学 一种基于三维激光雷达的目标车辆航向角计算方法
CN108732584B (zh) * 2017-04-17 2020-06-30 百度在线网络技术(北京)有限公司 用于更新地图的方法和装置
CN108932736B (zh) * 2018-05-30 2022-10-11 南昌大学 二维激光雷达点云数据处理方法以及动态机器人位姿校准方法



Also Published As

Publication number Publication date
CN113748357B (zh) 2023-06-30
CN113748357A (zh) 2021-12-03
CN116930933A (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
WO2021189468A1 (zh) 激光雷达的姿态校正方法、装置和系统
US11042723B2 (en) Systems and methods for depth map sampling
US10677907B2 (en) Method to determine the orientation of a target vehicle
US9521317B2 (en) Method and apparatus for detecting obstacle based on monocular camera
CN108377380B (zh) 影像扫描系统及其方法
CN111860295B (zh) 基于无人车的障碍物检测方法、装置、设备以及存储介质
WO2021098448A1 (zh) 传感器标定方法及装置、存储介质、标定系统和程序产品
WO2021016854A1 (zh) 一种标定方法、设备、可移动平台及存储介质
CN106569225B (zh) 一种基于测距传感器的无人车实时避障方法
US20210004566A1 (en) Method and apparatus for 3d object bounding for 2d image data
CN111380510B (zh) 重定位方法及装置、机器人
CN111123242B (zh) 一种基于激光雷达和相机的联合标定方法及计算机可读存储介质
CN111522022B (zh) 基于激光雷达的机器人进行动态目标检测方法
JP2019032700A (ja) 情報処理装置、情報処理方法、プログラムおよび移動体
WO2018091685A1 (en) Self-calibrating sensor system for a wheeled vehicle
CN113569958A (zh) 激光点云数据聚类方法、装置、设备及介质
WO2021189479A1 (zh) 路基传感器的位姿校正方法、装置和路基传感器
JP6813436B2 (ja) 情報処理装置、移動体、情報処理方法、およびプログラム
CN115100287A (zh) 外参标定方法及机器人
US20210404843A1 (en) Information processing apparatus, control method for information processing apparatus, and storage medium
US20240112363A1 (en) Position estimation system, position estimation method, and program
WO2022160101A1 (zh) 朝向估计方法、装置、可移动平台及可读存储介质
KR102624644B1 (ko) 벡터 맵을 이용한 이동체의 맵 매칭 위치 추정 방법
EP4202834A1 (en) Systems and methods for generating three-dimensional reconstructions of environments
CN117250956A (zh) 一种多观测源融合的移动机器人避障方法和避障装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926407

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.01.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20926407

Country of ref document: EP

Kind code of ref document: A1