WO2024037144A1 - Lidar occlusion judgment and cleaning method and device - Google Patents

Lidar occlusion judgment and cleaning method and device

Info

Publication number
WO2024037144A1
WO2024037144A1 PCT/CN2023/099181 CN2023099181W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cleaning
depth map
maximum
declination
Prior art date
Application number
PCT/CN2023/099181
Other languages
English (en)
French (fr)
Inventor
谢国涛
梁豪
胡满江
魏辰峰
秦晓辉
徐彪
秦兆博
王晓伟
秦洪懋
边有钢
丁荣军
Original Assignee
湖南大学无锡智能控制研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 湖南大学无锡智能控制研究院 filed Critical 湖南大学无锡智能控制研究院
Publication of WO2024037144A1 publication Critical patent/WO2024037144A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B08CLEANING
    • B08BCLEANING IN GENERAL; PREVENTION OF FOULING IN GENERAL
    • B08B3/00Cleaning by methods involving the use or presence of liquid or steam
    • B08B3/02Cleaning by the force of jets or sprays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/507Depth or shape recovery from shading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • The invention relates to the field of autonomous driving technology, and in particular to a method and device for lidar occlusion judgment and cleaning.
  • The key technologies for autonomous driving comprise three parts: environment perception, decision-making and planning, and control execution.
  • Sound environment perception underpins all downstream autonomous driving functions.
  • Sensors used for environment perception mainly include millimeter-wave radar, cameras, and lidar.
  • Lidar has the advantages of high angular and range resolution, strong anti-interference capability, and the ability to obtain a variety of target image information (depth, reflectivity, etc.).
  • However, a lidar may be blocked by obstructions, which poses a serious threat to safe driving. Ensuring that the lidar works properly is a problem in urgent need of a solution.
  • Chinese patent document CN111429400A provides a method, device, system, and medium for detecting dirt on a lidar window. That solution determines the obstruction from a preset distance between the obstacle and the lidar window; the process is complicated, and when that distance cannot be determined, the obstruction can be neither detected nor cleaned.
  • The object of the present invention is to provide a method and device for lidar occlusion judgment and cleaning that overcome, or at least alleviate, at least one of the above-mentioned defects of the prior art.
  • To this end, the present invention provides a lidar occlusion judgment and cleaning method, which includes:
  • converting each frame of raw point cloud data into a depth map, including: establishing a coordinate system with the lidar at the center;
  • for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane;
  • obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin;
  • the parameter information of the points in the depth map includes: declination angle α, declination angle β, and signal intensity;
  • the shaded area is determined in the following way:
  • the missing part of the point cloud is used as the shadow area, and the surrounding points of the shadow area are used to determine the maximum and minimum horizontal declination angles of the shadow area.
  • the method also includes:
  • the method further includes: outputting the maximum and minimum vertical declination angles of the shadow area;
  • Determining the cleaning range based on the maximum and minimum horizontal declination angles includes: determining the cleaning range based on the maximum and minimum horizontal declination angles, combined with the maximum and minimum vertical declination angles.
  • determining the cleaning scope includes:
  • the area on the lidar surface corresponding to the maximum and minimum horizontal declination angle and vertical declination angle is determined as the cleaning area.
  • Embodiments of the present invention also provide a laser radar occlusion judgment and cleaning device, which includes:
  • Lidar, used to collect each frame of raw point cloud data with point cloud coordinates and signal strength;
  • Processor, used to convert each frame of raw point cloud data into a depth map, including: establishing a coordinate system with the lidar at the center; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, and obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin;
  • the parameter information of the points in the depth map includes: declination angle α, declination angle β, and signal strength; when the shadow area in the depth map exceeds the threshold, the state is judged abnormal, and the maximum and minimum horizontal declination angles of the shadow region are output;
  • Cleaning equipment, used to determine the cleaning range based on the maximum and minimum horizontal declination angles and to clean the lidar surface.
  • the processor determines the shadow area in the following manner:
  • the missing part of the point cloud is used as the shadow area, and the surrounding points of the shadow area are used to determine the maximum and minimum horizontal declination angles of the shadow area.
  • the processor is also used to: determine whether there is an area in the depth map where the signal strength is lower than a preset value, and if so, determine that area to be a shadow area as well.
  • the processor is also used to:
  • output the maximum and minimum vertical declination angles of the shadow area; the cleaning range is determined based on the maximum and minimum horizontal declination angles, combined with the maximum and minimum vertical declination angles.
  • the processor is also used for:
  • the area on the lidar surface corresponding to the maximum and minimum horizontal declination angle and vertical declination angle is determined as the cleaning area.
  • the present invention has the following advantages: by converting each frame of raw point cloud data collected by the lidar into a depth map, using the shadow portion of the depth map to determine where the lidar surface needs cleaning, and controlling the corresponding nozzle to clean it, the radar can be cleaned in a targeted manner, ensuring normal lidar operation and timely, efficient removal of obstructions.
  • Figure 1 is a schematic flowchart of a lidar occlusion judgment and cleaning method provided by an embodiment of the present invention.
  • Figure 2 is a schematic diagram of converting raw point cloud data collected by lidar into a depth map in an example of the present invention.
  • Figure 3 is a schematic structural diagram of the cleaning equipment in an example of the present invention.
  • Figure 4 is a schematic structural block diagram of a laser radar occlusion judgment and cleaning device provided by an embodiment of the present invention.
  • Embodiments of the present invention provide a lidar occlusion judgment and cleaning method, as shown in Figure 1, including:
  • Step 10 Collect each frame of original point cloud data with point cloud coordinates and signal strength through lidar.
  • Step 20: Convert each frame of raw point cloud data into a depth map, including: establishing a coordinate system with the lidar at the center; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin, and determining the position of the point in the depth map from the declination angles α and β. The parameter information of the points in the depth map includes: declination angle α, declination angle β, and signal strength.
  • Step 30: When the shadow area in the depth map exceeds the threshold, judge the state abnormal and output the maximum and minimum horizontal declination angles of the shadow region.
  • Step 40: Determine the cleaning range based on the maximum and minimum horizontal declination angles, and clean the lidar surface.
  • the shaded area is determined in the following way:
  • the missing part of the point cloud is used as the shadow area, and the surrounding points of the shadow area are used to determine the maximum and minimum horizontal declination angles of the shadow area.
  • the method may also include: determining whether there is an area in the depth map where the signal strength is lower than a preset value, and if so, determining that area to be a shadow area as well. That is, not only the missing part of the point cloud counts as shadow area; an area where the signal strength is lower than the preset value also belongs to the shadow area. The peripheral points of the shadow area are then used to determine its maximum and minimum horizontal declination angles.
  • The way these peripheral points are selected can be set as needed.
  • For example, the point nearest to the shaded area that does not belong to it is used as a peripheral point.
  • the method also includes:
  • Determining the cleaning range based on the maximum and minimum horizontal declination angles includes: determining the cleaning range based on the maximum and minimum horizontal declination angles combined with the maximum and minimum vertical declination angles.
  • the area on the lidar surface corresponding to the maximum and minimum horizontal declination angle and vertical declination angle is determined as the cleaning area.
  • the lidar occlusion judgment and cleaning method provided by the present invention is introduced below through a specific example, which includes: first converting each frame of raw point cloud data, with point cloud coordinates and intensity, collected by the lidar into a depth map; a target classification detection model then evaluates the depth map, and the detection result is either normal or abnormal. In the normal case, no cleaning is needed; in the abnormal case, cleaning is determined to be needed, the location requiring cleaning is output, and the cleaning equipment performs the cleaning.
  • the target classification detection model is a pre-trained deep learning classification model whose categories comprise the two judgments on the depth map, namely normal and abnormal. In the normal case, no cleaning is needed; in the abnormal case, cleaning is determined to be needed and the location requiring cleaning is output.
  • the threshold is a preset value and can be flexibly adjusted according to actual needs. For a small part of the occlusion, it is not considered to have a large impact, and the threshold here is mainly set by the specific task.
  • the identification of shadow areas is mainly based on judging whether there are missing point clouds in some areas of the depth map.
  • when point cloud is missing, the peripheral points of the shadow area are determined; the peripheral points are used to find the maximum and minimum horizontal and vertical declination angles of the area, and the position of the shadow is determined from these angles.
  • the cleaning control module controls the nozzle to clean from top to bottom within the declination angle range, without determining the vertical declination range.
  • the deep learning classification model in this example can be trained on manually labeled data sets during pre-training, until the detection results obtained with the model meet preset conditions, for example, the error falls within a preset range, or the model converges within a range that meets the needs of the actual application.
  • the method provided by the invention is particularly suitable for opaque shields.
  • the present invention does not limit the specific target classification detection and judgment method, as long as the detection effect of the present invention can be achieved, it can be applied, and this article does not limit this.
  • the cleaning equipment controls the nozzle, etc. to clean:
  • the cleaning equipment determines which nozzle needs to work and for how long based on the shadow position in the depth map.
  • the cleaning equipment mainly includes two parts: accepting occlusion information to make decisions on cleaning work and outputting cleaning instructions to drive specific nozzle work.
  • the target classification detection model determines that the current lidar is unobstructed, the output is normal, the cleaning equipment will turn off the nozzle, and the lidar works normally.
  • When the target classification detection model determines that the current lidar is blocked, the cleaning equipment decides which nozzles to open and close based on the blocking position information sent by the model. This decision rests mainly on the correspondence between the nozzles and the lidar surface. After receiving the abnormal command, the cleaning equipment controls the corresponding nozzles according to the determined occlusion position, so that the unblocked area of the lidar can continue to work normally, which also saves energy.
  • the output instructions of the cleaning equipment mainly include information such as the number, opening and closing of the working nozzle, and working time.
  • the specific nozzle working duration, nozzle angle, and number of nozzles can be flexibly adjusted according to actual needs, and this article will not go into details.
  • Figure 3 shows a schematic structural diagram of the cleaning equipment.
  • the cleaning equipment may include: a fixed bracket 31, connection holes 32, a nozzle 33, a spray pipe 34, and a lidar 35.
  • the fixed bracket 31 mainly plays a connecting role, joining the car body, the lidar 35, the nozzle 33, etc.;
  • the connection holes 32 mainly fix the bracket 31 on the car body;
  • the spray pipe 34 connects the nozzle 33 to the cleaning fluid storage tank; the nozzle 33 is fixed by the fixed bracket 31 at a certain angle, which in turn fixes the spray pipe 34. The number of nozzles can be set flexibly according to actual needs, and adjusting a nozzle's angle adjusts the position on the lidar that it cleans.
  • the flushing surface is the surface position of the lidar 35 that the nozzle 33 cleans; adjusting the angle and position at which the bracket fixes the nozzle selects among different flushing surfaces. The lidar 35 is fixed on the base of the fixed bracket 31.
  • Figure 3 is only an example of a cleaning device. It is easy to understand that the cleaning device can also adopt other structures, which is not limited in this article.
  • An embodiment of the present invention also provides a laser radar occlusion judgment and cleaning device, as shown in Figure 4, including:
  • Lidar 41, used to collect each frame of raw point cloud data with point cloud coordinates and signal strength;
  • Processor 42, used to convert each frame of raw point cloud data into a depth map, including: establishing a coordinate system with the lidar at the center; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, and obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin;
  • the declination angles α and β determine the position of the point in the depth map;
  • the parameter information of the points in the depth map includes: declination angle α, declination angle β, and signal strength; when the shadow area in the depth map exceeds the threshold, the state is judged abnormal, and the maximum and minimum horizontal declination angles of the shadow region are output;
  • Cleaning equipment 43, used to determine the cleaning range based on the maximum and minimum horizontal declination angles and to clean the lidar surface.
  • the processor 42 determines the shadow area in the following manner:
  • the missing part of the point cloud is used as the shadow area, and the surrounding points of the shadow area are used to determine the maximum and minimum horizontal declination angles of the shadow area.
  • the processor 42 may also be configured to determine whether there is an area in the depth map where the signal strength is lower than a preset value, and if so, determine whether the area where the signal strength is lower than the preset value is a shadow area.
  • the processor 42 can also be used for:
  • the cleaning range is determined based on the maximum and minimum horizontal declination angles, combined with the maximum and minimum vertical declination angles.
  • the processor 42 can also be used for:
  • the area on the lidar surface corresponding to the maximum and minimum horizontal declination angle and vertical declination angle is determined as the cleaning area.
  • By converting each frame of raw point cloud data collected by the lidar into a depth map, using the shadow portion of the depth map to determine where the lidar surface needs cleaning, and controlling the corresponding nozzle to clean it, the radar can be cleaned in a targeted manner, ensuring normal lidar operation and timely, efficient removal of obstructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention discloses a lidar occlusion judgment and cleaning method and device. The method includes: collecting, via the lidar, each frame of raw point cloud data carrying point cloud coordinates and signal intensity; converting each frame of raw point cloud data into a depth map, where the parameter information of each point in the depth map includes the declination angle α, the declination angle β, and the signal intensity; when the shadow area in the depth map exceeds a threshold, judging the state abnormal and outputting the maximum and minimum horizontal declination angles of the shadow region; and determining a cleaning range from the maximum and minimum horizontal declination angles and cleaning the lidar surface. In the present invention, by converting each frame of raw point cloud data collected by the lidar into a depth map, using the shadow portion of the depth map to locate where the lidar surface needs cleaning, and controlling the corresponding nozzle to clean it, the radar can be cleaned in a targeted manner, ensuring normal lidar operation and timely, efficient removal of obstructions.

Description

Lidar occlusion judgment and cleaning method and device

Technical Field
The present invention relates to the field of autonomous driving technology, and in particular to a lidar occlusion judgment and cleaning method and device.
Background Art
The key technologies of autonomous driving comprise three parts: environment perception, decision-making and planning, and control execution. Sound environment perception underpins all downstream autonomous driving functions. The sensors used for environment perception mainly include millimeter-wave radar, cameras, and lidar. Among them, lidar offers high angular and range resolution, strong anti-interference capability, and the ability to obtain multiple kinds of target image information (depth, reflectivity, etc.).
As autonomous driving technology matures, the demand for safe driving grows ever stronger. Since environment perception serves as the eyes of autonomous driving, ensuring the normal operation of the lidar and other sensors is critical.
However, a lidar may be covered and occluded by obstructions, which poses a serious threat to driving safety. Ensuring that the lidar keeps working normally is therefore an urgent problem.
Chinese patent document CN111429400A provides a method, device, system, and medium for detecting dirt on a lidar window. That solution determines the obstruction from a preset distance between the obstacle and the lidar window; the process is complicated, and when the distance between the obstacle and the lidar window cannot be determined, the obstruction can be neither detected nor cleaned.
Summary of the Invention
The object of the present invention is to provide a lidar occlusion judgment and cleaning method and device that overcome, or at least alleviate, at least one of the above-mentioned defects of the prior art.
To achieve the above object, the present invention provides a lidar occlusion judgment and cleaning method, comprising:
collecting, via the lidar, each frame of raw point cloud data carrying point cloud coordinates and signal intensity;
converting each frame of raw point cloud data into a depth map, comprising: establishing a coordinate system centered on the lidar; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin, and determining the position of the point in the depth map from the declination angles α and β; the parameter information of each point in the depth map including the declination angle α, the declination angle β, and the signal intensity;
when the shadow area in the depth map exceeds a threshold, judging the state abnormal and outputting the maximum and minimum horizontal declination angles of the shadow region;
determining a cleaning range from the maximum and minimum horizontal declination angles, and cleaning the lidar surface.
Preferably, the shadow region is determined as follows:
judging whether any point cloud is missing in the depth map;
when point cloud is missing, taking the missing part as the shadow region, and using the peripheral points of the shadow region to determine its maximum and minimum horizontal declination angles.
Preferably, the method further comprises:
judging whether the depth map contains a region whose signal intensity is lower than a preset value, and if so, determining that region to be a shadow region.
Preferably, the method further comprises: outputting the maximum and minimum vertical declination angles of the shadow region;
wherein determining the cleaning range from the maximum and minimum horizontal declination angles comprises: determining the cleaning range from the maximum and minimum horizontal declination angles combined with the maximum and minimum vertical declination angles.
Preferably, determining the cleaning range comprises:
determining a cleaning angle range; or
determining, as the cleaning area, the region of the lidar surface corresponding to the maximum and minimum horizontal and vertical declination angles.
An embodiment of the present invention further provides a lidar occlusion judgment and cleaning device, comprising:
a lidar, configured to collect each frame of raw point cloud data carrying point cloud coordinates and signal intensity;
a processor, configured to convert each frame of raw point cloud data into a depth map, comprising: establishing a coordinate system centered on the lidar; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin, and determining the position of the point in the depth map from the declination angles α and β; the parameter information of each point in the depth map including the declination angle α, the declination angle β, and the signal intensity; and, when the shadow area in the depth map exceeds a threshold, judging the state abnormal and outputting the maximum and minimum horizontal declination angles of the shadow region;
cleaning equipment, configured to determine a cleaning range from the maximum and minimum horizontal declination angles and to clean the lidar surface.
Preferably, the processor determines the shadow region as follows:
judging whether any point cloud is missing in the depth map;
when point cloud is missing, taking the missing part as the shadow region, and using the peripheral points of the shadow region to determine its maximum and minimum horizontal declination angles.
Preferably, the processor is further configured to:
judge whether the depth map contains a region whose signal intensity is lower than a preset value, and if so, determine that region to be a shadow region.
Preferably, the processor is further configured to:
output the maximum and minimum vertical declination angles of the shadow region;
determine the cleaning range from the maximum and minimum horizontal declination angles combined with the maximum and minimum vertical declination angles.
Preferably, the processor is further configured to:
determine a cleaning angle range; or
determine, as the cleaning area, the region of the lidar surface corresponding to the maximum and minimum horizontal and vertical declination angles.
By adopting the above technical solution, the present invention has the following advantages: by converting each frame of raw point cloud data collected by the lidar into a depth map, using the shadow portion of the depth map to determine where the lidar surface needs cleaning, and controlling the corresponding nozzle to clean it, the radar can be cleaned in a targeted manner, ensuring normal lidar operation and timely, efficient removal of obstructions.
Brief Description of the Drawings
Figure 1 is a schematic flowchart of the lidar occlusion judgment and cleaning method provided by an embodiment of the present invention.
Figure 2 is a schematic diagram of converting raw point cloud data collected by the lidar into a depth map in an example of the present invention.
Figure 3 is a schematic structural diagram of the cleaning equipment in an example of the present invention.
Figure 4 is a schematic structural block diagram of the lidar occlusion judgment and cleaning device provided by an embodiment of the present invention.
Detailed Description of the Embodiments
In the accompanying drawings, identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions. The embodiments of the present invention are described in detail below with reference to the drawings.
In the description of the present invention, orientation or position relationships indicated by terms such as "center", "longitudinal", "transverse", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positions shown in the drawings. They serve only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be understood as limiting the scope of protection of the present invention.
Where no conflict arises, the technical features of the various embodiments and implementations of the present invention may be combined with one another, and are not limited to the embodiment or implementation in which they appear.
The present invention is further described below with reference to the drawings and specific embodiments. It should be pointed out that the technical solution and design principles of the present invention are elaborated below through only one optimized technical solution, but the scope of protection of the present invention is not limited thereto.
The following terms are used herein; for ease of understanding, their meanings are explained below. Those skilled in the art should understand that these terms may also have other names, and any other name that does not depart from the meaning given here should be considered consistent with the terms listed herein.
An embodiment of the present invention provides a lidar occlusion judgment and cleaning method, as shown in Figure 1, comprising:
Step 10: collecting, via the lidar, each frame of raw point cloud data carrying point cloud coordinates and signal intensity.
Step 20: converting each frame of raw point cloud data into a depth map, comprising: establishing a coordinate system centered on the lidar; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin, and determining the position of the point in the depth map from the declination angles α and β. The parameter information of each point in the depth map includes the declination angle α, the declination angle β, and the signal intensity.
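As a concrete illustration of the Step 20 conversion, the two declination angles can be computed per point roughly as follows. This is a minimal Python sketch; the use of `atan2` and radians is an assumption, since the patent only describes the angles geometrically:

```python
import math

def point_to_angles(x, y, z):
    """Map one lidar return to its two depth-map declination angles.

    alpha: horizontal declination relative to the origin, from the
           point's x and y coordinates in the horizontal plane.
    beta:  vertical declination, from z and the distance of the
           point's horizontal-plane projection from the origin.
    """
    alpha = math.atan2(y, x)        # horizontal declination (radians)
    d_xy = math.hypot(x, y)         # projection distance from origin
    beta = math.atan2(z, d_xy)      # vertical declination (radians)
    return alpha, beta
```

A point at (1, 1, √2), for instance, lies at 45° in both the horizontal and vertical senses under this convention.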
Step 30: when the shadow area in the depth map exceeds a threshold, judging the state abnormal and outputting the maximum and minimum horizontal declination angles of the shadow region.
Step 40: determining a cleaning range from the maximum and minimum horizontal declination angles, and cleaning the lidar surface.
Here, the shadow region is determined as follows:
judging whether any point cloud is missing in the depth map;
when point cloud is missing, taking the missing part as the shadow region, and using the peripheral points of the shadow region to determine its maximum and minimum horizontal declination angles.
The method may further include: judging whether the depth map contains a region whose signal intensity is lower than a preset value, and if so, determining that region to be a shadow region. That is, not only the missing part of the point cloud counts as shadow region; a region whose signal intensity is lower than the preset value also belongs to the shadow region, and the peripheral points of the shadow region are then used to determine its maximum and minimum horizontal declination angles.
The way these peripheral points are selected can be preset as needed. For example, the point nearest to the shadow region that does not belong to it is taken as a peripheral point.
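A minimal sketch of the shadow-region bounds described above, assuming the depth map is stored as 2D NumPy arrays. Taking the bounds from the shadow cells' own angular extent, rather than from an explicit nearest-neighbour "peripheral point" search, is a simplification:

```python
import numpy as np

def shadow_angle_bounds(intensity, alpha_grid, min_intensity=0.05):
    """Horizontal-declination bounds of the shadow region of a depth map.

    intensity  : 2D array of per-cell signal intensity; np.nan marks
                 cells with no return (missing point cloud).
    alpha_grid : 2D array holding each cell's horizontal declination.
    Cells that are missing, or whose intensity falls below
    min_intensity, form the shadow region.
    """
    valid = ~np.isnan(intensity)
    shadow = ~valid                                   # missing returns
    shadow[valid] = intensity[valid] < min_intensity  # low-intensity cells
    if not shadow.any():
        return None                                   # no occlusion detected
    alphas = alpha_grid[shadow]
    return float(alphas.min()), float(alphas.max())
```

The `min_intensity` value is an illustrative assumption standing in for the patent's "preset value".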
The method further includes:
outputting the maximum and minimum vertical declination angles of the shadow region;
wherein determining the cleaning range from the maximum and minimum horizontal declination angles includes: determining the cleaning range from the maximum and minimum horizontal declination angles combined with the maximum and minimum vertical declination angles.
Here, determining the cleaning range includes:
determining a cleaning angle range; or
determining, as the cleaning area, the region of the lidar surface corresponding to the maximum and minimum horizontal and vertical declination angles.
The lidar occlusion judgment and cleaning method provided by the present invention is introduced below through a specific example, which includes: first converting each frame of raw point cloud data, with point cloud coordinates and intensity, collected by the lidar into a depth map; a target classification detection model then evaluates the depth map, and the detection result is either normal or abnormal. In the normal case, no cleaning is needed; in the abnormal case, cleaning is determined to be needed, the location requiring cleaning is output, and the cleaning equipment performs the cleaning.
Here, the target classification detection model is a pre-trained deep learning classification model whose categories comprise the two judgments on the depth map, namely normal and abnormal. In the normal case, no cleaning is needed; in the abnormal case, cleaning is determined to be needed and the location requiring cleaning is output.
As shown in Figure 2, let the rightward direction in the lidar's horizontal plane be the positive x axis, the forward direction the positive y axis, and the vertically upward direction the positive z axis. Comparing the x and y coordinates of a point yields its declination angle α relative to the origin in the horizontal plane; comparing its z coordinate with the distance of its horizontal-plane projection from the origin yields its declination angle β relative to the origin in the vertical direction. Finally, the generated points, each carrying the two declination angles (α, β) and intensity information, are assembled into a depth map according to the two angles; the information held by the depth map consists mainly of each point's two declination angles and its intensity.
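The assembly of points into a depth map keyed by the two angles might be sketched as follows. The cell resolutions and the `(x, y, z, intensity)` tuple layout are illustrative assumptions; the patent only states that each point is placed in the depth map according to its two declination angles:

```python
import math
from collections import defaultdict

def build_depth_map(points, alpha_res=0.2, beta_res=0.2):
    """Assemble points into depth-map cells keyed by quantised angles.

    points: iterable of (x, y, z, intensity) tuples in the frame above
            (x to the right, y forward, z vertically up).
    The 0.2-degree cell resolutions are placeholders.
    """
    depth_map = defaultdict(list)
    for x, y, z, intensity in points:
        alpha = math.degrees(math.atan2(y, x))                # horizontal
        beta = math.degrees(math.atan2(z, math.hypot(x, y)))  # vertical
        cell = (round(alpha / alpha_res), round(beta / beta_res))
        depth_map[cell].append((alpha, beta, intensity))      # point parameters
    return depth_map
```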
In this example, when the shadow area in the depth map caused by occlusion exceeds a certain threshold, the situation is judged abnormal; otherwise it is normal. The threshold is a preset value and can be adjusted flexibly according to actual needs. A small amount of occlusion is not considered to have much impact; the threshold here is mainly set by the specific task.
Here, the shadow area is identified mainly by judging whether point cloud is missing in some region of the depth map. When point cloud is missing, the peripheral points of the shadow region are determined and used to find the region's maximum and minimum horizontal and vertical declination angles, from which the position of the shadow is determined. There is a mathematical correspondence between the position of the shadow and the position on the lidar surface, so the corresponding surface position can be determined from the shadow position; this information is output to the cleaning control module, and the corresponding nozzle goes to work.
When determining the position of the shadow, it is also possible to determine only the range of horizontal declination angles of the shadow and have the cleaning control module drive the nozzle to clean from top to bottom within that angle range, in which case the vertical declination range need not be determined.
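The nozzle decision for this horizontal-only case can be sketched as follows. The function name and the sector layout are hypothetical, since the nozzle-to-surface correspondence is device-specific:

```python
def select_nozzles(alpha_min, alpha_max, nozzle_sectors):
    """Choose the nozzles whose horizontal sectors overlap the shadow.

    nozzle_sectors: {nozzle_id: (sector_min_deg, sector_max_deg)},
    the horizontal coverage of each nozzle on the lidar surface.
    Each selected nozzle then cleans its sector from top to bottom,
    so no vertical declination range is needed.
    """
    return [nid for nid, (lo, hi) in sorted(nozzle_sectors.items())
            if hi >= alpha_min and lo <= alpha_max]  # sector overlaps shadow
```

With a hypothetical three-nozzle layout covering −180° to 180°, a shadow spanning −10° to 70° would activate the two nozzles whose sectors it touches.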
During pre-training, the deep learning classification model in this example can be trained on manually labeled data sets until the detection results obtained with the model meet preset conditions, for example, the error falls within a preset range, or the model converges within a range that meets the needs of the actual application.
The method provided by the present invention is particularly suitable for opaque obstructions. Long-term practical observation shows that, in real scenes, the main element interfering with lidar performance is mud-like occlusion, which manifests in the depth map as a corresponding shadow area; whether the lidar is normal is therefore judged from the final depth map.
The present invention does not limit the specific target classification detection and judgment method; any method that achieves the detection effect of the present invention can be applied, and no restriction is placed on this herein.
In this example, after receiving the location that needs cleaning, the cleaning equipment controls the nozzle, etc. to perform the cleaning:
The cleaning equipment decides, from the shadow position in the depth map, which nozzle needs to work and for how long. In this example, the cleaning equipment mainly comprises two parts: receiving the occlusion information to make cleaning decisions, and outputting cleaning instructions to drive specific nozzles.
When the target classification detection model judges that the current lidar is unoccluded, the output is normal, the cleaning equipment closes the nozzles, and the lidar works normally.
When the target classification detection model judges that the current lidar is occluded, the cleaning equipment decides which nozzles to open and close based on the occlusion position information sent by the model. This decision rests mainly on the correspondence between the nozzles and the lidar surface. After receiving the abnormal command, the cleaning equipment controls the corresponding nozzles according to the determined occlusion position, so that the unoccluded area of the lidar can continue to work normally, which also saves energy.
The output instructions of the cleaning equipment mainly contain information such as the number of the working nozzle, its open/close state, and its working time.
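The content of such an output instruction might be modeled as a small record. The field names are illustrative assumptions; the patent only lists the nozzle number, its open/close state, and the working time:

```python
from dataclasses import dataclass

@dataclass
class CleaningCommand:
    """One output instruction of the cleaning equipment."""
    nozzle_id: int      # number of the working nozzle
    open: bool          # open (True) or close (False) the nozzle
    duration_s: float   # working time, in seconds (unit assumed)
```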
In this example, the specific working duration of a nozzle, the nozzle angle, and the number of nozzles can all be adjusted flexibly according to actual needs, and are not discussed further here.
Figure 3 shows a schematic structural diagram of the cleaning equipment. As shown in Figure 3, the cleaning equipment may include: a fixed bracket 31, connection holes 32, a nozzle 33, a spray pipe 34, and a lidar 35. The fixed bracket 31 mainly plays a connecting role, joining the vehicle body, the lidar 35, the nozzle 33, etc.; the connection holes 32 mainly fix the bracket 31 to the vehicle body; the spray pipe 34 connects the nozzle 33 to the cleaning fluid storage tank; the nozzle 33 is fixed by the fixed bracket 31 at a certain angle, which in turn fixes the spray pipe 34. The number of nozzles can be set flexibly according to actual needs, and adjusting a nozzle's angle adjusts the position on the lidar that it cleans. The flushing surface is the surface position of the lidar 35 that the nozzle 33 cleans; adjusting the angle and position at which the bracket fixes the nozzle selects among different flushing surfaces. The lidar 35 is fixed on the base of the fixed bracket 31.
Figure 3 is only an example of the cleaning equipment; it is easy to understand that the cleaning equipment may also adopt other structures, and no restriction is placed on this herein.
An embodiment of the present invention further provides a lidar occlusion judgment and cleaning device, as shown in Figure 4, comprising:
a lidar 41, configured to collect each frame of raw point cloud data carrying point cloud coordinates and signal intensity;
a processor 42, configured to convert each frame of raw point cloud data into a depth map, comprising: establishing a coordinate system centered on the lidar; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin, and determining the position of the point in the depth map from the declination angles α and β; the parameter information of each point in the depth map including the declination angle α, the declination angle β, and the signal intensity; and, when the shadow area in the depth map exceeds a threshold, judging the state abnormal and outputting the maximum and minimum horizontal declination angles of the shadow region;
cleaning equipment 43, configured to determine a cleaning range from the maximum and minimum horizontal declination angles and to clean the lidar surface.
Here, the processor 42 determines the shadow region as follows:
judging whether any point cloud is missing in the depth map;
when point cloud is missing, taking the missing part as the shadow region, and using the peripheral points of the shadow region to determine its maximum and minimum horizontal declination angles.
The processor 42 may further be configured to: judge whether the depth map contains a region whose signal intensity is lower than a preset value, and if so, determine that region to be a shadow region.
The processor 42 may further be configured to:
output the maximum and minimum vertical declination angles of the shadow region;
determine the cleaning range from the maximum and minimum horizontal declination angles combined with the maximum and minimum vertical declination angles.
The processor 42 may further be configured to:
determine a cleaning angle range; or
determine, as the cleaning area, the region of the lidar surface corresponding to the maximum and minimum horizontal and vertical declination angles.
In the present invention, by converting each frame of raw point cloud data collected by the lidar into a depth map, using the shadow portion of the depth map to determine where the lidar surface needs cleaning, and controlling the corresponding nozzle to clean it, the radar can be cleaned in a targeted manner, ensuring normal lidar operation and timely, efficient removal of obstructions.
Finally, it should be pointed out that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (4)

  1. A lidar occlusion judgment and cleaning method, characterized by comprising:
    collecting, via the lidar, each frame of raw point cloud data carrying point cloud coordinates and signal intensity;
    converting each frame of raw point cloud data into a depth map, comprising: establishing a coordinate system centered on the lidar; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin, and determining the position of the point in the depth map from the declination angles α and β; the parameter information of each point in the depth map comprising: the declination angle α, the declination angle β, and the signal intensity;
    when the shadow area in the depth map exceeds a threshold, judging the state abnormal and outputting the maximum and minimum horizontal declination angles of the shadow region;
    determining a cleaning range from the maximum and minimum horizontal declination angles, and cleaning the lidar surface;
    wherein the shadow region is determined as follows:
    judging whether any point cloud is missing in the depth map;
    when point cloud is missing, taking the missing part as the shadow region, and using the peripheral points of the shadow region to determine its maximum and minimum horizontal declination angles;
    the lidar occlusion judgment and cleaning method further comprising:
    judging whether the depth map contains a region whose signal intensity is lower than a preset value, and if so, determining that region to also be a shadow region;
    outputting the maximum and minimum vertical declination angles of the shadow region, wherein determining the cleaning range from the maximum and minimum horizontal declination angles comprises: determining the cleaning range from the maximum and minimum horizontal declination angles combined with the maximum and minimum vertical declination angles.
  2. The method according to claim 1, characterized in that determining the cleaning range comprises:
    determining a cleaning angle range; or
    determining, as the cleaning area, the region of the lidar surface corresponding to the maximum and minimum horizontal and vertical declination angles.
  3. A lidar occlusion judgment and cleaning device, characterized by comprising:
    a lidar, configured to collect each frame of raw point cloud data carrying point cloud coordinates and signal intensity;
    a processor, configured to convert each frame of raw point cloud data into a depth map, comprising: establishing a coordinate system centered on the lidar; for each point in each frame of raw point cloud data, obtaining the declination angle α of the point relative to the origin in the horizontal plane from its x and y coordinates in the horizontal plane, obtaining the declination angle β of the point relative to the origin in the vertical direction from its z coordinate in the vertical plane and the distance of its horizontal-plane projection from the origin, and determining the position of the point in the depth map from the declination angles α and β; the parameter information of each point in the depth map comprising: the declination angle α, the declination angle β, and the signal intensity; and, when the shadow area in the depth map exceeds a threshold, judging the state abnormal and outputting the maximum and minimum horizontal declination angles of the shadow region;
    cleaning equipment, configured to determine a cleaning range from the maximum and minimum horizontal declination angles and to clean the lidar surface;
    wherein the processor determines the shadow region as follows:
    judging whether any point cloud is missing in the depth map;
    when point cloud is missing, taking the missing part as the shadow region, and using the peripheral points of the shadow region to determine its maximum and minimum horizontal declination angles;
    wherein the processor is further configured to:
    judge whether the depth map contains a region whose signal intensity is lower than a preset value, and if so, determine that region to also be a shadow region;
    wherein the processor is further configured to:
    output the maximum and minimum vertical declination angles of the shadow region;
    determine the cleaning range from the maximum and minimum horizontal declination angles combined with the maximum and minimum vertical declination angles.
  4. The device according to claim 3, characterized in that the processor is further configured to:
    determine a cleaning angle range; or
    determine, as the cleaning area, the region of the lidar surface corresponding to the maximum and minimum horizontal and vertical declination angles.
PCT/CN2023/099181 2022-08-17 2023-06-08 Lidar occlusion judgment and cleaning method and device WO2024037144A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210984537.0 2022-08-17
CN202210984537.0A CN115359121B (zh) 2022-08-17 2022-08-17 Lidar occlusion judgment and cleaning method and device

Publications (1)

Publication Number Publication Date
WO2024037144A1 true WO2024037144A1 (zh) 2024-02-22

Family

ID=84002771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/099181 WO2024037144A1 (zh) 2022-08-17 2023-06-08 Lidar occlusion judgment and cleaning method and device

Country Status (2)

Country Link
CN (1) CN115359121B (zh)
WO (1) WO2024037144A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359121B (zh) * 2022-08-17 2023-05-12 湖南大学无锡智能控制研究院 Lidar occlusion judgment and cleaning method and device
CN116047540B (zh) * 2023-02-07 2024-03-22 湖南大学无锡智能控制研究院 Method and device for lidar self-occlusion judgment based on point cloud intensity information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040257556A1 (en) * 2003-06-20 2004-12-23 Denso Corporation Object recognition apparatus designed to detect adhesion of dirt to radar
CN111429400A (zh) * 2020-02-21 2020-07-17 深圳市镭神智能系统有限公司 Method, device, system and medium for detecting dirt on a lidar window
CN112824926A (zh) * 2019-11-20 2021-05-21 上海为彪汽配制造有限公司 Unmanned aerial vehicle radar cleaning method and unmanned aerial vehicle
CN114488073A (zh) * 2022-02-14 2022-05-13 中国第一汽车股份有限公司 Method for processing point cloud data collected by lidar
CN115359121A (zh) * 2022-08-17 2022-11-18 湖南大学无锡智能控制研究院 Lidar occlusion judgment and cleaning method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10803324B1 (en) * 2017-01-03 2020-10-13 Waylens, Inc. Adaptive, self-evolving learning and testing platform for self-driving and real-time map construction
US10970815B2 (en) * 2018-07-10 2021-04-06 Raytheon Company Multi-source image fusion
CN110586578B (zh) * 2019-10-22 2021-01-29 北京易控智驾科技有限公司 Integrated lidar cleaning and protection device, cleaning control method and system
CN112132929B (zh) * 2020-09-01 2024-01-26 北京布科思科技有限公司 Grid map marking method based on depth vision and single-line lidar
CN114324382A (zh) * 2020-09-30 2022-04-12 北京小米移动软件有限公司 Panel terminal cleanliness detection method and panel terminal cleanliness detection device
CN113246916A (zh) * 2021-07-02 2021-08-13 上汽通用汽车有限公司 Vehicle washing control method, device, system and storage medium
CN114632755A (zh) * 2022-04-08 2022-06-17 阿波罗智能技术(北京)有限公司 Radar cleaning device, radar system and control method for the radar cleaning device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040257556A1 (en) * 2003-06-20 2004-12-23 Denso Corporation Object recognition apparatus designed to detect adhesion of dirt to radar
CN112824926A (zh) * 2019-11-20 2021-05-21 上海为彪汽配制造有限公司 Unmanned aerial vehicle radar cleaning method and unmanned aerial vehicle
CN111429400A (zh) * 2020-02-21 2020-07-17 深圳市镭神智能系统有限公司 Method, device, system and medium for detecting dirt on a lidar window
CN114488073A (zh) * 2022-02-14 2022-05-13 中国第一汽车股份有限公司 Method for processing point cloud data collected by lidar
CN115359121A (zh) * 2022-08-17 2022-11-18 湖南大学无锡智能控制研究院 Lidar occlusion judgment and cleaning method and device

Also Published As

Publication number Publication date
CN115359121B (zh) 2023-05-12
CN115359121A (zh) 2022-11-18

Similar Documents

Publication Publication Date Title
WO2024037144A1 (zh) 2024-02-22 Lidar occlusion judgment and cleaning method and device
Zhao et al. SLAM in a dynamic large outdoor environment using a laser scanner
CN104933409B (zh) 一种基于全景图像点线特征的车位识别方法
CN105404844A (zh) 一种基于多线激光雷达的道路边界检测方法
CN110427873B (zh) 一种清洗车、应用于环卫领域的智能行人防喷溅系统及方法
CN110403528A (zh) 一种基于清洁机器人提高清洁覆盖率的方法和系统
CN103533231A (zh) 车载摄像机除污装置的诊断装置、诊断方法和车辆系统
CN111622296A (zh) 挖掘机安全避障系统和方法
Wei et al. Multi-sensor fusion glass detection for robot navigation and mapping
CN113741435A (zh) 障碍物规避方法、装置、决策器、存储介质、芯片及机器人
KR20210148109A (ko) 에고 부분 제외를 포함하는 충돌 검출 훈련 세트를 생성하기 위한 방법
CN108427119A (zh) 基于超声波传感器实现低矮障碍物跟踪检测的方法
CN115465293A (zh) 一种多传感器安全自认知及安全处理装置及方法
US20220259914A1 (en) Obstacle detection device, and method
JP5021913B2 (ja) 海上における対象物の捜索方法及びシステム並びに対象物の捜索方法を実行する記録媒体
CN109799815A (zh) 一种自动探索路径作业方法
CN113589829A (zh) 一种移动机器人的多传感器区域避障方法
CN116047540B (zh) 基于点云强度信息的激光雷达自遮挡判断的方法及装置
Nishida et al. Development of intelligent automatic door system
TWI689392B (zh) 機械臂控制方法及機械臂
JP4008179B2 (ja) 水上浮遊物の探知・追尾方法及びその装置
CN116919247A (zh) 贴边识别方法、装置、计算机设备及介质
CN113721615B (zh) 一种基于机器视觉的海航路径规划方法及系统
CN111646349B (zh) 一种基于tof图像的电梯保护方法及装置
CN114545925A (zh) 一种复合机器人控制方法及复合机器人

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23854050

Country of ref document: EP

Kind code of ref document: A1