CN113034419B - Method and device for objective quality evaluation of radar point cloud for machine vision tasks - Google Patents

Method and device for objective quality evaluation of radar point cloud for machine vision tasks

Info

Publication number
CN113034419B
CN113034419B (application CN201911233989.XA)
Authority
CN
China
Prior art keywords
point cloud
quality evaluation
calculating
noise
machine vision
Prior art date
Legal status
Active
Application number
CN201911233989.XA
Other languages
Chinese (zh)
Other versions
CN113034419A (en)
Inventor
徐异凌
赵恒
杨琦
管云峰
Current Assignee
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiao Tong University
Priority to CN201911233989.XA
Publication of CN113034419A
Application granted
Publication of CN113034419B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a method and device for objective quality evaluation of radar point clouds oriented to machine vision tasks, comprising: a spherical domain division step, in which a first point cloud and a second point cloud are each divided into a plurality of adjacent spherical domains, the radius of the domains being related to the longest edge a of the vision task target in the point cloud data; a point set direction vector calculation step, in which the eigenvalues and eigenvectors of the corresponding point set are computed for each spherical domain and combined into a vector giving the direction of the point set, and the vector difference between the first point cloud and the second point cloud in the same spherical domain is computed; and a model score acquisition step, in which the number of points in each spherical domain is used as the weight and the vector differences computed for the spherical neighborhoods are summed with these weights to obtain the model score. The invention can estimate well the quality of service of specific tasks based on radar point clouds (such as point cloud classification, segmentation and recognition) and has good robustness.

Description

Method and device for objective quality evaluation of radar point clouds for machine vision tasks

Technical Field

The present invention relates to the fields of radar point cloud machine vision tasks and objective point cloud quality evaluation, and in particular to a method and device for objective quality evaluation of radar point clouds oriented to machine vision tasks.

Background

In recent decades, lidar scanning technology and systems have matured and found increasingly wide application. Lidar produces three-dimensional data, also known as radar point clouds, which reflect the basic structural information of a target. Research on specific vision tasks based on 3D laser point cloud data, such as target segmentation, target recognition, target classification and target registration, has produced results. For example, patent document CN 108981616A discloses a method for inverting the effective leaf area index of a plantation from an empirical UAV lidar model, applied to forest resource surveys, forest site quality evaluation and forest productivity estimation.

Unmanned driving based on laser point clouds is a current research focus and has already been applied in real driving. However, the way vehicle-mounted LiDAR (Light Detection and Ranging) acquires data causes shadows, occlusions and similar defects in the point cloud, which prevent specific vision tasks from achieving the desired results: the recognition rate of a point cloud segmentation algorithm drops and its robustness deteriorates, wasting computing resources. A data evaluation algorithm is needed to avoid this waste.

LiDAR data are sparse and non-uniform (dense nearby, sparse in the distance), and the target of a specific task usually occupies only a small fraction of the scene. The point-to-point and point-to-plane distortion metrics used under existing common test conditions therefore cannot represent specific vision tasks on LiDAR data well. In a radar point cloud segmentation task with added noise, for example, distant points that are of no interest produce large errors because they are sparse, while some nearby points of interest produce small errors. Moreover, because the target is small, when the noise is non-uniform the computed RMSE has no direct relation to the task result, so these classical distortion evaluation models are not robust on radar point cloud data. In addition, because LiDAR data are sparse, local properties such as normal vectors and curvature are difficult to obtain accurately and exploit.

Summary of the Invention

In view of the defects in the prior art, the object of the present invention is to provide a method and device for objective quality evaluation of radar point clouds oriented to machine vision tasks.

In view of the defects of the prior art (two classical algorithms), the object of the present invention is to provide a radar point cloud evaluation method oriented to the quality of service of specific vision tasks. With this model, the quality of service of specific tasks based on radar point clouds (such as point cloud classification, segmentation and recognition) can be estimated well and with good robustness.

Specifically, the above model yields the linear relationship between the index of a specific vision task and the model score, from which the quality of service of the LiDAR point cloud data can be evaluated; the resulting model score can thus estimate the quality of service of the vision task.

To achieve the above object, the present invention adopts the following technical solution:

This document provides an evaluation model based on quality of service, i.e. the model scores the data in order to estimate the accuracy the data can reach on a specific task, an approach that has not been proposed before.

The specific idea of the model is as follows.

1. Divide the reference point cloud a and the point cloud b into several adjacent spherical domains. The radius of each spherical domain is a multiple of the size of the vision task target; specifically, the radius is half the length of the longest edge of the target multiplied by a factor not exceeding 2.

2. For each spherical domain, compute the eigenvalues and eigenvectors of its point set and combine them in a given way into a vector representing the direction of the point set in that domain; then compute the vector difference between point cloud a and point cloud b in the same spherical domain using a vector distance.

3. Using a chosen attribute of each spherical neighborhood as the weight, compute a weighted sum of the vector differences of all spherical neighborhoods to obtain the model score.

4. Verify the validity of the score output by the evaluation model: feed the reference point cloud a and point cloud b into a specific vision task and compute the corresponding accuracy index. The relative model score should then have a certain linear relationship with the index; if the linearity is good, the model can be used to estimate the quality of service corresponding to the data.

Preferably, the specific vision task includes any of the following:

vision tasks such as point cloud classification, segmentation, recognition and tracking.

Preferably, the multiplier related to the spherical domain includes any of the following:

1.2, 1.5, 1.7 or another number not greater than 2.

Preferably, the noise types used by the model verification method include any of the following:

Gaussian noise, random noise, regional noise and other noise types.

Preferably, the way the model combines the eigenvectors includes any of the following:

multiplying each eigenvalue by its eigenvector and accumulating, or selecting one or two eigenvalues and accumulating their products with the corresponding eigenvectors.

Preferably, the way the model measures the vector difference includes any of the following:

the L1 norm, the L2 norm, a Gaussian kernel function and similar methods.

Preferably, the vision task accuracy index used by the model verification method includes any of the following:

IoU (intersection over union), AP (average precision), recall and similar indices.

The present invention also provides a radar point cloud objective quality evaluation system oriented to machine vision tasks, comprising: a noise-adding module, a spherical domain segmentation module, a module for computing the vector representation of each spherical domain point set, a module for computing vector differences, and a module for computing the quality evaluation score.

Input point cloud data module: inputs the original point cloud data;

Noise-adding module: applies noise to the original point cloud data to obtain the noise-added point cloud data;

Spherical domain segmentation module: divides the original point cloud and the noise-added point cloud into spherical domains respectively;

Module for computing the direction vector of each spherical domain point set: computes the eigenvalues and eigenvectors of the spherical domain point set and, from them, the vector representation;

Module for computing vector differences: computes the vector difference between the original point cloud and the noise-added point cloud in the same spherical domain;

Module for computing the quality evaluation score: computes the quality evaluation score using the attributes of the spherical neighborhoods as weights.

Preferably, the system further comprises:

Model score verification module: feeds the first point cloud and the second point cloud into a deep-learning-based point cloud segmentation task and computes the size of the corresponding accuracy index IoU.

Compared with the prior art, the present invention has the following beneficial effects:

1. Enhanced robustness. The present invention uses a weight-based summation, which initially solves the problem that radar data are dense nearby and sparse in the distance. In practical applications, vehicle-mounted LiDAR pays more attention to near objects and the target of a specific vision task is usually nearby; since the weighted sum of the present invention is built around the task target, robustness is enhanced.

2. Reduced complexity. Because the amount of point cloud data is very large, point-to-point and point-to-plane approaches are highly complex. The present invention computes over adjacent spherical neighborhoods, which greatly reduces the number of operations.

3. Based on structural similarity. Point-to-point and point-to-plane evaluation ignores structural similarity, so its evaluation of point cloud targets in vision tasks (segmentation, recognition, registration, etc.) is not very accurate. The present invention takes the sparsity of radar point clouds into account and adopts a method based on the direction of the point set within a spherical neighborhood, which fully considers the structural similarity of sparse point clouds, so the result correlates well with the index of the vision task.

Brief Description of the Drawings

Other features, objects and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments with reference to the following drawings:

Figure 1 is a flowchart of a vision-task-based radar point cloud data quality evaluation method in an embodiment of the present invention;

Figure 2 shows a usage scenario of the model in an embodiment of the present invention;

Figure 3 is a functional block diagram of a vision-task-oriented radar point cloud objective quality evaluation system in an embodiment of the present invention.

Detailed Description

The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be noted that those of ordinary skill in the art can make several changes and improvements without departing from the concept of the invention, all of which fall within its scope of protection.

The parameter settings and method choices of the specific implementation are introduced first. The specific implementation process is as follows.

As shown in Figure 1, a vision-task-based radar point cloud data quality evaluation method provided by the present invention includes the following steps:

1. The reference point cloud a and the noise-added point cloud b are each divided into several adjacent spherical domains. The longest edge of the vision task target observed in the point cloud data is x; to cover the target as completely as possible without harming the robustness of the model, the radius of each spherical domain is set to 1.5*(x/2), which also determines the number of spherical neighborhoods.

2. For each spherical domain, the eigenvalues and eigenvectors of its point set are computed. With eigenvalues a1, a2, a3 and eigenvectors α1, α2, α3, they are combined into the direction L of the point set according to the following formula:

L = a1*α1 + a2*α2 + a3*α3

With this combination, a vector representing the direction of the point set in that spherical domain is obtained, and the L1 norm is used to compute the vector difference between point cloud a and point cloud b in the same spherical domain, which reflects differences in both magnitude and direction.

3. Using the number of points in each spherical neighborhood as the weight, the vector differences computed in the spherical neighborhoods are summed with these weights (i.e. density weighting) to obtain the model score. A code sketch of steps 1 to 3 is given after this list.

4. To verify the validity of the model and of its score, the reference point cloud a and point cloud b are fed into a deep-learning-based point cloud segmentation task, graded noise is added to the validation set, the corresponding accuracy index IoU (intersection over union) is computed, and the evaluation model provided by the present invention is computed at the same time. The relative model score then has a certain linear relationship with the corresponding index. The linearity can be measured with the linear correlation indices PLCC, KROCC and SROCC. If the linearity is good, the model can be used to estimate the quality of service corresponding to the data.
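
The following Python/NumPy sketch illustrates steps 1 to 3 under explicit assumptions: sphere centres are seeded on a regular grid over the reference cloud (the text only requires adjacent spherical domains), eigenvector signs are fixed by a simple convention because the eigendecomposition leaves them ambiguous, spheres containing fewer than three points are skipped, and the final score is normalised by the total weight. All names are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def direction_vector(points):
    """Eigen-decompose the covariance of an (N, 3) point set and return
    L = a1*alpha1 + a2*alpha2 + a3*alpha3 (eigenvalue-weighted eigenvectors)."""
    cov = np.cov(points, rowvar=False)                 # 3x3 covariance matrix
    vals, vecs = np.linalg.eigh(cov)                   # ascending eigenvalues, columns = eigenvectors
    for k in range(3):                                 # sign convention: largest component positive
        if vecs[np.argmax(np.abs(vecs[:, k])), k] < 0:
            vecs[:, k] = -vecs[:, k]
    return (vecs * vals).sum(axis=1)                   # sum_i a_i * alpha_i

def sphere_domain_score(ref, noisy, target_longest_edge, multiplier=1.5):
    """Density-weighted L1 difference of per-sphere direction vectors.
    A higher score indicates a larger structural distortion."""
    r = multiplier * target_longest_edge / 2.0         # radius = 1.5 * (x / 2) by default
    # grid-based approximation of adjacent spheres; points near cell corners
    # may fall outside every sphere and are ignored in this sketch
    mins = ref.min(axis=0)
    cells = np.unique(np.floor((ref - mins) / (2 * r)), axis=0)
    centres = cells * 2 * r + mins + r
    tree_ref, tree_noisy = cKDTree(ref), cKDTree(noisy)
    num = den = 0.0
    for c in centres:
        idx_a = tree_ref.query_ball_point(c, r)
        idx_b = tree_noisy.query_ball_point(c, r)
        if len(idx_a) < 3 or len(idx_b) < 3:           # skip nearly empty spheres
            continue
        diff = np.abs(direction_vector(ref[idx_a])
                      - direction_vector(noisy[idx_b])).sum()   # L1 norm of the difference
        w = len(idx_a)                                 # point count as weight (density weighting)
        num += w * diff
        den += w
    return num / den if den > 0 else 0.0
```

A call such as `sphere_domain_score(cloud_a, cloud_b, target_longest_edge=4.0)` returns the score for one frame pair; dropping the normalisation by `den` gives the plain weighted sum described in step 3.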

Preferably, the specific vision task includes any of the following:

vision tasks such as point cloud classification, segmentation, recognition and tracking.

Preferably, the type of noise applied to point cloud b in the model includes any of the following:

Gaussian noise, random noise, regional noise and other noise types.
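
As an illustration of how a noise-added point cloud b might be generated for verification, the sketch below applies two of the listed noise types to an (N, 3) NumPy array; the noise magnitudes and the regional-noise parameters are hypothetical, not values from the text.

```python
import numpy as np

def add_gaussian_noise(points, sigma=0.02, rng=None):
    """Gaussian noise: perturb every coordinate with zero-mean noise of std sigma."""
    rng = np.random.default_rng() if rng is None else rng
    return points + rng.normal(0.0, sigma, size=points.shape)

def add_regional_noise(points, centre, radius, sigma=0.05, rng=None):
    """Regional noise: perturb only the points inside a sphere around `centre`."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = points.copy()
    mask = np.linalg.norm(points - centre, axis=1) < radius
    noisy[mask] += rng.normal(0.0, sigma, size=(int(mask.sum()), 3))
    return noisy
```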

Preferably, the multiplier related to the spherical domain includes any of the following:

1.2, 1.5, 1.7 or another number not greater than 2.

Preferably, the way the model combines the eigenvectors includes any of the following:

multiplying each eigenvalue by its eigenvector and accumulating, or selecting one or two eigenvalues and accumulating their products with the corresponding eigenvectors.

Preferably, the way the model measures the vector difference includes any of the following:

the L1 norm, the L2 norm, a radial basis function and similar methods.

Preferably, the vision task accuracy index used by the model verification method includes any of the following:

IoU (intersection over union), AP (average precision), recall and similar indices.
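
For reference, a minimal mean-IoU computation for point cloud segmentation labels could look like the sketch below; the per-class averaging convention is an assumption and is not specified in the text.

```python
import numpy as np

def mean_iou(pred_labels, gt_labels, num_classes):
    """Mean intersection-over-union over the classes present in the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred_labels == c, gt_labels == c).sum()
        union = np.logical_or(pred_labels == c, gt_labels == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```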

The present invention also provides a radar point cloud objective quality evaluation system oriented to machine vision tasks, comprising: a noise-adding module, a spherical domain segmentation module, a module for computing the vector representation of each spherical domain point set, a module for computing vector differences, and a module for computing the quality evaluation score.

Input point cloud data module: inputs the original point cloud data;

Noise-adding module: applies noise to the original point cloud data to obtain the noise-added point cloud data;

Spherical domain segmentation module: divides the original point cloud and the noise-added point cloud into spherical domains respectively;

Module for computing the direction vector of each spherical domain point set: computes the eigenvalues and eigenvectors of the spherical domain point set and, from them, the vector representation;

Module for computing vector differences: computes the vector difference between the original point cloud and the noise-added point cloud in the same spherical domain;

Module for computing the quality evaluation score: computes the quality evaluation score using the attributes of the spherical neighborhoods as weights.

Preferably, the system further comprises:

Model score verification module: feeds the first point cloud and the second point cloud into a deep-learning-based point cloud segmentation task and computes the size of the corresponding accuracy index IoU.
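
A sketch of how the verification described by this module might be completed in practice: correlate the model scores with the IoU values obtained from the segmentation task using the PLCC, SROCC and KROCC indices mentioned in the embodiment. scipy.stats provides these correlations; the function below is illustrative and not part of the patent.

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

def linearity_check(model_scores, iou_values):
    """Correlation between quality scores and the task accuracy index (e.g. IoU)."""
    plcc, _ = pearsonr(model_scores, iou_values)
    srocc, _ = spearmanr(model_scores, iou_values)
    krocc, _ = kendalltau(model_scores, iou_values)
    return {"PLCC": plcc, "SROCC": srocc, "KROCC": krocc}
```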

Based on the above description, a specific application example is given below:

Take the lidar target detection task in autonomous driving as an example. The point cloud data collected by the lidar are very large; massive point cloud data hinder subsequent transmission and storage by the computer and create obstacles for later work. If the result of this target detection task can be estimated with the model before detection is run, data rated more highly by the model can be selected first, increasing task accuracy.

Figure 2 illustrates an application scenario of this model-based evaluation algorithm, briefly described as follows: in unmanned driving, when the vehicle-mounted LiDAR obtains a set of radar data for a specific task, the model algorithm (Figure 1) can produce a score estimate for the data, from which it can be judged whether the data are feasible for the vision task. The vehicle vision task can then decide whether to use the data, which greatly improves frame utilization and saves computing resources.

Specifically:

1. The spherical neighborhoods are computed from the 3D reference point cloud a.

2. Based on the same spherical neighborhoods, the point set direction features are computed for point cloud a and the noise-added point cloud b.

3. The direction feature difference of the point sets is obtained with a vector difference measure; common choices are the L1 norm, the L2 norm and so on.

4. The score F is obtained by weighting over the spherical neighborhoods: the larger the error, the larger the score.

5. The influence of point cloud b on the vision task can be estimated from the size of F. If F > Q (a threshold), the data are considered unable to complete the vision task normally and are discarded; if F < Q, the data are used for the specific vision task. A minimal sketch of this gating step follows this list.
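
A minimal sketch of the gating step above; the threshold Q is application-specific and the value shown is purely illustrative.

```python
def frame_is_usable(score_f, threshold_q=0.5):
    """Discard the frame when the distortion score F exceeds the threshold Q."""
    return score_f < threshold_q

# Example usage with the sphere_domain_score sketch given earlier:
# f = sphere_domain_score(ref_cloud, captured_cloud, target_longest_edge=4.0)
# if frame_is_usable(f):
#     run_detection(captured_cloud)   # run_detection is a hypothetical task entry point
```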

As shown in Figure 3, this embodiment also provides a device for objective quality evaluation of radar point clouds for machine vision tasks, comprising: a noise-adding module, a spherical domain segmentation module, a module for computing the vector representation of each spherical domain point set, a module for computing vector differences, and a module for computing the quality evaluation score.

Input point cloud data module: inputs the original point cloud data;

Noise-adding module: applies noise to the original point cloud data to obtain the noise-added point cloud data;

Spherical domain segmentation module: divides the original point cloud and the noise-added point cloud into spherical domains respectively;

Module for computing the direction vector of each spherical domain point set: computes the eigenvalues and eigenvectors of the spherical domain point set and, from them, the vector representation;

Module for computing vector differences: computes the vector difference between the original point cloud and the noise-added point cloud in the same spherical domain;

Module for computing the quality evaluation score: computes the quality evaluation score using the attributes of the spherical neighborhoods as weights.

In summary, the method and system for objective quality evaluation of radar point clouds for machine vision tasks of the present invention can, for a specific vision task (such as point cloud classification, segmentation or recognition), use the score of the point cloud under this model to evaluate the validity of the input data for the task, thereby improving resource utilization and the accuracy of the vision task.

Each functional module of the vision-task-based device for objective quality evaluation of radar point cloud data provided in this embodiment corresponds to a step of the vision-task-based method for objective quality evaluation of radar point cloud data in the above embodiment, so the structure and technical elements of the device can be derived from the method accordingly; the description is not repeated here.

Although the present invention has been disclosed above with preferred embodiments, they are not intended to limit it. Without departing from the spirit and scope of the present invention, any person skilled in the art may use the methods and technical content disclosed above to make possible changes and modifications to the technical solution of the present invention. Therefore, any simple modification, equivalent change or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of its technical solution, falls within the scope of protection of the technical solution of the present invention.

Those skilled in the art know that, in addition to implementing the system provided by the present invention and its devices, modules and units purely as computer-readable program code, the method steps can be logically programmed so that the system and its devices, modules and units realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system provided by the present invention and its devices, modules and units can be regarded as hardware components, and the devices, modules and units included in them for realizing various functions can be regarded as structures within a hardware component; the devices, modules and units for realizing various functions can also be regarded both as software modules implementing the method and as structures within a hardware component.

Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to these specific embodiments; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the essence of the present invention. Where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily.

Claims (14)

1. A machine-vision-task-oriented objective quality evaluation method for sparse point clouds, characterized by comprising the following steps:
inputting original point cloud data and noise-added point cloud data obtained by applying noise to the original point cloud;
dividing the original point cloud and the noise-added point cloud into spherical domains respectively, wherein the radius of each divided spherical domain is a multiple of the size of the vision task target;
calculating eigenvalues and eigenvectors of each spherical domain point set;
calculating a direction vector of the point set in each spherical domain from the eigenvalues and eigenvectors;
calculating the vector difference between the original point cloud and the noise-added point cloud in the same spherical domain;
taking an attribute of each spherical neighborhood as a weight and computing a weighted sum of the vector differences calculated for the spherical neighborhoods to obtain a quality evaluation score,
wherein the direction vector of the point set in a spherical domain is calculated either by multiplying the largest eigenvalue by the corresponding eigenvector or by accumulating the products of the eigenvalues and their corresponding eigenvectors;
and the attribute of each spherical neighborhood is the number of points in that neighborhood.
2. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 1, wherein
the sparse point cloud is a radar point cloud.
3. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 1 or claim 2, further comprising:
feeding the original point cloud and the noise-added point cloud into a vision task and calculating the corresponding accuracy index; if the quality evaluation score and the accuracy index have a linear relationship, the quality of service corresponding to the data is estimated with the sparse point cloud objective quality evaluation method; otherwise it is not estimated.
4. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 1, wherein
the multiple of the vision task target size used as the radius of the divided spherical domains is not more than 2 times half the length of the longest edge of the vision task target.
5. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 1 or claim 2, wherein
the vector difference between the original point cloud and the noise-added point cloud in the same spherical domain is calculated with the L1 norm, the L2 norm or a radial basis function.
6. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 1 or claim 2, wherein the vision task comprises any one of the following:
point cloud classification, segmentation, recognition and tracking vision tasks.
7. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 1 or claim 2, wherein the type of noise used in the noise processing comprises any one of:
Gaussian noise, random noise, regional noise.
8. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 3, wherein the accuracy index comprises any one of the following:
intersection over union IoU, average precision AP, recall.
9. The machine-vision-task-oriented objective quality evaluation method for sparse point clouds according to claim 1 or claim 2, comprising:
inputting sparse point cloud data;
calculating a quality evaluation score with the method for calculating the quality evaluation score;
comparing the quality evaluation score with a preset threshold: if the quality evaluation score is smaller than the threshold, the data are valid and can continue to be used; otherwise, the data cannot be used and are discarded.
10. A machine-vision-task-oriented objective quality evaluation device for sparse point clouds, characterized by comprising:
an input point cloud data module: inputting original point cloud data;
a noise-adding module: applying noise to the original point cloud data to obtain noise-added point cloud data;
a spherical domain segmentation module: dividing the original point cloud and the noise-added point cloud into spherical domains respectively, wherein the radius of each divided spherical domain is a multiple of the size of the vision task target;
a module for calculating the direction vector of each spherical domain point set: calculating eigenvalues and eigenvectors of the spherical domain point set and calculating the direction vector of the spherical domain point set from the eigenvalues and eigenvectors;
a module for calculating vector differences: calculating the vector difference between the original point cloud and the noise-added point cloud in the same spherical domain;
a module for calculating the quality evaluation score: calculating the quality evaluation score with the attribute of each spherical neighborhood as the weight;
wherein the direction vector of the spherical domain point set is calculated either by multiplying the largest eigenvalue by the corresponding eigenvector or by accumulating the products of the eigenvalues and their corresponding eigenvectors;
and the attribute of each spherical neighborhood is the number of points in that neighborhood.
11. The machine-vision-task-oriented objective quality evaluation device for sparse point clouds according to claim 10, wherein
the sparse point cloud is a radar point cloud.
12. The machine-vision-task-oriented objective quality evaluation device for sparse point clouds according to claim 10 or claim 11, further comprising:
a data validity judging module: feeding the original point cloud and the noise-added point cloud into a vision task and calculating the corresponding accuracy index; if the quality evaluation score and the accuracy index have a linear relationship, the sparse point cloud objective quality evaluation device can be used to estimate the quality of service corresponding to the data; otherwise it cannot.
13. The machine-vision-task-oriented objective quality evaluation device for sparse point clouds according to claim 10 or claim 11, wherein
the multiple of the vision task target size used as the radius of the divided spherical domains is not more than 2 times half the length of the longest edge of the vision task target.
14. The machine-vision-task-oriented objective quality evaluation device for sparse point clouds according to claim 10 or claim 11, wherein
the vector difference between the original point cloud and the noise-added point cloud in the same spherical domain is calculated with the L1 norm or the L2 norm.
CN201911233989.XA 2019-12-05 2019-12-05 Method and device for objective quality evaluation of radar point cloud for machine vision tasks Active CN113034419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911233989.XA CN113034419B (en) 2019-12-05 2019-12-05 Method and device for objective quality evaluation of radar point cloud for machine vision tasks

Publications (2)

Publication Number Publication Date
CN113034419A CN113034419A (en) 2021-06-25
CN113034419B true CN113034419B (en) 2022-09-09

Family

ID=76450723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911233989.XA Active CN113034419B (en) 2019-12-05 2019-12-05 Method and device for objective quality evaluation of radar point cloud for machine vision tasks

Country Status (1)

Country Link
CN (1) CN113034419B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452583B (en) * 2023-06-14 2023-09-12 南京信息工程大学 Point cloud defect detection method, device, system and storage medium
CN117611805B (en) * 2023-12-20 2024-07-19 济南大学 A method and device for extracting 3D abnormal regions from regular 3D semantic point clouds

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN107749079A (en) * 2017-09-25 2018-03-02 北京航空航天大学 A kind of quality evaluation of point cloud and unmanned plane method for planning track towards unmanned plane scan rebuilding
CN108564650A (en) * 2018-01-08 2018-09-21 南京林业大学 Shade tree target recognition methods based on vehicle-mounted 2D LiDAR point clouds data
CN110246112A (en) * 2019-01-21 2019-09-17 厦门大学 Three-dimensional point cloud quality evaluating method in the room laser scanning SLAM based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Quality Metric for 3D LiDAR Point Cloud based on Vision Tasks; Heng Zhao et al.; 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB); 2021-03-19; pp. 1-5 *
Derivation of tree skeletons and error assessment using LiDAR point cloud data of varying quality; M. Bremer et al.; ISPRS Journal of Photogrammetry and Remote Sensing; 2013-04-15; pp. 39-50 *
Research and prospects of quality evaluation techniques for terrestrial 3D laser scanning point clouds (地面三维激光扫描点云质量评价技术研究与展望); 花向红 et al.; Geospatial Information (《地理空间信息》); 2018-08-31; Vol. 16, No. 8; pp. 1-7 *

Also Published As

Publication number Publication date
CN113034419A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN111862222B (en) Target detection method and electronic equipment
CN113486961A (en) Radar RD image target detection method and system based on deep learning under low signal-to-noise ratio and computer equipment
CN103164692A (en) Special vehicle instrument automatic identification system and algorithm based on computer vision
Zhang et al. An improved edge detection algorithm based on canny operator
Pyo et al. Front collision warning based on vehicle detection using CNN
CN113034419B (en) Method and device for objective quality evaluation of radar point cloud for machine vision tasks
WO2022099528A1 (en) Method and apparatus for calculating normal vector of point cloud, computer device, and storage medium
US20210366203A1 (en) Method of processing point cloud data based on neural network
CN114219936A (en) Object detection method, electronic device, storage medium, and computer program product
CN104463876A (en) Adaptive-filtering-based rapid multi-circle detection method for image under complex background
CN117011274A (en) Automatic glass bottle detection system and method thereof
Choi et al. Comparative analysis of generalized intersection over union and error matrix for vegetation cover classification assessment
JP2014106725A (en) Point group analyzer, point group analysis method and point group analysis program
CN113902898A (en) Training of target detection model, target detection method, device, equipment and medium
Gálvez et al. Immunological-based approach for accurate fitting of 3D noisy data points with Bézier surfaces
CN115937520A (en) Point cloud moving target segmentation method based on semantic information guidance
CN109978855A (en) A kind of method for detecting change of remote sensing image and device
George On convergence of regularized modified Newton's method for nonlinear ill-posed problems
CN112906519B (en) Vehicle type identification method and device
CN106485716A (en) A kind of many regarding SAR image segmentation method with Gamma mixed model based on region division
CN118778059A (en) A safety protection system and method for urban railway platform doors based on laser detection
Xue et al. Detection of Various Types of Metal Surface Defects Based on Image Processing.
Vishwakarma et al. Two‐dimensional DFT with sliding and hopping windows for edge map generation of road images
CN102214292A (en) Illumination processing method for human face images
Daudt et al. Learning to understand earth observation images with weak and unreliable ground truth

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant