CN114764746A - Super-resolution method and device for laser radar, electronic device and storage medium - Google Patents
- Publication number
- Publication number: CN114764746A (application CN202111109236.5A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06F18/2415: Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/045: Combinations of networks
- G06N3/08: Learning methods
- G06N3/088: Non-supervised learning, e.g. competitive learning
Abstract
The present invention provides a lidar super-resolution method and apparatus, an electronic device, and a storage medium. The method includes: inputting an original point cloud into a discriminative model to obtain shape embeddings of the point cloud patches corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data; concatenating the three-dimensional coordinates of target grid points at the target resolution with their corresponding shape embedding values and inputting the result into a generative model to obtain the signed distance function values of the target grid points, where the shape embedding value of each target grid point is obtained by linear interpolation from the shape embeddings of the point cloud patches, and the signed distance function values are used to obtain the super-resolution reconstruction result. The discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data. The invention enables fast super-resolution reconstruction of new lidar data at arbitrary resolution.
Description
Technical Field
The present invention relates to the field of optics, and in particular to a lidar super-resolution method and apparatus, an electronic device, and a storage medium.
Background Art
Lidar is an optical remote sensing technology that measures the distance and other parameters of a target by emitting pulsed laser light at it. Super-resolution is the process of increasing the resolution of an image by hardware or software means. A signed distance function assigns to each point of a metric space its signed distance to the boundary of a subset of that space: the value is positive for points inside the region, negative outside, and zero on the boundary.
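The sign convention above can be made concrete with a minimal sketch (the sphere and its parameters are illustrative only, not part of the invention):

```python
import math

def sphere_sdf(point, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere surface, using the convention above:
    positive inside the region, negative outside, zero on the boundary."""
    d = math.dist(point, center)  # Euclidean distance to the center
    return radius - d

print(sphere_sdf((0.0, 0.0, 0.0)))  # center of the sphere: +1.0 (inside)
print(sphere_sdf((1.0, 0.0, 0.0)))  # on the surface: 0.0
print(sphere_sdf((2.0, 0.0, 0.0)))  # outside: -1.0
```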
There are many existing techniques for processing lidar data alone and for super-resolution reconstruction of images, but super-resolution reconstruction of lidar data itself has received little study. Moreover, most of these techniques combine multiple image frames, or combine 3D lidar data with 2D image data, to achieve super-resolution reconstruction; no technique performs super-resolution reconstruction directly on a single frame of 3D lidar data, and the existing approaches suffer from high complexity, slow processing, and high equipment cost. More importantly, prior-art super-resolution reconstruction is limited to a specified high resolution and cannot reconstruct to an arbitrary resolution.
Summary of the Invention
The present invention provides a lidar super-resolution method and apparatus, an electronic device, and a storage medium, so as to overcome the technical defects in the prior art.
The present invention provides a lidar super-resolution method, comprising:
inputting an original point cloud into a discriminative model to obtain shape embeddings of the point cloud patches corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data;
concatenating the three-dimensional coordinates of target grid points at the target resolution with their corresponding shape embedding values and inputting the result into a generative model to obtain the signed distance function values of the target grid points, where the shape embedding value of each target grid point is obtained by linear interpolation from the shape embeddings of the point cloud patches, and the signed distance function values are used to form the super-resolution reconstructed point cloud;
wherein the discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data.
According to the lidar super-resolution method of the present invention, after the three-dimensional coordinates of the target grid points and their corresponding shape embedding values are concatenated and input into the generative model to obtain the signed distance function values of the target grid points, the method includes:
selecting, based on a comparison between the absolute value of each target grid point's signed distance function value and a preset threshold, the grid points used for optimized completion, which form the super-resolution reconstructed point cloud.
According to the lidar super-resolution method of the present invention, selecting the grid points used for optimized completion based on this comparison includes:
comparing the absolute value of the signed distance function value of the target grid point with the preset threshold to obtain a comparison result;
if the comparison result is that the absolute value of the signed distance function value of the target grid point is smaller than the preset threshold, taking the target grid point as a grid point used for optimized completion.
According to the lidar super-resolution method of the present invention, the method further comprises:
inputting the original point cloud into the discriminative model to obtain a corresponding completed point cloud, at the same resolution, with class probability values;
wherein the discriminative model is obtained by training on point cloud sample data and the corresponding reconstructed point cloud class samples.
According to the lidar super-resolution method of the present invention, the loss function for co-training the discriminative model and the generative model is $L = L_1 + L_2 + L_3$, where:
$L_1$ is the first loss term of the discriminative model, a binary cross-entropy loss: $L_1 = -\sum_{i=1}^{m}\sum_{j=1}^{n_i}\left[y_{i,j}\log p_{i,j} + (1-y_{i,j})\log(1-p_{i,j})\right]$, where $i$ indexes the layers of the discriminative model ($m$ layers in total), $j$ indexes the points of the grid obtained at layer $i$ ($n_i$ points at layer $i$), $y_{i,j}$ is the true probability that the $j$-th point of layer $i$ exists, and $p_{i,j}$ is the predicted probability that the $j$-th point of layer $i$ exists;
$L_2$ is the second loss term of the discriminative model, a multi-class cross-entropy loss: $L_2 = -\sum_{i=1}^{n}\sum_{c=1}^{k} y_{i,c}\log p_{i,c}$, where $n$ is the number of points in the completed point cloud, $k$ is the total number of classes, $y_{i,c}$ is the true probability that the $i$-th point belongs to class $c$, and $p_{i,c}$ is the predicted probability that the $i$-th point belongs to class $c$;
$L_3$ is the loss of the generative model: $L_3 = \int_{\Omega}\bigl|\lVert\nabla_x\varphi(x)\rVert - 1\bigr|\,dx + \int_{\Omega_0}\bigl(|\varphi(x)| + 1 - \langle\nabla_x\varphi(x), n(x)\rangle\bigr)\,dx + \int_{\Omega\setminus\Omega_0}\psi(\varphi(x))\,dx$, where $\Omega$ is the whole three-dimensional space, $\Omega_0$ is the object surface, $\varphi$ is the signed distance function represented by the generative model, $x$ is a three-dimensional coordinate point, $\nabla_x\varphi(x)$ is the gradient of the signed distance function, $n(x)$ is the normal vector at the coordinate point $x$, and $\psi(\varphi(x)) = \exp(-\alpha|\varphi(x)|)$ with $\alpha \gg 1$.
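The two cross-entropy terms of the discriminative model described above can be sketched in plain Python (an illustration of the definitions, not the patent's implementation; the generative term involves integrals over 3D space and is omitted, and the toy probability values below are hypothetical):

```python
import math

def binary_ce(y, p):
    """First discriminative loss L1: binary cross-entropy summed over all
    layers i and grid points j (y[i][j], p[i][j] as defined in the text)."""
    return -sum(
        yij * math.log(pij) + (1 - yij) * math.log(1 - pij)
        for yi, pi in zip(y, p)
        for yij, pij in zip(yi, pi)
    )

def multiclass_ce(y, p):
    """Second discriminative loss L2: multi-class cross-entropy over the n
    completed points and k classes (y[i][c], p[i][c] as defined in the text)."""
    return -sum(
        yic * math.log(pic)
        for yi, pi in zip(y, p)
        for yic, pic in zip(yi, pi)
    )

# Toy values for illustration only:
y_exist = [[1, 0]]        # one layer, two grid points, first point exists
p_exist = [[0.9, 0.2]]    # predicted existence probabilities
y_cls   = [[1, 0, 0]]     # one point, three classes, true class is 0
p_cls   = [[0.7, 0.2, 0.1]]
print(binary_ce(y_exist, p_exist))     # -(log 0.9 + log 0.8), about 0.3285
print(multiclass_ce(y_cls, p_cls))     # -log 0.7, about 0.3567
```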
The present invention further provides a lidar super-resolution apparatus, comprising:
a shape embedding determination module, configured to input an original point cloud into a discriminative model to obtain shape embeddings of the point cloud patches corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data;
a signed distance function value determination module, configured to concatenate the three-dimensional coordinates of target grid points at the target resolution with their corresponding shape embedding values and input the result into a generative model to obtain the signed distance function values of the target grid points, where the shape embedding value of each target grid point is obtained by linear interpolation from the shape embeddings of the point cloud patches, and the signed distance function values are used to form the super-resolution reconstructed point cloud;
wherein the discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data.
According to the lidar super-resolution apparatus of the present invention, the apparatus further comprises:
a grid point selection module, configured to select, based on a comparison between the absolute value of each target grid point's signed distance function value and a preset threshold, the grid points used for optimized completion;
a super-resolution reconstruction module, configured to perform super-resolution reconstruction on the target point cloud based on the grid points used for optimized completion, obtaining an optimized target point cloud.
According to the lidar super-resolution apparatus of the present invention, the apparatus further comprises a classification module, configured to:
input the original point cloud into the discriminative model to obtain a corresponding completed point cloud, at the same resolution, with class probability values;
wherein the discriminative model is obtained by training on point cloud sample data and the corresponding reconstructed point cloud class samples.
The present invention further provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the steps of any of the lidar super-resolution methods described above.
The present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the lidar super-resolution methods described above.
By combining a generative model with a discriminative model and learning from existing high-resolution data, the present invention achieves fast super-resolution reconstruction of new lidar data. Because the invention models the signed distance function of scene objects, it can predict the signed distance function needed to reconstruct to any resolution, achieving a continuous representation of object surfaces, and can therefore super-resolve lidar point cloud data at arbitrary resolution. By learning from large volumes of scene data, the completion process exploits big-data prior knowledge on top of geometric principles, yielding better super-resolution reconstruction than the prior art. The target point cloud data scanned by the lidar can be completed and upsampled quickly, so the optimized and refined point cloud result can be fed back in real time. The invention requires no complex hardware: super-resolution reconstruction is achieved simply by using the described electronic device and storage medium and executing the program on the point cloud data, so it can be applied directly to existing systems that use lidar devices and is highly portable.
Brief Description of the Drawings
To describe the technical solutions of the present invention or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description illustrate some embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of the lidar super-resolution method provided by the present invention;
FIG. 2 is a schematic structural diagram of the lidar super-resolution apparatus provided by the present invention;
FIG. 3 is a schematic structural diagram of the electronic device provided by the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
A lidar super-resolution method of the present invention is described below with reference to FIG. 1. The method comprises:
S1. Input an original point cloud into a discriminative model to obtain shape embeddings of the point cloud patches corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data.
The original point cloud is used to generate the shape embeddings of its point cloud patches. Constraining the scanned lidar point cloud data yields an original point cloud containing several point cloud patches. Specifically, the constraint may be: restricting the point cloud data scanned by the lidar to a specified cuboid region, which is the part of the scene of interest in the actual application.
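The cuboid constraint described above can be sketched as follows (the region bounds and sample points are hypothetical):

```python
import numpy as np

def crop_to_cuboid(points, lo, hi):
    """Keep only the lidar points inside the axis-aligned cuboid [lo, hi],
    the region of interest mentioned in the text.
    points: (N, 3) array; lo, hi: length-3 lower/upper corners."""
    points = np.asarray(points, dtype=float)
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

scan = np.array([[0.5, 0.5, 0.5], [2.0, 0.0, 0.0], [-1.0, 0.2, 0.3]])
roi = crop_to_cuboid(scan, lo=(0, 0, 0), hi=(1, 1, 1))
print(roi)  # only [0.5, 0.5, 0.5] lies inside the unit cube
```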
In the discriminative model, the first half uses a U-Net structure followed by several convolutional neural network layers; it completes the sparse point cloud, classifies each point, and outputs the classification result. The second half consists of several further convolutional neural network layers and finally outputs the shape embeddings of the point cloud patches. The discriminative model thus has the following input/output form: input, the constrained lidar point cloud; outputs, the corresponding completed point cloud with class probability values at the same resolution, and the shape embeddings of the point cloud patches.
S2. Concatenate the three-dimensional coordinates of target grid points at the target resolution with their corresponding shape embedding values and input the result into a generative model to obtain the signed distance function values of the target grid points. The shape embedding value of each target grid point is obtained by linear interpolation from the shape embeddings of the point cloud patches. The generative model is a neural network composed of several fully connected layers. The signed distance function values are used to form the super-resolution reconstructed point cloud.
When these points are input into the generative model, not only the three-dimensional coordinates (x, y, z) are input: each coordinate triple is concatenated with the shape embedding of that target grid point, obtained by linear interpolation from the patch shape embeddings predicted by the discriminative model. The input therefore has the form (S1, S2, ..., Sn, x, y, z), where (S1, S2, ..., Sn) is the n-dimensional shape embedding vector corresponding to the point (x, y, z).
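The concatenation can be sketched as follows (the embedding dimension and point count are hypothetical):

```python
import numpy as np

def build_generator_input(shape_embeddings, coords):
    """Concatenate each point's interpolated shape embedding (n dims) with its
    3D coordinates, giving the (S1, ..., Sn, x, y, z) input described above.
    shape_embeddings: (N, n); coords: (N, 3) -> returns (N, n + 3)."""
    return np.concatenate([shape_embeddings, coords], axis=1)

emb = np.zeros((4, 8))       # hypothetical 8-dim embeddings for 4 grid points
xyz = np.random.rand(4, 3)   # hypothetical target grid point coordinates
inp = build_generator_input(emb, xyz)
print(inp.shape)  # (4, 11)
```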
The final output of the training process is the signed distance function value corresponding to each point input into the generative model.
The discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data. During training, a combination of self-supervised and supervised learning is used, and the discriminative model that predicts shape embeddings and the generative model that fits the signed distance function are optimized by gradient descent and backpropagation.
Specifically, in the training stage, the constrained original point cloud is first input into a discriminative model; this network maps the original sparse point cloud scanned in each lidar frame to the shape embeddings of its corresponding uniformly distributed, low-sparsity point cloud data. Then the points of the optimized reconstructed point cloud corresponding to that frame, points sampled at random within the constrained space, and the shape embedding corresponding to each point are input into the generative model, and the generative model is trained by fitting. In the generative model, multiple frames of point cloud data are trained simultaneously, and all point clouds share the same generative model during training. To distinguish the signed distance functions of different point clouds, the three-dimensional coordinates in the input are concatenated with a shape embedding value that characterizes the scene and location of the point. Because each point cloud frame contains many similar scenes, each scene is evenly divided into several point cloud patches when training the discriminative model that predicts shape embeddings, and each patch is assigned a shape embedding value. For each point of the optimized reconstructed point cloud provided in the training data, the corresponding shape embedding value is obtained by linear interpolation and then fed into the generative model for training. Since the entire computation is differentiable, training can be carried out end-to-end in this way.
The three-dimensional coordinates input into the generative model can be improved by encoding, i.e., by mapping the three-dimensional coordinates to m dimensions. The input to the generative model then becomes (m + n)-dimensional data: m encoded coordinate dimensions plus the n-dimensional shape embedding. That is, x, y, and z are each encoded as:
(sin(πx), cos(πx), sin(2πx), cos(2πx), ...)
(sin(πy), cos(πy), sin(2πy), cos(2πy), ...)
(sin(πz), cos(πz), sin(2πz), cos(2πz), ...)
each of dimension m/3.
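This encoding can be sketched as follows. The text shows only the first two frequencies per axis, so the doubling frequency progression (π, 2π, 4π, ...) used here is an assumption:

```python
import numpy as np

def positional_encoding(coords, num_freqs):
    """Map each coordinate x to (sin(pi x), cos(pi x), sin(2 pi x), cos(2 pi x), ...)
    with num_freqs sin/cos pairs per axis, so a 3D point becomes
    m = 3 * 2 * num_freqs encoded values (m/3 per axis)."""
    coords = np.asarray(coords, dtype=float)       # (N, 3)
    freqs = np.pi * (2.0 ** np.arange(num_freqs))  # pi, 2pi, 4pi, ... (assumed)
    angles = coords[..., None] * freqs             # (N, 3, num_freqs)
    # interleave sin and cos per frequency, then flatten per point
    enc = np.stack([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(coords.shape[0], -1)        # (N, 6 * num_freqs)

pts = np.array([[0.1, 0.2, 0.3]])
print(positional_encoding(pts, num_freqs=4).shape)  # (1, 24), i.e. m = 24
```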
By combining a generative model with a discriminative model and learning from existing high-resolution data, the present invention achieves fast super-resolution reconstruction of new lidar data. Because the invention models the signed distance function of scene objects, achieving a continuous representation of object surfaces, it can super-resolve lidar point cloud data at any resolution. This property lets users choose the reconstructed resolution freely according to the application scenario, without complex algorithm modification or hardware replacement. At the same time, modeling the signed distance function of objects makes rendering them possible, satisfying more application needs.
By learning from large volumes of scene data, the completion process exploits big-data prior knowledge on top of geometric principles, yielding better super-resolution reconstruction than the prior art. The target point cloud data obtained by lidar scanning can be completed and upsampled quickly, so the optimized point cloud result can be fed back in real time. The invention requires no complex hardware: super-resolution reconstruction is achieved simply by using the described electronic device and storage medium and executing the program on the point cloud data, so it can be applied directly to existing systems that use lidar devices and is highly portable.
According to the lidar super-resolution method of the present invention, after the three-dimensional coordinates of the target grid points and their corresponding shape embedding values are concatenated and input into the generative model to obtain the signed distance function values of the target grid points, the method includes:
selecting, based on a comparison between the absolute value of each target grid point's signed distance function value and a preset threshold, the grid points used for optimized completion. All the grid points so obtained constitute the super-resolution reconstructed point cloud.
That is, once the signed distance function values are obtained, a small preset threshold can be set (a number close to zero, e.g. 0.001 or 0.01). Points whose signed distance function value has an absolute value below the threshold are treated as points where the signed distance function is zero, i.e. points on the object surface; other points are treated as not on the surface. This yields the predicted result of optimizing, refining, and super-resolving the input point cloud.
根据本发明所述的激光雷达的超分辨率方法,其中,所述基于所述目标网格点的符号距离函数值的绝对值和预设阈值的比较结果,选取出用于优化补全的网格点,包括:According to the super-resolution method of lidar according to the present invention, wherein, based on the comparison result of the absolute value of the signed distance function value of the target grid point and the preset threshold, the grid for optimization and completion is selected. Grid points, including:
comparing the absolute value of the target grid point's signed distance function value with the preset threshold to obtain a comparison result;

if the comparison result is that the absolute value is smaller than the preset threshold, taking the target grid point as a grid point used for optimized completion;

if the comparison result is that the absolute value is not smaller than the preset threshold, treating the target grid point as a grid point not on the object surface; such points can be discarded without further consideration.
According to the lidar super-resolution method of the present invention, the method further includes:

inputting the original point cloud into the discriminative model to obtain the corresponding completed point cloud, at the same resolution, with per-point class probability values;

where the discriminative model is trained on point cloud sample data and the corresponding reconstructed point cloud class samples.
During super-resolution reconstruction, the sparse point cloud is fed into the super-resolution network, producing (a) the final predicted reconstructed point cloud A at an arbitrary resolution and (b) a reconstructed point cloud B, at the input resolution, carrying class information. Then, for each point a in A, the nearest point b in B is found by nearest-neighbor search, and b's class label is assigned to a. In this way every point in A acquires class information, yielding a super-resolution point cloud reconstruction at arbitrary resolution with class labels.
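The label-transfer step from B to A can be sketched as follows (brute-force nearest-neighbor search; names are illustrative, and a KD-tree such as `scipy.spatial.cKDTree` would replace the pairwise distance matrix at realistic point counts):

```python
import numpy as np

def transfer_labels(cloud_a, cloud_b, labels_b):
    """For each point a in cloud A, copy the class label of its
    nearest neighbour b in cloud B."""
    # Pairwise squared distances, shape (len(A), len(B)).
    d2 = ((cloud_a[:, None, :] - cloud_b[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)          # index of closest b for each a
    return labels_b[nearest]

# Toy example: two points in A, two labelled points in B.
a = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
b = np.array([[0.1, 0.0, 0.0], [1.9, 0.0, 0.0]])
lb = np.array([3, 7])
la = transfer_labels(a, b, lb)
```

Each point in A receives the label of its closest counterpart in B.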
Since the output of the discriminative model is connected to the generative model, the entire computation is differentiable, so both models can be trained simultaneously by optimizing the loss function below. According to the lidar super-resolution method of the present invention, the loss function for co-training the discriminative model and the generative model is composed of the loss terms defined below.
Here, $\mathcal{L}_1$ denotes the first loss term of the discriminative model, a binary cross-entropy loss (its training target is the shape embedding output for the point cloud blocks):

$$\mathcal{L}_1 = -\sum_{i=1}^{m}\sum_{j=1}^{n_i}\bigl[y_{i,j}\log p_{i,j} + (1-y_{i,j})\log(1-p_{i,j})\bigr]$$

where $i$ indexes the $m$ layers of the discriminative model; $j$ indexes the $n_i$ grid points produced at layer $i$; $y_{i,j}$ is the true probability that the $j$-th point of layer $i$ exists, and $p_{i,j}$ is the corresponding predicted probability.
$\mathcal{L}_2$ denotes the second loss term of the discriminative model, a multi-class cross-entropy loss (its training target is the completed point cloud, at the input resolution, with class probability values):

$$\mathcal{L}_2 = -\sum_{i=1}^{n}\sum_{c=1}^{k} y_{i,c}\log p_{i,c}$$

where $n$ is the number of points in the completed point cloud, $k$ the total number of classes, $y_{i,c}$ the true probability that point $i$ belongs to class $c$ (for each point $i$, exactly one class has true probability 1 and all others 0, i.e. the true class is unique), and $p_{i,c}$ the predicted probability that point $i$ belongs to class $c$.
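Both discriminative-model loss terms are standard cross-entropy losses; a minimal numpy sketch (function names and the toy probabilities are illustrative):

```python
import numpy as np

def bce_loss(y, p, eps=1e-12):
    """Binary cross-entropy summed over all (layer i, point j) pairs,
    given as flat arrays: -sum[y*log(p) + (1-y)*log(1-p)]."""
    p = np.clip(p, eps, 1.0 - eps)       # guard against log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

def multiclass_ce_loss(y, p, eps=1e-12):
    """Multi-class cross-entropy -sum_i sum_c y[i,c]*log(p[i,c]),
    with one-hot ground truth y of shape (n, k)."""
    p = np.clip(p, eps, 1.0)
    return -(y * np.log(p)).sum()

# Toy values: two points, two classes.
bce = bce_loss(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
ce = multiclass_ce_loss(np.array([[1.0, 0.0], [0.0, 1.0]]),
                        np.array([[0.9, 0.1], [0.2, 0.8]]))
```

For these toy values `bce` equals −2·log(0.9) ≈ 0.211 and `ce` equals −(log 0.9 + log 0.8) ≈ 0.329.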
In the discriminative model, the first half uses a U-Net structure followed by several convolutional neural network layers: it completes the sparse point cloud, classifies each point, and outputs the classification result. The second half appends further convolutional layers and finally outputs the shape embedding of each point cloud block. The discriminative model therefore has the following interface: input, the constrained lidar point cloud; outputs, the corresponding completed point cloud with class probability values at the same resolution, and the shape embeddings of the point cloud blocks. Classification and shape-embedding generation run in parallel and are co-trained during the training phase.
$\mathcal{L}_G$ denotes the loss function of the generative model:

$$\mathcal{L}_G = \int_{\Omega}\bigl(\lVert\nabla\varphi(x)\rVert - 1\bigr)^2\,dx \;+\; \int_{\Omega_0}\Bigl(|\varphi(x)| + \bigl(1 - \langle\nabla\varphi(x),\, n(x)\rangle\bigr)\Bigr)\,dx \;+\; \int_{\Omega\setminus\Omega_0}\psi(\varphi(x))\,dx$$

where $\Omega$ is the whole three-dimensional space, $\Omega_0$ the object surface, $\varphi$ the signed distance function represented by the generative model, $x$ a three-dimensional coordinate point, $\nabla\varphi$ the gradient of the signed distance function, $n(x)$ the normal vector at $x$, and $\psi(\varphi(x)) = \exp(-\alpha|\varphi(x)|)$ with $\alpha \gg 1$. When this loss is minimized: the first integral drives the norm of the gradient of the signed distance function toward 1 throughout space (reflecting the physical property of a signed distance field); in the second integral, the first summand drives the signed distance function value of surface points to 0 (a surface point is at distance 0 from the surface), while the second summand aligns the gradient at surface points with the normal vector (physically, the gradient points in the direction of fastest change of the function value, which for a signed distance function is exactly the surface normal); the third integral uses the exponential so that the signed distance values of off-surface points are pushed toward infinity.
When the generative model is trained with this loss: for the points of each frame's optimized reconstructed point cloud (which lie on the object surface), their predicted signed distance function values are driven to zero. Points sampled at random inside the constrained space (not on the surface) could in principle be driven toward their true signed distance values, but since super-resolution reconstruction only cares about the object surface, i.e. the zero level set of the signed distance function, the predicted values of these off-surface points are instead driven toward infinity, sharpening the model's discrimination of the surface.
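In practice the three integral terms are approximated by averages over sampled points. A sketch under that assumption, taking the sampled SDF values, gradients, and normals as given inputs (all names are illustrative, not from the patent):

```python
import numpy as np

def sdf_loss(grad_all, phi_surf, grad_surf, normals, phi_off, alpha=100.0):
    """Monte-Carlo sketch of the three loss terms:
      eikonal : (||grad phi|| - 1)^2 over samples in the whole space
      surface : |phi| + (1 - <grad phi, n>) over surface samples
      off     : exp(-alpha * |phi|) over off-surface samples
    """
    eikonal = ((np.linalg.norm(grad_all, axis=1) - 1.0) ** 2).mean()
    align = 1.0 - (grad_surf * normals).sum(axis=1)   # gradient/normal mismatch
    surface = (np.abs(phi_surf) + align).mean()
    off = np.exp(-alpha * np.abs(phi_off)).mean()
    return eikonal + surface + off

# Ideal inputs: unit gradients, zero SDF on the surface, gradients equal to
# normals, and large |SDF| off the surface -- the loss should be near zero.
loss = sdf_loss(
    grad_all=np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]),
    phi_surf=np.zeros(2),
    grad_surf=np.array([[0, 0, 1.0], [0, 0, 1.0]]),
    normals=np.array([[0, 0, 1.0], [0, 0, 1.0]]),
    phi_off=np.array([5.0, 6.0]),
)
```

A perfectly fitted signed distance field makes every term vanish, which is what the check below exercises.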
Lidar point clouds are highly sparse. Although a single, degenerate generative model could perform super-resolution reconstruction directly on the raw point cloud data, the result would suffer from uneven point distribution, poor reconstruction detail, and significant noise. Achieving good super-resolution therefore requires compensating for this sparsity, i.e. completing the sparse regions. Combining the generative model with the discriminative model, as in the present invention, lets the completion process proceed alongside super-resolution reconstruction.
Referring to FIG. 2, the lidar super-resolution apparatus provided by the present invention is described below; it corresponds to, and may be read together with, the lidar super-resolution method described above. The lidar super-resolution apparatus includes:
a shape embedding determination module 10, configured to input the original point cloud into the discriminative model and obtain the shape embeddings of the point cloud blocks corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data;
The original point cloud is used to generate the shape embeddings of its point cloud blocks; constraining the scanned lidar point cloud data yields an original point cloud consisting of several blocks. Concretely, the constraint may be: first restrict the lidar-scanned point cloud to a specified cuboid region, which is the part of the scene of interest in the actual application.
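The cuboid constraint amounts to an axis-aligned box filter; a minimal sketch (names and toy values are illustrative):

```python
import numpy as np

def constrain_to_box(points, box_min, box_max):
    """Keep only points inside the axis-aligned cuboid
    [box_min, box_max] -- the region of interest for the application."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[inside]

# Toy example: only the first point lies inside the unit cube.
pc = np.array([[0.5, 0.5, 0.5],
               [2.0, 2.0, 2.0],
               [-1.0, 0.0, 0.0]])
roi = constrain_to_box(pc, np.zeros(3), np.ones(3))
```

The constrained cloud `roi` contains a single point, (0.5, 0.5, 0.5).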
a signed distance function value determination module 20, configured to concatenate the three-dimensional coordinates of the target grid points at the target resolution with their corresponding shape-embedding values and input them into the generative model, obtaining the signed distance function values of the target grid points; each target grid point's shape-embedding value is obtained by linear interpolation from the block shape embeddings; the signed distance function values are used to form the super-resolution reconstructed point cloud;

where the discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data.
When these points are fed into the generative model, the input is not just the point's three-dimensional coordinates (x, y, z values); the coordinates are concatenated with the per-point shape-embedding value (i.e. shape-embedding vector) obtained by linearly interpolating the block shape embeddings predicted by the discriminative model. The input thus has the form (S<sub>1</sub>, S<sub>2</sub>, ..., S<sub>n</sub>, x, y, z), where (S<sub>1</sub>, S<sub>2</sub>, ..., S<sub>n</sub>) is the n-dimensional shape-embedding vector of the point (x, y, z).
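The described input construction is a simple per-point concatenation; a sketch (function name and toy values are illustrative):

```python
import numpy as np

def build_generator_input(coords, embeddings):
    """Concatenate each point's n-dimensional shape-embedding vector
    (S1..Sn) with its coordinates (x, y, z), producing rows of the
    form (S1, ..., Sn, x, y, z) for the generative model."""
    return np.concatenate([embeddings, coords], axis=1)

# Two points with 2-dimensional shape embeddings (n = 2).
coords = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
emb = np.array([[0.1, 0.2], [0.3, 0.4]])
X = build_generator_input(coords, emb)
```

Each input row has n + 3 = 5 columns: the embedding followed by the coordinates.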
The final output of the training process is the signed distance function value corresponding to each point input into the generative model.

The discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data. Training combines self-supervised and supervised learning, optimizing the shape-embedding-predicting discriminative model and the signed-distance-function-fitting generative model by gradient descent and backpropagation.
Specifically, in the training phase: the constrained original point cloud is first fed into a discriminative model, which maps each frame's raw sparse lidar point cloud to shape embeddings of its corresponding uniformly distributed, low-sparsity point cloud. Then the points of that frame's optimized reconstructed point cloud, points sampled at random within the constrained space, and each point's shape embedding are fed into the generative model for fitting. The generative model is trained on many frames of point cloud data at once, all frames sharing a single model; to distinguish the signed distance functions of different point clouds, the input concatenates a point's three-dimensional coordinates with a shape-embedding value that encodes the scene and location of that point. Since many similar scenes recur across frames, each scene is evenly partitioned into point cloud blocks when training the discriminative model that predicts shape embeddings, and each block is assigned a shape-embedding value. For each point of the optimized reconstructed point cloud supplied as training data, the point's shape-embedding value is obtained by linear interpolation and then fed into the generative model for training. Because the whole computation is differentiable, training proceeds end to end in this manner.
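The "linear interpolation" of per-block shape embeddings can be sketched as trilinear interpolation over a regular lattice of block embeddings; the patent does not fix the exact scheme, so the function and all names below are assumptions:

```python
import numpy as np

def interp_embedding(point, grid_emb, origin, spacing):
    """Trilinearly interpolate a per-block embedding grid at `point`.

    grid_emb has shape (nx, ny, nz, d): one d-dimensional embedding per
    block corner; origin/spacing define the block lattice.
    """
    t = (np.asarray(point, dtype=float) - origin) / spacing  # continuous index
    # Clamp the base corner so the 2x2x2 neighbourhood stays in bounds.
    i0 = np.clip(np.floor(t).astype(int), 0,
                 np.array(grid_emb.shape[:3]) - 2)
    f = t - i0                                               # fractional part
    out = np.zeros(grid_emb.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * grid_emb[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

# 2x2x2 lattice of 1-D embeddings that grow linearly along x.
grid = np.zeros((2, 2, 2, 1))
grid[1, :, :, 0] = 1.0
mid = interp_embedding([0.5, 0.5, 0.5], grid, np.zeros(3), np.ones(3))
quarter = interp_embedding([0.25, 0.0, 0.0], grid, np.zeros(3), np.ones(3))
```

A point midway between blocks receives the average of their embeddings; a point a quarter of the way along x receives a 0.25-weighted blend.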
According to the lidar super-resolution apparatus of the present invention, the apparatus further includes:

a super-resolution reconstruction module, configured to select, based on the comparison between the absolute signed distance function values of the target grid points and a preset threshold, the grid points used for optimized completion, which constitute the super-resolution reconstructed point cloud.
That is, once the signed distance function values are obtained, a small preset threshold can be chosen (a number close to zero, e.g. 0.001 or 0.01). Points whose absolute signed distance function value is below the threshold are treated as points where the function is zero, i.e. points on the object surface; all other points are treated as off-surface. This yields the predicted result of optimizing, refining, and super-resolving the input point cloud.
According to the lidar super-resolution apparatus of the present invention, the super-resolution reconstruction module is specifically configured to:

compare the absolute value of the target grid point's signed distance function value with the preset threshold to obtain a comparison result;

if the comparison result is that the absolute value is smaller than the preset threshold, take the target grid point as a grid point used for optimized completion;

if the comparison result is that the absolute value is not smaller than the preset threshold, treat the target grid point as a grid point not on the object surface; such points can be discarded without further consideration.
Since the output of the discriminative model is connected to the generative model, the entire computation is differentiable, so both models can be trained simultaneously by optimizing the loss function below. According to the lidar super-resolution method of the present invention,

the loss function for co-training the discriminative model and the generative model is composed of the loss terms defined below.
Here, $\mathcal{L}_1$ denotes the first loss term of the discriminative model (its training target is the shape embedding output for the point cloud blocks):

$$\mathcal{L}_1 = -\sum_{i=1}^{m}\sum_{j=1}^{n_i}\bigl[y_{i,j}\log p_{i,j} + (1-y_{i,j})\log(1-p_{i,j})\bigr]$$

where $i$ indexes the $m$ layers of the discriminative model; $j$ indexes the $n_i$ grid points produced at layer $i$; $y_{i,j}$ is the true probability that the $j$-th point of layer $i$ exists, and $p_{i,j}$ is the corresponding predicted probability.
$\mathcal{L}_2$ denotes the second loss term of the discriminative model (its training target is the completed point cloud, at the input resolution, with class probability values):

$$\mathcal{L}_2 = -\sum_{i=1}^{n}\sum_{c=1}^{k} y_{i,c}\log p_{i,c}$$

where $n$ is the number of points in the completed point cloud, $k$ the total number of classes, $y_{i,c}$ the true probability that point $i$ belongs to class $c$ (for each point $i$, exactly one class has true probability 1 and all others 0, i.e. the true class is unique), and $p_{i,c}$ the predicted probability that point $i$ belongs to class $c$.
In the discriminative model, the first half uses a U-Net structure followed by several convolutional neural network layers: it completes the sparse point cloud, classifies each point, and outputs the classification result. The second half appends further convolutional layers and finally outputs the shape embedding of each point cloud block. The discriminative model therefore has the following interface: input, the constrained lidar point cloud; outputs, the corresponding completed point cloud with class probability values at the same resolution, and the shape embeddings of the point cloud blocks. Classification and shape-embedding generation run in parallel and are co-trained during the training phase.
$\mathcal{L}_G$ denotes the loss function of the generative model:

$$\mathcal{L}_G = \int_{\Omega}\bigl(\lVert\nabla\varphi(x)\rVert - 1\bigr)^2\,dx \;+\; \int_{\Omega_0}\Bigl(|\varphi(x)| + \bigl(1 - \langle\nabla\varphi(x),\, n(x)\rangle\bigr)\Bigr)\,dx \;+\; \int_{\Omega\setminus\Omega_0}\psi(\varphi(x))\,dx$$

where $\Omega$ is the whole three-dimensional space, $\Omega_0$ the object surface, $\varphi$ the signed distance function represented by the generative model, $x$ a three-dimensional coordinate point, $\nabla\varphi$ the gradient of the signed distance function, $n(x)$ the normal vector at $x$, and $\psi(\varphi(x)) = \exp(-\alpha|\varphi(x)|)$ with $\alpha \gg 1$. When this loss is minimized: the first integral drives the norm of the gradient of the signed distance function toward 1 throughout space (reflecting the physical property of a signed distance field); in the second integral, the first summand drives the signed distance function value of surface points to 0 (a surface point is at distance 0 from the surface), while the second summand aligns the gradient at surface points with the normal vector (physically, the gradient points in the direction of fastest change of the function value, which for a signed distance function is exactly the surface normal); the third integral uses the exponential so that the signed distance values of off-surface points are pushed toward infinity.
When the generative model is trained with this loss: for the points of each frame's optimized reconstructed point cloud (which lie on the object surface), their predicted signed distance function values are driven to zero. Points sampled at random inside the constrained space (not on the surface) could in principle be driven toward their true signed distance values, but since super-resolution reconstruction only cares about the object surface, i.e. the zero level set of the signed distance function, the predicted values of these off-surface points are instead driven toward infinity, sharpening the model's discrimination of the surface.

Lidar point clouds are highly sparse. Although a single, degenerate generative model could perform super-resolution reconstruction directly on the raw point cloud data, the result would suffer from uneven point distribution, poor reconstruction detail, and significant noise. Achieving good super-resolution therefore requires compensating for this sparsity, i.e. completing the sparse regions. Combining the generative model with the discriminative model, as in the present invention, lets the completion process proceed alongside super-resolution reconstruction.
According to the lidar super-resolution apparatus of the present invention, the apparatus further includes a classification module configured to:

input the original point cloud into the discriminative model to obtain the corresponding completed point cloud, at the same resolution, with per-point class probability values;

where the discriminative model is trained on point cloud sample data and the corresponding reconstructed point cloud class samples.
FIG. 3 illustrates the physical structure of an electronic device, which may include: a processor 310, a communications interface 320, a memory 330, and a communication bus 340, where the processor 310, the communications interface 320, and the memory 330 communicate with one another via the communication bus 340. The processor 310 can invoke logic instructions in the memory 330 to execute the lidar super-resolution method, which includes:
S1: inputting the original point cloud into the discriminative model to obtain the shape embeddings of the point cloud blocks corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data;

S2: concatenating the three-dimensional coordinates of the target grid points at the target resolution with their corresponding shape-embedding values and inputting them into the generative model to obtain the signed distance function values of the target grid points; each target grid point's shape-embedding value is obtained by linear interpolation from the block shape embeddings; the signed distance function values are used to form the super-resolution reconstructed point cloud;

where the discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data.
Furthermore, the above logic instructions in the memory 330 may be implemented as software functional units and, when sold or used as an independent product, stored on a computer-readable storage medium. On this understanding, the technical solution of the present invention, in essence or in the part contributing over the prior art, may be embodied as a software product stored on a storage medium and containing instructions that cause a computer device (a personal computer, server, network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention. The storage medium includes any medium that can store program code: a USB drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
In another aspect, the present invention provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the program containing instructions which, when executed by a computer, enable the computer to perform the lidar super-resolution method provided above, the method including:
S1: inputting the original point cloud into the discriminative model to obtain the shape embeddings of the point cloud blocks corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data;

S2: concatenating the three-dimensional coordinates of the target grid points at the target resolution with their corresponding shape-embedding values and inputting them into the generative model to obtain the signed distance function values of the target grid points; each target grid point's shape-embedding value is obtained by linear interpolation from the block shape embeddings; the signed distance function values are used to form the super-resolution reconstructed point cloud;

where the discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data.
In yet another aspect, the present invention provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the lidar super-resolution method provided above, the method including:
S1: inputting the original point cloud into the discriminative model to obtain the shape embeddings of the point cloud blocks corresponding to the original point cloud, where the original point cloud is obtained by constraining scanned lidar point cloud data;

S2: concatenating the three-dimensional coordinates of the target grid points at the target resolution with their corresponding shape-embedding values and inputting them into the generative model to obtain the signed distance function values of the target grid points; each target grid point's shape-embedding value is obtained by linear interpolation from the block shape embeddings; the signed distance function values are used to form the super-resolution reconstructed point cloud;

where the discriminative model and the generative model are obtained by co-training on point cloud sample data and the corresponding reconstructed point cloud sample data.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate; components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment. Persons of ordinary skill in the art can understand and implement this without inventive effort.
From the description of the embodiments above, those skilled in the art will clearly understand that each embodiment can be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware. On this understanding, the technical solution, in essence or in the part contributing over the prior art, may be embodied as a software product stored on a computer-readable storage medium such as ROM/RAM, magnetic disk, or optical disc, containing instructions that cause a computer device (a personal computer, server, network device, etc.) to perform the methods of the embodiments or parts thereof.
Finally, it should be noted that the above embodiments only illustrate, and do not limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111109236.5A CN114764746B (en) | 2021-09-22 | 2021-09-22 | Laser radar super-resolution method and device, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111109236.5A CN114764746B (en) | 2021-09-22 | 2021-09-22 | Laser radar super-resolution method and device, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114764746A true CN114764746A (en) | 2022-07-19 |
CN114764746B CN114764746B (en) | 2024-09-06 |
Family
ID=82365187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111109236.5A Active CN114764746B (en) | 2021-09-22 | 2021-09-22 | Laser radar super-resolution method and device, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114764746B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116128727A (en) * | 2023-02-02 | 2023-05-16 | 中国人民解放军国防科技大学 | Super-resolution method, system, equipment and medium for polarized radar image |
WO2024244310A1 (en) * | 2023-06-01 | 2024-12-05 | 北京天玛智控科技股份有限公司 | Three-dimensional laser point cloud vr display method and apparatus |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109493407A (en) * | 2018-11-19 | 2019-03-19 | 腾讯科技(深圳)有限公司 | Realize the method, apparatus and computer equipment of laser point cloud denseization |
US20200158869A1 (en) * | 2018-11-19 | 2020-05-21 | Elmira Amirloo Abolfathi | System, device and method of generating a high resolution and high accuracy point cloud |
CN112561796A (en) * | 2020-12-02 | 2021-03-26 | 西安电子科技大学 | Laser point cloud super-resolution reconstruction method based on self-attention generation countermeasure network |
Non-Patent Citations (2)
Title |
---|
TAKAYUKI SHINOHARA et al.: "Point2Wave: 3-D Point Cloud to Waveform Translation Using a Conditional Generative Adversarial Network With Dual Discriminators", IEEE, 2 September 2021 (2021-09-02) *
GONG Daoran et al.: "Research on super-resolution reconstruction methods for lidar 3D range images", Infrared and Laser Engineering, vol. 49, no. 8, 31 August 2020 (2020-08-31) *
Also Published As
Publication number | Publication date |
---|---|
CN114764746B (en) | 2024-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7373554B2 (en) | Cross-domain image transformation | |
CN112529015B (en) | Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping | |
Gao et al. | LFT-Net: Local feature transformer network for point clouds analysis | |
CN110321910B (en) | Point cloud-oriented feature extraction method, apparatus and device | |
CN111325851B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN111028327B (en) | A processing method, device and equipment for a three-dimensional point cloud | |
WO2022193335A1 (en) | Point cloud data processing method and apparatus, and computer device and storage medium | |
Xue et al. | Robot target recognition using deep federated learning | |
JP2023533907A (en) | Image processing using self-attention-based neural networks | |
CN113449859A (en) | Data processing method and device | |
CN113095333B (en) | Unsupervised feature point detection method and unsupervised feature point detection device | |
CN114418030A (en) | Image classification method, and training method and device of image classification model | |
CN115294563A (en) | 3D point cloud analysis method and device based on Transformer and capable of enhancing local semantic learning ability | |
CN114764746B (en) | Laser radar super-resolution method and device, electronic device and storage medium | |
CN113643303A (en) | Three-dimensional image segmentation method based on two-way attention coding and decoding network | |
WO2023091249A1 (en) | Neural semantic fields for generalizable semantic segmentation of 3d scenes | |
Jemilda et al. | Moving object detection and tracking using genetic algorithm enabled extreme learning machine | |
CN116434303A (en) | Facial expression capturing method, device and medium based on multi-scale feature fusion | |
JP2018124990A (en) | Model generation apparatus, evaluation apparatus, model generation method, evaluation method, and program | |
EP4392935A1 (en) | Robustifying nerf model novel view synthesis to sparse data | |
CN110717405A (en) | Face feature point positioning method, device, medium and electronic equipment | |
CN114663579A (en) | Twin three-dimensional model generation method and device, electronic device and storage medium | |
WO2024234108A1 (en) | Method and system for accelerated operation of layers used in a machine learning model and differentiable point rendering using proximity attention | |
Zhang et al. | SE-DCGAN: a new method of semantic image restoration | |
EP4339826A2 (en) | Machine-learning for topologically-aware cad retrieval |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||