CN118097036B - A point cloud reconstruction system and method suitable for objects containing light-transmitting materials

A point cloud reconstruction system and method suitable for objects containing light-transmitting materials

Info

Publication number
CN118097036B
CN118097036B (application CN202410506512.9A)
Authority
CN
China
Prior art keywords
camera
point
points
point cloud
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410506512.9A
Other languages
Chinese (zh)
Other versions
CN118097036A (en)
Inventor
张润含 (Zhang Runhan)
江昊 (Jiang Hao)
陈勃然 (Chen Boran)
黄冰心 (Huang Bingxin)
黄一学 (Huang Yixue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202410506512.9A priority Critical patent/CN118097036B/en
Publication of CN118097036A publication Critical patent/CN118097036A/en
Application granted granted Critical
Publication of CN118097036B publication Critical patent/CN118097036B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a point cloud reconstruction system and method for objects containing light-transmitting materials. The system includes a turntable, an RGB-D camera, a camera bracket, a target object, and an Apriltag calibration board. The method is implemented on this system and includes: starting the system and automatically collecting data of the target object from different viewing angles; aligning the RGB image with the depth image, extracting the Apriltag corner points from the RGB image, and solving the rotation matrix and translation matrix that map each corner point in the camera coordinate system to the corresponding point in the Apriltag coordinate system, thereby obtaining the extrinsic parameters of the RGB-D camera; merging the aligned RGB and depth image data from different viewing angles and removing the abnormal points caused by the light-transmitting material; and point cloud segmentation and target object extraction. The present invention greatly reduces the depth-information errors caused by strong light transmission, provides an important tool for three-dimensional modeling and analysis, and is suitable for application scenarios requiring high-quality point cloud data.

Description

A point cloud reconstruction system and method suitable for objects containing light-transmitting materials

Technical Field

The present invention belongs to the field of three-dimensional vision technology, and in particular relates to a point cloud reconstruction system and method suitable for objects containing light-transmitting materials.

Background Art

Three-dimensional (3D) vision is an important branch of computer vision that focuses on understanding and processing images and scenes in three-dimensional space, using computers and digital image processing techniques to acquire, analyze, and present information about the three-dimensional world. In recent years, driven by the rapid development of 3D sensing technology and the explosive growth of 3D geometric data, 3D vision research has moved beyond the traditional two-dimensional image space toward the analysis, understanding, and interaction of three-dimensional space.

Existing point cloud reconstruction approaches for objects containing light-transmitting materials mainly include feature-point-matching-based reconstruction systems, voxelization-based reconstruction systems, and 3D scanner systems, each with its own shortcomings. The main defects of feature-point-matching-based systems are: 1) computational complexity: feature point extraction and matching usually require substantial computing resources and time, especially for large-scale point clouds, which limits real-time performance; moreover, regions of light-transmitting material generally contain few feature points, making accurate matching and reconstruction difficult; 2) noise and distortion: feature point matching is easily affected by sensor noise, illumination changes, and distortion, which can lead to mismatches or unstable reconstruction; 3) data dependence: when the light pulses emitted by a depth camera are transmitted through a light-transmitting surface, the collected point cloud contains a large number of abnormal points; these abnormal points cause the feature extraction algorithm to misjudge feature points, ultimately leading to reconstruction failure or low reconstruction accuracy. The main defect of voxelization-based reconstruction is that its accuracy is unsatisfactory. 3D scanner systems perform well, but their main drawbacks are complex installation and high cost. In summary, the market urgently needs a low-cost, easy-to-operate system that can perform 3D point cloud reconstruction of objects containing light-transmitting materials.

Summary of the Invention

To solve the above technical problems, the present invention proposes a low-cost, highly extensible 3D object reconstruction system that can greatly reduce the depth-information errors caused by strongly light-transmitting materials. The system includes a turntable, an RGB-D camera, a camera bracket, a target object, and an Apriltag calibration board.

The turntable supports and rotates the target object, presenting it to the camera at different angles for data collection. The RGB-D camera can capture RGB color images and depth information and is used to collect visual data of the object. The camera bracket holds the RGB-D camera stably, ensuring that it maintains an appropriate position and angle during data collection. Part of the surface of the target object consists of light-transmitting material. The Apriltag calibration board is placed on the turntable to provide positioning and orientation information for the images.

Based on the above point cloud reconstruction system, the present invention also provides a point cloud reconstruction method for objects containing light-transmitting materials, comprising the following steps:

Step 1: start the point cloud reconstruction system and automatically collect RGB images and depth images of the target object from different viewing angles.

Step 2: align the RGB image with the depth image, extract the Apriltag corner points from the RGB image, and solve the rotation matrix and translation matrix that map each corner point in the camera coordinate system to the corresponding point in the Apriltag coordinate system, obtaining the extrinsic parameters of the RGB-D camera.

Step 3: fuse the aligned RGB and depth image data from different viewing angles, remove noise points whose average Euclidean distance to their neighboring points exceeds a set threshold, locate the light-transmitting regions from the brightness gradient, and remove the abnormal points caused by the light-transmitting material, obtaining a point cloud.

Step 4: segment the point cloud obtained after outlier removal with a plane model segmentation algorithm and extract the target object.

Furthermore, after the point cloud reconstruction system is started in step 1, the turntable begins to rotate at a constant speed, so that the target object placed on it also rotates at a constant speed. The RGB-D camera captures an RGB image and a depth image of the target object at regular intervals, as required, to obtain data of the target object from different viewing angles.

Furthermore, in step 2, the alignment coefficients between the RGB image and the depth image are obtained from the physical offset and angle between the RGB sensor and the TOF sensor as actually mounted, achieving alignment between the two; each pair of images automatically captured by the RGB-D camera is aligned using these coefficients. The captured RGB image is converted to grayscale and edge extraction is performed to obtain the corner points of each Apriltag. The pixel coordinate system takes the upper-left corner of the image as the origin, with the u axis pointing horizontally to the right and the v axis pointing vertically downward. Suppose an RGB image contains $n$ Apriltag corner points with pixel coordinates $(u_i, v_i)$, $i = 1, \dots, n$, and corresponding depth values $d_i$ in the depth image. From the ID of each recognized Apriltag corner point, its three-dimensional coordinates in the Apriltag coordinate system are obtained as $(x_i, y_i, 0)$, i.e., with z coordinate 0. The origin of the Apriltag coordinate system is at the lower-left corner of the Apriltag calibration board, the x and y directions coincide with the two sides of the board at its initial position, and the z axis points upward, perpendicular to the board plane; the coordinate system is right-handed.

The origin of the camera coordinate system is at the optical center of the camera. The X axis points to the right of the camera, perpendicular to the optical axis and parallel to the image plane; the Y axis points upward, perpendicular to the optical axis and parallel to the X axis and the image plane; the Z axis points in the viewing direction of the camera, i.e., toward the observed scene, parallel to the optical axis. The corner depth values are obtained from the depth map aligned with the RGB image, and together with the camera intrinsic parameters, the corner coordinates are converted from the pixel coordinate system to the camera coordinate system. The conversion formula is:

$$X = \frac{(u - u_0)\,d}{f_u}, \qquad Y = \frac{(v - v_0)\,d}{f_v}, \qquad Z = d \tag{1}$$

where $u$ and $v$ are the coordinates of the pixel in the pixel coordinate system; $f_u$ and $f_v$ denote the focal lengths of the camera along the u and v directions of the pixel coordinate system; $(u_0, v_0)$ are the pixel coordinates of the principal point; $X$, $Y$, $Z$ are the coordinates of the point in the camera coordinate system; and the $Z$ value of the point in the camera coordinate system equals the depth value $d$.

The camera extrinsic parameters describe the camera pose, consisting of a rotation matrix $R$ and a translation vector $t$. Suppose there are multiple rays, each connecting the camera optical center, a 3D target point, and the projection of that point on the image plane. Denote the points in the camera coordinate system by $p_i$ and the corresponding points in the Apriltag coordinate system by $q_i$. Solving for the camera extrinsics then amounts to solving for the rotation matrix and translation vector that map each point in the camera coordinate system to the corresponding point in the Apriltag coordinate system: $R$ and $t$ are obtained by adjusting them so that the expression $\sum_{i=1}^{n} \| q_i - (R p_i + t) \|_2^2$ is minimized, where $\|\cdot\|_2^2$ denotes the squared 2-norm.

Furthermore, in step 3, the aligned RGB and depth images are first transformed into the Apriltag coordinate system using the camera extrinsic parameters; then, according to the angle the Apriltag has rotated between the current moment and the initial moment, the RGB and depth data captured from that viewing angle are transformed into the Apriltag coordinate system of the initial moment, achieving multi-angle view fusion. For the point cloud fused from multiple RGB and depth images of different viewing angles, the Euclidean distance between each point and the points in its K-neighborhood is computed, together with the mean $\mu$ and standard deviation $\sigma$ of all such distances; the distance threshold is taken as $d_{th} = \mu + \alpha\sigma$, where $\alpha$ is a constant (scale factor). The point cloud is then traversed again, and points whose average Euclidean distance to their K neighbors exceeds $d_{th}$ are removed. For the abnormal points caused by the light-transmitting material, the Harris feature point detection method is first used to extract feature points from the RGB image, and these feature points are divided evenly into N subsets; for each feature point in a subset, the direction of intensity change in its surrounding region is computed, and a ray is drawn from the feature point in the direction of increasing brightness. If a certain number of rays within a subset intersect at a common point, the closed region formed by connecting these points is identified as a light-transmitting material region, and the feature points inside that region are deleted. This judgment-and-deletion operation is applied to the feature points of each subset in turn to remove the abnormal data.

Furthermore, in step 4, the RGB images and depth information of the target object collected from all angles in steps 1-3 are unified into the Apriltag coordinate system of the initial moment; after the abnormal points caused by the light-transmitting material have been removed, multi-angle fused point cloud data is obtained. Each point cloud is then filtered: in the Apriltag coordinate system, points with z coordinate less than 0, or with x or y beyond the calibration board size, are removed, yielding the point cloud of the target object and the turntable. Because the turntable is in contact with the target object and therefore hard to separate, a plane model segmentation algorithm is used: the fused point cloud is divided into voxels and only the center point of each voxel is retained; thresholds are set for the number of iterations, the point-to-plane distance, and the number of inliers; the center points of M voxels are randomly selected and a plane equation is fitted with the RANSAC algorithm to estimate the parameters of the plane model. The distances from the center points of all voxels to the estimated plane are computed, points whose distance is below the threshold are counted as inliers, and the iteration stops when the specified number of iterations is reached or the inlier count reaches the set threshold. Removing the inliers of the plane model removes the turntable point cloud, finally yielding the 3D model of the target object.

Compared with the prior art, the present invention has the following advantages:

1) By automatically rotating the turntable and capturing images at regular intervals, data acquisition is automated, reducing operator intervention and making acquisition more efficient, which helps improve the real-time performance of the point cloud reconstruction system and the consistency of the collected data;

2) By analyzing the Apriltag information in the captured images, the camera extrinsic parameters, including position and orientation, are computed, providing a reliable basis for subsequent data processing and point cloud stitching and improving reconstruction accuracy;

3) By rotating the target object to capture images from different angles and fusing the multi-angle views, a more complete view of the object is obtained, which reduces occlusion and provides more geometric information, thereby improving the accuracy and completeness of the reconstruction;

4) By accounting for the interference of the object's light-transmitting material with the depth measurements and detecting and removing points whose depth values are corrupted by strong light transmission, the accuracy of the point cloud data is ensured and the noise and distortion problems of feature point matching are resolved;

5) Compared with other point cloud reconstruction systems on the market, the system provided by the present invention has a relatively simple structure, uses common equipment, and requires no complicated installation, thereby reducing cost and simplifying operation.

In summary, through these technical innovations the present invention solves various problems of current point cloud reconstruction systems and provides an efficient, accurate, low-cost, and easy-to-operate point cloud reconstruction system and method for objects containing light-transmitting materials.

Brief Description of the Drawings

To illustrate the technical solutions of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a structural diagram of the point cloud reconstruction system for objects containing light-transmitting materials according to an embodiment of the present invention.

FIG. 2 is a flow chart of the point cloud reconstruction method for objects containing light-transmitting materials according to an embodiment of the present invention.

FIG. 3 is a flow chart of the camera extrinsic parameter estimation method according to an embodiment of the present invention.

FIG. 4 is a flow chart of the data processing for light-transmitting materials in an embodiment of the present invention.

FIG. 5 is a flow chart of the point cloud segmentation algorithm in an embodiment of the present invention.

Detailed Description of the Embodiments

To facilitate understanding of the present application, it is described more fully below with reference to the accompanying drawings, in which preferred embodiments are shown. The present application may, however, be implemented in many different forms and is not limited to the embodiments described herein; these embodiments are provided so that the disclosure of the present application will be thorough and complete.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this application belongs. The terms used in the specification are for the purpose of describing specific embodiments only and are not intended to limit the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.

Embodiment 1

As shown in FIG. 1, Embodiment 1 of the present invention provides a point cloud reconstruction system for objects containing light-transmitting materials, including a turntable, an RGB-D camera, a camera bracket, a car model, and an Apriltag calibration board.

The turntable supports and rotates the target object, presenting it to the camera at different angles for data collection. The RGB-D camera captures RGB (color image) and depth (distance) information and is used to collect visual data of the object. The camera bracket holds the RGB-D camera stably, ensuring that it maintains an appropriate position and angle during data collection. Part of the surface of the target object consists of light-transmitting material; in this embodiment the target object is a car model with light-transmitting tempered-glass windows. The Apriltag calibration board is placed on the turntable to provide positioning and orientation information for the images. The entire point cloud reconstruction system must be placed in front of a solid-color background.

Embodiment 2

Based on the object point cloud reconstruction system of Embodiment 1, Embodiment 2 of the present invention further provides a point cloud reconstruction method for objects containing light-transmitting materials, as shown in FIG. 2, comprising the following steps:

Step 1: automatically collect data of the target object from different viewing angles.

After the object point cloud reconstruction system is started, the turntable begins to rotate at a constant speed, causing the car model placed on it to rotate at the same constant speed. The RGB-D camera captures an RGB image and a depth image of the car model at regular intervals, as required, to obtain data of the car model from different viewing angles for subsequent processing.

Step 2: align the RGB image with the depth image, extract the Apriltag corner points from the RGB image, and solve the rotation matrix and translation matrix that map each corner point in the camera coordinate system to the corresponding point in the Apriltag coordinate system, obtaining the extrinsic parameters of the RGB-D camera.

As shown in FIG. 3, Embodiment 2 of the present invention further provides a camera extrinsic parameter estimation method; the specific steps are as follows:

Since there is a physical distance between the RGB sensor and the TOF sensor in the RGB-D camera, the alignment coefficients between the RGB image and the depth image must be obtained from the physical offset and angle between the two sensors as actually mounted, achieving alignment between the two. Each pair of images automatically captured by the RGB-D camera is aligned using these coefficients.

The captured RGB image is converted to grayscale and edge extraction is performed to obtain the corner points of each Apriltag. The pixel coordinate system takes the upper-left corner of the image as the origin, with the u axis pointing horizontally to the right and the v axis pointing vertically downward. Suppose an RGB image contains 15 complete Apriltags, each with 4 corner points, giving 60 Apriltag corner points with pixel coordinates $(u_i, v_i)$, $i = 1, \dots, 60$, and corresponding depth values $d_i$ in the depth image. From the ID of each recognized Apriltag corner point, its three-dimensional coordinates in the Apriltag coordinate system are obtained as $(x_i, y_i, 0)$, i.e., with z coordinate 0. The origin of the Apriltag coordinate system is at the lower-left corner of the Apriltag calibration board, the x and y directions coincide with the two sides of the board at its initial position, and the z axis points upward, perpendicular to the board plane; the coordinate system is right-handed.
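As a minimal sketch of this corner-extraction step (assuming the third-party pupil_apriltags package and OpenCV; the embodiment only specifies grayscaling followed by edge extraction, so this is one possible realization rather than the claimed implementation):

```python
# Hedged sketch of Apriltag corner extraction from one RGB frame.
import cv2
import numpy as np
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")  # tag family is an assumption

def extract_tag_corners(rgb: np.ndarray):
    """Return a list of (tag_id, 4x2 array of pixel corners (u, v)) per tag."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    detections = detector.detect(gray)
    return [(d.tag_id, d.corners) for d in detections]
```

The tag IDs returned here are what the embodiment uses to look up each corner's $(x_i, y_i, 0)$ coordinates on the calibration board.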

The origin of the camera coordinate system is at the optical center of the camera. The X axis points to the right of the camera, perpendicular to the optical axis and parallel to the image plane; the Y axis points upward, perpendicular to the optical axis and parallel to the X axis and the image plane; the Z axis points in the viewing direction of the camera, i.e., toward the observed scene, parallel to the optical axis. The corner depth values are obtained from the depth map aligned with the RGB image, and together with the camera intrinsic parameters, the corner coordinates are converted from the pixel coordinate system to the camera coordinate system. The conversion formula is:

$$X = \frac{(u - u_0)\,d}{f_u}, \qquad Y = \frac{(v - v_0)\,d}{f_v}, \qquad Z = d \tag{1}$$

where $u$ and $v$ are the coordinates of the pixel in the pixel coordinate system; $f_u$ and $f_v$ denote the focal lengths of the camera along the u and v directions of the pixel coordinate system; $(u_0, v_0)$ are the pixel coordinates of the principal point; $X$, $Y$, $Z$ are the coordinates of the point in the camera coordinate system; and the $Z$ value of the point in the camera coordinate system equals the depth value $d$.
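Equation (1) is the standard pinhole back-projection; the following minimal numpy sketch applies it per corner (the function name and argument order are illustrative):

```python
import numpy as np

def pixel_to_camera(u, v, d, fu, fv, u0, v0):
    """Back-project pixel (u, v) with depth d into the camera frame, eq. (1)."""
    X = (u - u0) * d / fu
    Y = (v - v0) * d / fv
    Z = d
    return np.array([X, Y, Z])
```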

The camera extrinsic parameters describe the camera pose, consisting of a rotation matrix $R$ and a translation vector $t$. Suppose there are multiple rays, each connecting the camera optical center, a 3D target point, and the projection of that point on the image plane. Denote the points in the camera coordinate system by $p_i$ and the corresponding points in the Apriltag coordinate system by $q_i$. Solving for the camera extrinsics then amounts to solving for the rotation matrix and translation vector that map each point in the camera coordinate system to the corresponding point in the Apriltag coordinate system: $R$ and $t$ are obtained by adjusting them so that $\sum_{i=1}^{n} \| q_i - (R p_i + t) \|_2^2$ is minimized, where $\|\cdot\|_2^2$ denotes the squared 2-norm. This embodiment solves the extrinsic parameters with the nonlinear optimization technique of Bundle Adjustment (BA); BA improves estimation accuracy by minimizing the reprojection error and achieves a high-precision transformation.
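The embodiment solves this with Bundle Adjustment; for the point-to-point objective $\sum_i \| q_i - (R p_i + t) \|_2^2$ itself, a standard closed-form alternative is the SVD-based Kabsch solution, sketched here under the assumption of known one-to-one correspondences:

```python
import numpy as np

def rigid_transform(p: np.ndarray, q: np.ndarray):
    """Closed-form R, t minimizing sum ||q_i - (R p_i + t)||^2 (Kabsch).

    p, q: (n, 3) arrays of corresponding points in the camera frame and the
    Apriltag frame. A standard alternative to the BA used in the embodiment.
    """
    p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
    H = (p - p_mean).T @ (q - q_mean)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                          # enforce det(R) = +1
    t = q_mean - R @ p_mean
    return R, t
```

In practice such a closed-form pose is often used to initialize the BA optimization.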

Step 3: fuse the aligned RGB and depth image data from different viewing angles, remove noise points whose average Euclidean distance to their neighboring points exceeds the set threshold, determine the specific position of the light-transmitting region from the brightness gradient, and remove the abnormal points caused by the light-transmitting material.

As shown in FIG. 4, Embodiment 2 of the present invention further provides a data processing method for light-transmitting materials, as follows:

First, the aligned RGB and depth images are transformed into the Apriltag coordinate system using the camera extrinsic parameters; then, according to the angle the Apriltag has rotated between the current moment and the initial moment, the RGB and depth data captured from that viewing angle are transformed into the Apriltag coordinate system of the initial moment, achieving multi-angle view fusion. Since the light-transmitting windows of the car model may interfere with the depth information collected by the TOF camera, and exploiting the fact that the car model is a continuous closed surface, the Euclidean distance between each point in the fused multi-view point cloud and the points in its K-neighborhood is computed (K = 20 in this embodiment; the value can be chosen as appropriate in practice), together with the mean $\mu$ and standard deviation $\sigma$ of all such distances. The distance threshold is taken as $d_{th} = \mu + \alpha\sigma$, where $\alpha$ is a constant (scale factor). The point cloud is then traversed again, and points whose average Euclidean distance to their 20 neighbors exceeds $d_{th}$ are removed.
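This $\mu + \alpha\sigma$ neighborhood filter corresponds to the statistical outlier removal available in common point cloud libraries; a hedged Open3D sketch follows, where K maps to nb_neighbors, $\alpha$ to std_ratio, and fused_points stands for the merged multi-view cloud (both names are illustrative):

```python
import numpy as np
import open3d as o3d

def remove_distance_outliers(fused_points: np.ndarray, k: int = 20, alpha: float = 2.0):
    """Drop points whose mean distance to their k neighbors exceeds mu + alpha*sigma."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(fused_points)
    filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=k, std_ratio=alpha)
    return np.asarray(filtered.points), kept_idx
```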

For the abnormal points caused by the light-transmitting material, the Harris feature point detection method is first used to extract feature points from the RGB image, and these feature points are divided evenly into N subsets (N = 10 in this embodiment; the value can be chosen as appropriate in practice). For each feature point in a subset, the direction of intensity change in its surrounding region is computed, and a ray is drawn from the feature point in the direction of increasing brightness. If a certain number of rays within a subset intersect at a common point, the closed region formed by connecting these points is identified as a light-transmitting material region, and the feature points inside that region are deleted. This judgment-and-deletion operation is applied to the feature points of each subset in turn to remove the abnormal data.
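A hedged sketch of the first half of this procedure follows: it extracts Harris corners and, for each, a unit ray along the direction of increasing brightness. The subsequent ray-intersection consensus test is described in the text above; its exact form (e.g., the pixel tolerance and vote count) is not fixed by the embodiment, so it is omitted here.

```python
import cv2
import numpy as np

def harris_corners(gray: np.ndarray, thresh: float = 0.01):
    """Harris corner pixels as an (n, 2) array of (x, y); thresh is illustrative."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(resp > thresh * resp.max())
    return np.stack([xs, ys], axis=1).astype(np.float64)

def brightness_rays(gray: np.ndarray, corners: np.ndarray):
    """Unit direction of increasing brightness at each corner (ray origin = corner)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)   # d(brightness)/dx
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)   # d(brightness)/dy
    dirs = []
    for x, y in corners.astype(int):
        g = np.array([gx[y, x], gy[y, x]])
        n = np.linalg.norm(g)
        dirs.append(g / n if n > 1e-9 else np.array([1.0, 0.0]))
    return np.asarray(dirs)
```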

Step 4: point cloud segmentation and target object extraction.

Through steps 1-3, RGB images and depth information of the target object are continuously collected from all angles and unified into the Apriltag coordinate system of the initial moment; after the data of the abnormal points caused by the light-transmitting material have been removed, multi-angle fused point cloud data is obtained.

To extract the car model from the scene, each point cloud is filtered: in the Apriltag coordinate system, points with z coordinate less than 0, or with x or y beyond the calibration board size, are removed, yielding the point cloud of the target object and the turntable. Because the turntable is in contact with the car model and therefore hard to separate, a plane model segmentation algorithm is used, as shown in FIG. 5. The fused point cloud is divided into voxels, and only the center point of each voxel is retained. Thresholds are set for the number of iterations, the point-to-plane distance, and the number of inliers (in this embodiment the maximum number of iterations is 1000, the distance threshold is 0.01, and the inlier count threshold is 30% of the original number of points). The center points of M voxels are randomly selected (M = 50 in this embodiment; the value can be chosen as appropriate in practice), and a plane equation is fitted with the RANSAC algorithm to estimate the parameters of the plane model. Let $P = \{p_1, p_2, \dots, p_M\}$ be the set of the M voxel center points, let $p_i$ be any point of $P$, and let $\bar{p}$ be the centroid of the point set, i.e. $\bar{p} = \frac{1}{M}\sum_{i=1}^{M} p_i$. Assume the fitted target plane passes through $\bar{p}$ and that its unit normal vector is $n$. The least squares solution is the $n$ that minimizes the sum of squared point-to-plane distances over $P$, i.e. $\sum_{i=1}^{M} \big( (p_i - \bar{p})^{T} n \big)^2$. Defining the matrix $A = [\,p_1 - \bar{p},\; p_2 - \bar{p},\; \dots,\; p_M - \bar{p}\,]$ (of size $3 \times M$), this becomes:

$$\min_{n} \;\big\| A^{T} n \big\|^{2}, \qquad \text{s.t.}\; \|n\| = 1 \tag{2}$$

where $\|\cdot\|$ denotes the norm and the superscript $T$ denotes the matrix transpose.

Performing singular value decomposition on the matrix $A$ gives $A = U \Sigma V^{T}$; substituting into equation (2) yields:

$$\big\| A^{T} n \big\|^{2} = \big\| V \Sigma^{T} U^{T} n \big\|^{2} = \big\| \Sigma^{T} U^{T} n \big\|^{2} \tag{3}$$

where $V$ is an $M \times M$ unitary matrix, $U$ is a $3 \times 3$ unitary matrix, and $\Sigma$ is a $3 \times M$ matrix whose main-diagonal entries are the singular values and whose remaining entries are all 0, i.e. $\Sigma_{ii} = \sigma_i$, $i = 1, 2, 3$.

Writing $\Sigma \Sigma^{T}$ as a $3 \times 3$ matrix $W$, i.e. $W = \Sigma \Sigma^{T} = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \sigma_3^2)$, equation (3) becomes:

$$\big\| A^{T} n \big\|^{2} = (U^{T} n)^{T}\, W \,(U^{T} n) \tag{4}$$

where $\sigma_1$, $\sigma_2$, $\sigma_3$ are the singular values, with $\sigma_1 \ge \sigma_2 \ge \sigma_3$, and $\sigma_1^2$, $\sigma_2^2$, $\sigma_3^2$ are the diagonal elements of the matrix $W$.

Since $\|U^{T} n\| = \|n\| = 1$, the quadratic form in equation (4) attains its minimum $\sigma_3^2$ when $U^{T} n = (0, 0, 1)^{T}$; the fitted plane normal vector $n$ is therefore the third column of the matrix $U$.
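The derivation can be checked numerically; a short numpy sketch that returns the centroid and the least-squares normal (the third column of $U$ for the $3 \times M$ matrix $A$):

```python
import numpy as np

def plane_normal(points: np.ndarray):
    """points: (M, 3) voxel centers; returns (centroid, unit normal n)."""
    centroid = points.mean(axis=0)
    A = (points - centroid).T          # 3 x M, columns p_i - p_bar
    U, _, _ = np.linalg.svd(A)         # A = U Sigma V^T
    return centroid, U[:, 2]           # n = third column of U
```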

The distances from the center points of all voxels to the estimated plane are computed; points whose distance is below the threshold are counted as inliers, and the iteration stops when the specified number of iterations is reached or the inlier count reaches the set threshold. Removing the inliers of the plane model removes the turntable point cloud, finally yielding the 3D model of the target object.
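For comparison, the voxel-downsampling and RANSAC plane steps map closely onto stock Open3D operations; note that Open3D's segment_plane samples 3 points per hypothesis rather than fitting M = 50 voxel centers by least squares as described above, so the sketch below is a stock approximation using this embodiment's thresholds (voxel_size is illustrative):

```python
import open3d as o3d

def remove_turntable(pcd: o3d.geometry.PointCloud, voxel_size: float = 0.005):
    """Voxelize, RANSAC-fit the dominant plane, and drop its inliers (turntable)."""
    down = pcd.voxel_down_sample(voxel_size=voxel_size)   # one point per voxel
    plane, inliers = down.segment_plane(distance_threshold=0.01,
                                        ransac_n=3,
                                        num_iterations=1000)
    return down.select_by_index(inliers, invert=True)     # object without turntable
```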

In specific implementations, the method proposed by the technical solution of the present invention can be run automatically by those skilled in the art using computer software technology. System devices implementing the method, such as a computer-readable storage medium storing the corresponding computer program of the technical solution and computer equipment running the corresponding computer program, should also fall within the protection scope of the present invention.

The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitute them in similar ways without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (8)

1. A point cloud reconstruction method for objects containing light-transmitting materials, characterized in that the object point cloud reconstruction process is implemented on a point cloud reconstruction system comprising a turntable, an RGB-D camera, a camera bracket, a target object, and an Apriltag calibration board;

the turntable supports and rotates the target object, presenting it to the camera at different angles for data collection; the RGB-D camera is capable of capturing RGB color images and depth information and is used to collect visual data of the object; the camera bracket holds the RGB-D camera stably; part of the surface of the target object consists of light-transmitting material; the Apriltag calibration board is placed on the turntable to provide positioning and orientation information for the images; the entire point cloud reconstruction system must be placed in front of a solid-color background;

the object point cloud reconstruction process comprises the following steps:

step 1: starting the point cloud reconstruction system and automatically collecting RGB images and depth images of the target object from different viewing angles;

step 2: aligning the RGB image with the depth image, extracting the Apriltag corner points from the RGB image, and solving the rotation matrix and translation matrix that map each corner point in the camera coordinate system to the corresponding point in the Apriltag coordinate system, obtaining the extrinsic parameters of the RGB-D camera;

step 3: fusing the aligned RGB and depth image data from different viewing angles, removing noise points whose average Euclidean distance to their neighboring points exceeds a set threshold, locating the light-transmitting regions from the brightness gradient, and removing the abnormal points caused by the light-transmitting material, obtaining a point cloud;

wherein the aligned RGB and depth images are first transformed into the Apriltag coordinate system using the camera extrinsic parameters; then, according to the angle the Apriltag has rotated between the current moment and the initial moment, the RGB and depth data captured from that viewing angle are transformed into the Apriltag coordinate system of the initial moment, achieving multi-angle view fusion; for the point cloud fused from multiple RGB and depth images of different viewing angles, the Euclidean distance between each point and the points in its K-neighborhood is computed, together with the mean $\mu$ and standard deviation $\sigma$ of all such distances, and the distance threshold is taken as $d_{th} = \mu + \alpha\sigma$, where $\alpha$ is a constant (scale factor); the point cloud is traversed again and points whose average Euclidean distance to their K neighbors exceeds $d_{th}$ are removed; for the abnormal points caused by the light-transmitting material, the Harris feature point detection method is first used to extract feature points from the RGB image, and these feature points are divided evenly into N subsets; the direction of intensity change in the region around each feature point in a subset is then computed, and a ray is drawn from each feature point in the direction of increasing brightness; if a certain number of rays within a subset intersect at a common point, the closed region formed by connecting these points is identified as a light-transmitting material region and the feature points inside that region are deleted; this judgment-and-deletion operation is applied to the feature points of each subset in turn to remove the abnormal data;

step 4: segmenting the point cloud obtained after outlier removal with a plane model segmentation algorithm and extracting the target object.

2. The point cloud reconstruction method for objects containing light-transmitting materials according to claim 1, characterized in that: after the point cloud reconstruction system is started in step 1, the turntable begins to rotate at a constant speed so that the target object placed on it also rotates at a constant speed; the RGB-D camera captures an RGB image and a depth image of the target object at regular intervals, as required, to obtain data of the target object from different viewing angles.

3. The point cloud reconstruction method for objects containing light-transmitting materials according to claim 1, characterized in that: in step 2, the alignment coefficients between the RGB image and the depth image are obtained from the physical offset and angle between the RGB sensor and the TOF sensor as actually mounted, achieving alignment between the two, and each pair of images automatically captured by the RGB-D camera is aligned using these coefficients; the captured RGB image is converted to grayscale and edge extraction is performed to obtain each Apriltag corner point; the pixel coordinate system takes the upper-left corner of the image as the origin, with the u axis pointing horizontally to the right and the v axis pointing vertically downward; assuming an RGB image contains $n$ Apriltag corner points with pixel coordinates $(u_i, v_i)$, $i = 1, \dots, n$, and corresponding depth values $d_i$ in the depth image, the three-dimensional coordinates of each recognized corner point in the Apriltag coordinate system are obtained from its ID as $(x_i, y_i, 0)$, i.e., with z coordinate 0; the origin of the Apriltag coordinate system is at the lower-left corner of the Apriltag calibration board, the x and y directions coincide with the two sides of the board at its initial position, and the z axis points upward, perpendicular to the board plane; the coordinate system is right-handed.

4. The point cloud reconstruction method for objects containing light-transmitting materials according to claim 3, characterized in that: in step 2, the origin of the camera coordinate system is at the optical center of the camera; the X axis points to the right of the camera, perpendicular to the optical axis and parallel to the image plane; the Y axis points upward, perpendicular to the optical axis and parallel to the X axis and the image plane; the Z axis points in the viewing direction of the camera, i.e., toward the observed scene, parallel to the optical axis; the corner depth values are obtained from the depth map aligned with the RGB image, and together with the camera intrinsic parameters, the corner coordinates are converted from the pixel coordinate system to the camera coordinate system, the conversion formula being:

$$X = \frac{(u - u_0)\,d}{f_u}, \qquad Y = \frac{(v - v_0)\,d}{f_v}, \qquad Z = d \tag{1}$$

where $u$ and $v$ are the coordinates of the pixel in the pixel coordinate system; $f_u$ and $f_v$ denote the focal lengths of the camera along the u and v directions of the pixel coordinate system; $(u_0, v_0)$ are the pixel coordinates of the principal point; $X$, $Y$, $Z$ are the coordinates of the point in the camera coordinate system; and the $Z$ value of the point in the camera coordinate system equals the depth value $d$;

the camera extrinsic parameters describe the camera pose, consisting of a rotation matrix $R$ and a translation vector $t$; assuming there are multiple rays, each connecting the camera optical center, a 3D target point, and the projection of that point on the image plane, and denoting the points in the camera coordinate system by $p_i$ and the corresponding points in the Apriltag coordinate system by $q_i$, solving for the camera extrinsics amounts to solving for the rotation matrix and translation vector that map each point in the camera coordinate system to the corresponding point in the Apriltag coordinate system; $R$ and $t$ are obtained by adjusting them so that the expression $\sum_{i=1}^{n} \| q_i - (R p_i + t) \|_2^2$ is minimized, where $\|\cdot\|_2^2$ denotes the squared 2-norm.

5. The point cloud reconstruction method for objects containing light-transmitting materials according to claim 1, characterized in that: in step 4, the RGB images and depth information of the target object continuously collected in step 1 from all angles are unified into the Apriltag coordinate system of the initial moment, and after the data of the abnormal points caused by the light-transmitting material have been removed, multi-angle fused point cloud data is obtained; each point cloud is filtered, removing, in the Apriltag coordinate system, points with z coordinate less than 0 or with x or y beyond the calibration board size, yielding the point cloud of the target object and the turntable; because the turntable is in contact with the target object and therefore hard to separate, a plane model segmentation algorithm is used: the fused point cloud is divided into voxels, only the center point of each voxel is retained, thresholds are set for the number of iterations, the point-to-plane distance, and the number of inliers, the center points of M voxels are randomly selected, and a plane equation is fitted with the RANSAC algorithm to estimate the parameters of the plane model; the distances from the center points of all voxels to the estimated plane are computed, points whose distance is below the threshold are counted as inliers, and the iteration stops when the specified number of iterations is reached or the inlier count reaches the set threshold; removing the inliers of the plane model removes the turntable point cloud, finally yielding the 3D model of the target object.

6. The point cloud reconstruction method for objects containing light-transmitting materials according to claim 5, characterized in that: in step 4, the calculation of fitting the plane equation with the RANSAC algorithm to estimate the parameters of the plane model proceeds as follows:

let $P = \{p_1, p_2, \dots, p_M\}$ be the set of the M voxel center points, let $p_i$ be any point of $P$, and let $\bar{p}$ be the centroid of the point set, i.e. $\bar{p} = \frac{1}{M}\sum_{i=1}^{M} p_i$; assume the fitted target plane passes through $\bar{p}$ and that its unit normal vector is $n$; the least squares solution is the $n$ that minimizes the sum of squared point-to-plane distances $\sum_{i=1}^{M} \big( (p_i - \bar{p})^{T} n \big)^2$; defining the matrix $A = [\,p_1 - \bar{p},\; p_2 - \bar{p},\; \dots,\; p_M - \bar{p}\,]$, then:

$$\min_{n} \;\big\| A^{T} n \big\|^{2}, \qquad \text{s.t.}\; \|n\| = 1 \tag{2}$$

where $\|\cdot\|$ denotes the norm and the superscript $T$ denotes the matrix transpose;

performing singular value decomposition on the matrix $A$ gives $A = U \Sigma V^{T}$; substituting into equation (2) yields:

$$\big\| A^{T} n \big\|^{2} = \big\| V \Sigma^{T} U^{T} n \big\|^{2} = \big\| \Sigma^{T} U^{T} n \big\|^{2} \tag{3}$$

where $V$ is an $M \times M$ unitary matrix, $U$ is a $3 \times 3$ unitary matrix, and $\Sigma$ is a $3 \times M$ matrix whose main-diagonal entries are the singular values and whose remaining entries are all 0, i.e. $\Sigma_{ii} = \sigma_i$, $i = 1, 2, 3$;

writing $\Sigma \Sigma^{T}$ as a $3 \times 3$ matrix $W$, i.e. $W = \Sigma \Sigma^{T} = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \sigma_3^2)$, equation (3) becomes:

$$\big\| A^{T} n \big\|^{2} = (U^{T} n)^{T}\, W \,(U^{T} n) \tag{4}$$

where $\sigma_1$, $\sigma_2$, $\sigma_3$ are the singular values, with $\sigma_1 \ge \sigma_2 \ge \sigma_3$, and $\sigma_1^2$, $\sigma_2^2$, $\sigma_3^2$ are the diagonal elements of the matrix $W$;

the fitted plane normal vector finally obtained is the third column of the matrix $U$.

7. A point cloud reconstruction device for objects containing light-transmitting materials, characterized by comprising a processor and a memory, the memory storing program instructions and the processor calling the program instructions in the memory to execute the point cloud reconstruction method for objects containing light-transmitting materials according to any one of claims 1-6.

8. A readable storage medium, characterized in that a computer program is stored on the readable storage medium and, when the computer program is executed, the point cloud reconstruction method for objects containing light-transmitting materials according to any one of claims 1-6 is implemented.
CN202410506512.9A 2024-04-25 2024-04-25 A point cloud reconstruction system and method suitable for objects containing light-transmitting materials Active CN118097036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410506512.9A CN118097036B (en) 2024-04-25 2024-04-25 A point cloud reconstruction system and method suitable for objects containing light-transmitting materials

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410506512.9A CN118097036B (en) 2024-04-25 2024-04-25 A point cloud reconstruction system and method suitable for objects containing light-transmitting materials

Publications (2)

Publication Number Publication Date
CN118097036A CN118097036A (en) 2024-05-28
CN118097036B true CN118097036B (en) 2024-07-12

Family

ID=91157675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410506512.9A Active CN118097036B (en) 2024-04-25 2024-04-25 A point cloud reconstruction system and method suitable for objects containing light-transmitting materials

Country Status (1)

Country Link
CN (1) CN118097036B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952341B (en) * 2017-03-27 2020-03-31 中国人民解放军国防科学技术大学 Underwater scene three-dimensional point cloud reconstruction method and system based on vision
WO2021176417A1 (en) * 2020-03-06 2021-09-10 Yembo, Inc. Identifying flood damage to an indoor environment using a virtual representation
CN114782628A (en) * 2022-04-25 2022-07-22 西安理工大学 Indoor real-time three-dimensional reconstruction method based on depth camera
CN115880443B (en) * 2023-02-28 2023-06-06 武汉大学 Method and device for implicit surface reconstruction of transparent objects
CN116758223A (en) * 2023-07-06 2023-09-15 哈尔滨工业大学 Three-dimensional multispectral point cloud reconstruction method, system and equipment based on dual-angle multispectral images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Seeing Glass: Joint Point Cloud and Depth Completion for Transparent Objects";Haoping Xu等;《5th Conference on Robot Learning (CoRL 2021), London, UK.》;20210930;4 *
"双目视差图到点云的转换(原理+代码)";Edvincecilia;《https://blog.csdn.net/qq_41037856/article/details/134701111》;20240222;1-3 *

Also Published As

Publication number Publication date
CN118097036A (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN112950667B (en) Video labeling method, device, equipment and computer readable storage medium
CN110009727B (en) Automatic reconstruction method and system for indoor three-dimensional model with structural semantics
CN113052066B (en) Multi-mode fusion method based on multi-view and image segmentation in three-dimensional target detection
CN107833181A (en) A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision
CN102507592A (en) Fly-simulation visual online detection device and method for surface defects
CN115359021A (en) Target positioning detection method based on laser radar and camera information fusion
CN110266938A (en) Intelligent shooting method and device for substation equipment based on deep learning
WO2020207172A1 (en) Method and system for optical monitoring of unmanned aerial vehicles based on three-dimensional light field technology
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN117197010A (en) Method and device for carrying out workpiece point cloud fusion in laser cladding processing
CN114663344A (en) Train wheel set tread defect identification method and device based on image fusion
CN116958449B (en) Urban scene three-dimensional modeling method, device and electronic equipment
CN111325828A (en) Three-dimensional face acquisition method and device based on three-eye camera
CN114273826A (en) Automatic identification method of welding position for large workpieces to be welded
CN115712940A (en) BIM modeling-based visual power grid fault identification and positioning method and device
CN115564661B (en) Automatic repairing method and system for building glass area elevation
WO2022011560A1 (en) Image cropping method and apparatus, electronic device, and storage medium
CN110969650B (en) Intensity image and texture sequence registration method based on central projection
CN114283199B (en) Dynamic scene-oriented dotted line fusion semantic SLAM method
CN118097036B (en) A point cloud reconstruction system and method suitable for objects containing light-transmitting materials
CN115035168B (en) Photovoltaic panel multi-source image registration method, device and system based on multiple constraints
CN115131459B (en) Plan layout reconstruction method and device
CN116503567A (en) Intelligent modeling management system based on AI big data
CN116524382A (en) Bridge swivel closure accuracy inspection method system and equipment
CN112991419B (en) Disparity data generation method, device, computer equipment and storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant