WO2020199439A1 - Single- and dual-camera hybrid measurement-based three-dimensional point cloud computing method - Google Patents


Info

Publication number
WO2020199439A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
point
projector
image
point cloud
Prior art date
Application number
PCT/CN2019/098678
Other languages
French (fr)
Chinese (zh)
Inventor
邢威
张楠楠
孙博
郭磊
Original Assignee
易思维天津科技有限公司
Priority date
Filing date
Publication date
Application filed by 易思维天津科技有限公司
Publication of WO2020199439A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504 Calibration devices
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré

Definitions

  • The invention relates to the field of machine vision inspection, and in particular to a three-dimensional point cloud computation method based on hybrid single- and dual-camera (monocular-binocular) measurement.
  • The binocular vision measurement system uses two cameras to photograph a target object from different angles and reconstructs the target's three-dimensional information in space, thereby detecting the object's shape; it is widely applied in the field of vision measurement.
  • The 3D scanning measurement system adopts a binocular-camera-plus-projector structure and acquires 3D point cloud information through fringe (raster) projection. Combined with robots, guide rails, turntables, and other motion mechanisms, it achieves large-range flexible measurement and high-density point cloud measurement that faithfully restores the rich surface detail of an object. Owing to its high efficiency, high accuracy, large data volume, low cost, and low environmental requirements, it is gradually replacing coordinate measuring machines as the mainstream tool for measuring parts and large workpieces.
  • The left and right cameras constitute a binocular stereo vision system.
  • The measurement range is the common field of view of the two cameras. Owing to occlusion, overexposure, or similar factors, if the field of view of either camera fails to capture the complete surface of the measured object, the uncaptured portion of the surface cannot be solved into a 3D point cloud or measured, leaving the measurement range incomplete.
  • To address this, the present invention proposes a three-dimensional point cloud computation method based on hybrid single- and dual-camera measurement.
  • In the three-dimensional scanning measurement system, binocular stereo vision is used to acquire the 3D point cloud of the measured object; in addition, when one camera's field of view is blocked or its image is overexposed, the other camera together with the projector can solve the 3D point cloud of the occluded part, ensuring the completeness of the measured object's 3D point cloud data.
  • Beyond the common field of view of the two cameras, 3D point cloud solving within the common field of view of a single camera and the projector is also added, which is of practical value.
  • A three-dimensional point cloud computation method based on single- and dual-camera hybrid measurement includes the following steps:
  • The projector projects sinusoidal fringes onto the surface of the object to be measured.
  • When the pixel coordinates of the intersection point lie on the image plane of the second camera, the intersection point is marked as the second matching point, and the three-dimensional point cloud (x i , y i , z i ) is computed from the effective image point u i and the second matching point;
  • In step S3, step S2 is repeated to traverse all effective image points and complete the computation of the 3D point cloud of the measured object's surface.
  • In step S2, the three-dimensional point cloud (x i , y i , z i ) of the point is computed from the effective image point and the second matching point as follows:
  • In step S2, the three-dimensional point cloud (x i , y i , z i ) of the point is computed from the effective image point u i and the first matching point as follows:
  • F lr is the fundamental matrix between the first camera and the second camera.
  • F dr is the fundamental matrix between the projector and the second camera.
  • A three-dimensional point cloud is computed from the effective image point v j and the third matching point.
  • The absolute phase is solved by phase shifting combined with multi-frequency heterodyne, or by Gray coding; the absolute phase comprises the horizontal and vertical phases.
  • The calibration method is Zhang's calibration method or the bundle adjustment calibration method.
  • Zhang's calibration method treats the calibration board as a plane and uses only the x- and y-direction information of the marker points during calibration; since a real calibration board is not an ideal plane, the calibration result is biased, with a reprojection error of about 0.1 pixel. The bundle adjustment calibration method solves for the 3D coordinates of points in space, treats the camera parameters as unknowns in the observation equations, iteratively optimizes them, and finally obtains the camera parameters; its reprojection error is about 0.02 pixel.
  • Using the left and right cameras, images of the calibration board are collected; the board carries multiple standard circles, whose center coordinates in the left- and right-camera images are computed, and the matching relationship between image points of the two cameras is established.
  • The projector projects multiple horizontal and vertical fringe images onto the calibration board while synchronously triggering the left and right cameras to collect multiple images of the projected board.
  • The calibration results include the intrinsic parameter matrices, distortion coefficients, and extrinsic parameter matrices of the projector and the left and right cameras; the error equation of the bundle adjustment is formulated based on the collinearity equations:
  • V i = At + Dk - L
  • where A is the matrix of partial derivatives with respect to the corrections of the intrinsic parameter matrices of the projector and the left and right cameras, t is the vector of those intrinsic-parameter corrections, k is the vector of corrections of the extrinsic parameter matrices of the projector and the left and right cameras, D is the matrix of partial derivatives with respect to those extrinsic-parameter corrections, and L is the difference between the 2D coordinates on the image planes of the left camera, right camera, and projector and the 3D coordinates of the calibration board projected onto those image planes;
  • q takes a value of 0.05 to 0.2.
  • By obtaining the transformation relationship between a single camera and the projector, the method of the present invention solves the problem that the binocular system cannot complete the point cloud computation when one camera's field of view is blocked or its image is overexposed.
  • Compared with a purely binocular solution, the method obtains more points and acquires more surface information of the measured object at once, which has practical value.
  • The bundle adjustment calibration method is used for calibration, making the calibration result more accurate.
  • Figure 1 is a comparison diagram of the number of points obtained by the traditional method and the method of the present invention in the test experiment.
  • f xl , f yl are the focal lengths of the left camera and (u 0l , v 0l ) its principal point coordinates; f xr , f yr are the focal lengths of the right camera and (u 0r , v 0r ) its principal point coordinates;
  • The left and right cameras collect the modulated fringe images to obtain the left-camera and right-camera images.
  • The absolute phases of the left-camera and right-camera images are solved by phase shifting combined with multi-frequency heterodyne, or alternatively by the Gray code method.
  • The left and right cameras and the projector are calibrated either by Zhang's calibration method or by the bundle adjustment calibration method.

Abstract

A single- and dual-camera hybrid measurement-based three-dimensional point cloud computing method. For a three-dimensional scanning measurement system, the two cameras and the projector are calibrated first, and the intrinsic parameters, extrinsic parameters, and fundamental matrices of the three are acquired. When the point cloud of the surface of an object to be measured is computed, point cloud information is acquired by binocular epipolar-line matching; when the field of view of one of the cameras is blocked or overexposed, the three-dimensional point cloud of the occluded portion can be resolved by the remaining single camera and the projector, ensuring the integrity of the three-dimensional point cloud data of said object. Moreover, beyond the common field of view, three-dimensional point cloud resolving within the independent field of view of a single camera is added, so that more surface information of said object can be obtained in a single acquisition, which has practical value.

Description

基于单双目混合测量的三维点云计算方法 Three-dimensional point cloud computation method based on single- and dual-camera hybrid measurement
技术领域 Technical Field
本发明涉及机器视觉检测领域,具体涉及一种基于单双目混合测量的三维点云计算方法。The invention relates to the field of machine vision detection, in particular to a three-dimensional point cloud computing method based on mono-binocular hybrid measurement.
背景技术Background technique
双目视觉测量系统是利用两个摄像机从不同角度对目标物体拍照进行图像采集，并在三维空间中重构目标的三维信息，以实现物体形貌的检测，在视觉测量领域有着广泛的应用，其中，一个重要的应用就是三维扫描测量系统，该系统采用双目相机+投影仪的结构，通过光栅投影的方式实现三维点云信息获取，搭载机器人、导轨、转台等运动机构实现大范围柔性测量，高密度点云测量，真实还原物体的丰富表面细节，因其效率高、精度高、数据量大、成本低、环境要求低等特点，正逐步替代三坐标系统，成为主流的实现零部件及大尺寸工件测量的工具。The binocular vision measurement system uses two cameras to photograph a target object from different angles and reconstructs the target's three-dimensional information in space, thereby detecting the object's shape; it is widely applied in the field of vision measurement. One important application is the 3D scanning measurement system, which adopts a binocular-camera-plus-projector structure and acquires 3D point cloud information through fringe (raster) projection. Combined with robots, guide rails, turntables, and other motion mechanisms, it achieves large-range flexible measurement and high-density point cloud measurement that faithfully restores the rich surface detail of an object. Owing to its high efficiency, high accuracy, large data volume, low cost, and low environmental requirements, it is gradually replacing coordinate measuring machines as the mainstream tool for measuring parts and large workpieces.
现有的三维扫描测量系统，左、右相机构成双目立体视觉系统，测量范围为两个相机的公共视场，因遮挡或者过曝光等因素，只要其中一部相机的视场无法采集完整的被测物表面信息，未被采集的局部表面将无法解算三维点云，进行三维测量，导致测量范围不完整。In the existing 3D scanning measurement system, the left and right cameras constitute a binocular stereo vision system whose measurement range is the common field of view of the two cameras. Owing to occlusion, overexposure, or similar factors, if the field of view of either camera fails to capture the complete surface of the measured object, the uncaptured portion of the surface cannot be solved into a 3D point cloud or measured, leaving the measurement range incomplete.
发明内容Summary of the invention
针对上述问题，本发明提出一种基于单双目混合测量的三维点云计算方法，对于三维扫描测量系统，不仅利用了双目立体视觉的方法获取被测物的三维点云信息，并且，当其中一部相机视场受阻或拍摄过曝时，能够通过另一部相机和投影仪解算被遮挡部分的三维点云，保障了被测物三维点云数据的完整性，同时，增加了除去双相机公共视场之外，单相机和投影仪公共视场内三维点云解算，具有实用性。To address the above problem, the present invention proposes a three-dimensional point cloud computation method based on hybrid single- and dual-camera measurement. For the three-dimensional scanning measurement system, the method not only uses binocular stereo vision to acquire the 3D point cloud of the measured object, but also, when one camera's field of view is blocked or its image is overexposed, uses the other camera together with the projector to solve the 3D point cloud of the occluded part, ensuring the completeness of the measured object's 3D point cloud data. In addition, beyond the common field of view of the two cameras, 3D point cloud solving within the common field of view of a single camera and the projector is added, which is of practical value.
技术方案如下:The technical solutions are as follows:
一种基于单双目混合测量的三维点云计算方法,包括以下步骤:A three-dimensional point cloud calculation method based on single and binocular hybrid measurement, including the following steps:
S1、利用标定方法分别对三维扫描测量系统中的左、右相机和投影仪进行标定;S1. Use calibration methods to calibrate the left and right cameras and projectors in the 3D scanning measurement system;
标定完成后,投影仪将正弦条纹投影到被测物表面,左、右相机采集被调制的条纹图像,得到左相机图像和右相机图像,解算绝对相位,择一图像记为第一图像,将第一图像上绝对相位值为0的像点标记为无效像点,其余像点标记为有效像点u i,i=1,2,3……n,n为有效像点的个数; After the calibration is completed, the projector projects the sinusoidal fringes onto the surface of the object to be measured. The left and right cameras collect the modulated fringe images to obtain the left camera image and the right camera image, calculate the absolute phase, and select one image as the first image. Mark the pixels with an absolute phase value of 0 on the first image as invalid pixels, and mark the remaining pixels as valid pixels u i , i=1, 2, 3...n, n is the number of valid pixels;
将采集第一图像的相机标记为第一相机;非采集第一图像的相机标记为第二相机;Mark the camera that collected the first image as the first camera; mark the camera that did not collect the first image as the second camera;
S2、对于有效像点u i,依次计算其在第二相机的像平面的极线L iS2. For the effective image point u i , calculate the epipolar line L i of the image plane of the second camera in sequence;
根据有效像点u i的绝对相位信息,找到其在投影仪像平面上对应的匹配点,记为第一匹配点
Figure PCTCN2019098678-appb-000001
According to the absolute phase information of the effective image point u i , find its corresponding matching point on the image plane of the projector, and record it as the first matching point
Figure PCTCN2019098678-appb-000001
计算所述第一匹配点
Figure PCTCN2019098678-appb-000002
在第二相机的像平面的极线Y i
Calculate the first matching point
Figure PCTCN2019098678-appb-000002
Polar line Y i in the image plane of the second camera;
计算两条极线L i、Y i的交点；Compute the intersection point of the two epipolar lines L i and Y i ;
当所述交点的像素坐标在第二相机的像平面上：将所述交点记为第二匹配点，利用所述有效像点u i和第二匹配点计算该点的三维点云(x i,y i,z i)； When the pixel coordinates of the intersection point lie on the image plane of the second camera: mark the intersection point as the second matching point, and use the effective image point u i and the second matching point to compute the three-dimensional point cloud (x i , y i , z i ) of this point;
当所述交点像素坐标不在第二相机的像平面上:认为在第二相机的像平面 没有对应的匹配点,利用所述有效像点u i和第一匹配点
Figure PCTCN2019098678-appb-000003
计算该点的三维点云(x i,y i,z i);
When the pixel coordinates of the intersection point are not on the image plane of the second camera: consider that there is no corresponding matching point in the image plane of the second camera, and use the effective image point u i and the first matching point
Figure PCTCN2019098678-appb-000003
Calculate the three-dimensional point cloud (x i , y i , z i ) of the point;
S3、重复步骤S2,遍历所有有效像点,完成被测物表面三维点云计算。S3. Repeat step S2 to traverse all effective image points and complete the three-dimensional point cloud calculation on the surface of the measured object.
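Step S2 above turns on two operations: intersecting the epipolar lines L i and Y i, and testing whether the intersection's pixel coordinates fall on the second camera's image plane. The sketch below illustrates both in pure Python; the line coefficients and the image size are assumed for illustration and are not taken from the patent's system:

```python
def line_intersection(l1, l2):
    """Intersection of two image lines a*u + b*v + c = 0 given as
    coefficient triples (a, b, c); returns None for (near-)parallel lines."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:           # parallel lines: no usable intersection
        return None
    u = (b1 * c2 - b2 * c1) / det  # Cramer's rule on a1*u + b1*v = -c1, a2*u + b2*v = -c2
    v = (a2 * c1 - a1 * c2) / det
    return u, v

def on_image_plane(pt, width, height):
    """True when the pixel coordinates lie on a width x height image plane."""
    return pt is not None and 0 <= pt[0] < width and 0 <= pt[1] < height

# Toy epipolar lines: u = 100 and v = 50 intersect at pixel (100, 50).
p = line_intersection((1.0, 0.0, -100.0), (0.0, 1.0, -50.0))
```

In the method's terms, when `on_image_plane(p, ...)` is true the intersection becomes the second matching point; otherwise the point is triangulated from the first matching point instead.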
进一步,步骤S2中,利用所述有效像点和第二匹配点计算该点的三维点云(x i,y i,z i),计算方法如下: Further, in step S2, the three-dimensional point cloud (x i , y i , z i ) of the point is calculated using the effective image point and the second matching point, and the calculation method is as follows:
Figure PCTCN2019098678-appb-000004
Figure PCTCN2019098678-appb-000004
Figure PCTCN2019098678-appb-000005
Figure PCTCN2019098678-appb-000005
Figure PCTCN2019098678-appb-000006
Figure PCTCN2019098678-appb-000006
其中,
Figure PCTCN2019098678-appb-000007
among them,
Figure PCTCN2019098678-appb-000007
Figure PCTCN2019098678-appb-000008
为有效像点u i的像素坐标,
Figure PCTCN2019098678-appb-000009
为第二匹配点的像素坐标;
Figure PCTCN2019098678-appb-000008
Is the pixel coordinates of the effective image point u i ,
Figure PCTCN2019098678-appb-000009
Is the pixel coordinate of the second matching point;
第一相机内参数矩阵
Figure PCTCN2019098678-appb-000010
第二相机内参数矩阵
Figure PCTCN2019098678-appb-000011
Figure PCTCN2019098678-appb-000012
Parameter matrix in the first camera
Figure PCTCN2019098678-appb-000010
Second camera internal parameter matrix
Figure PCTCN2019098678-appb-000011
Figure PCTCN2019098678-appb-000012
其中，f xl,f yl为第一相机的焦距，(u 0l,v 0l)为第一相机像平面的主点坐标；f xr,f yr为第二相机的焦距，(u 0r,v 0r)为第二相机像平面的主点坐标； Among them, f xl , f yl are the focal lengths of the first camera and (u 0l , v 0l ) the principal point coordinates of its image plane; f xr , f yr are the focal lengths of the second camera and (u 0r , v 0r ) the principal point coordinates of its image plane;
第一相机坐标系到第二相机坐标系的旋转矩阵
Figure PCTCN2019098678-appb-000013
平移矩阵T c=[t 1 t 2 t 3] T
Rotation matrix from the first camera coordinate system to the second camera coordinate system
Figure PCTCN2019098678-appb-000013
The translation matrix T c =[t 1 t 2 t 3 ] T.
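The closed-form triangulation expressions referenced above are embedded as equation images in this text. As a standalone, hedged illustration, the sketch below shows only the rectified special case (parallel optical axes, identical intrinsics), where the general R c, T c formulation reduces to depth-from-disparity; all numeric values are assumed:

```python
def triangulate_rectified(ul, vl, ur, fx, fy, u0, v0, baseline):
    """Depth from disparity for an idealized rectified stereo pair
    (parallel optical axes, identical intrinsics). This is only the
    special case to which the general R, T formulation reduces."""
    disparity = ul - ur
    if disparity <= 0:             # no depth can be recovered
        return None
    z = fx * baseline / disparity  # depth along the optical axis
    x = (ul - u0) * z / fx
    y = (vl - v0) * z / fy
    return x, y, z

# A point on the left camera's optical axis: fx = 1000 px,
# baseline = 0.1 m, disparity = 50 px  ->  depth 2 m.
pt = triangulate_rectified(640.0, 512.0, 590.0, 1000.0, 1000.0, 640.0, 512.0, 0.1)
```

With f x = 1000 px and a 0.1 m baseline, a 50 px disparity corresponds to a depth of 2 m on the optical axis.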
进一步,步骤S2中,利用所述有效像点u i和第一匹配点计算该点的三维点 云(x i,y i,z i),计算方法如下: Further, in step S2, the effective image point u i and the first matching point are used to calculate the three-dimensional point cloud (x i , y i , z i ) of the point, and the calculation method is as follows:
Figure PCTCN2019098678-appb-000014
Figure PCTCN2019098678-appb-000014
Figure PCTCN2019098678-appb-000015
Figure PCTCN2019098678-appb-000015
Figure PCTCN2019098678-appb-000016
Figure PCTCN2019098678-appb-000016
其中,
Figure PCTCN2019098678-appb-000017
among them,
Figure PCTCN2019098678-appb-000017
Figure PCTCN2019098678-appb-000018
为有效像点u i的像素坐标,
Figure PCTCN2019098678-appb-000019
为所述第一匹配点的像素坐标:
Figure PCTCN2019098678-appb-000018
Is the pixel coordinates of the effective image point u i ,
Figure PCTCN2019098678-appb-000019
Is the pixel coordinates of the first matching point:
投影仪内参数矩阵
Figure PCTCN2019098678-appb-000020
Parameter matrix in the projector
Figure PCTCN2019098678-appb-000020
第一相机坐标系和投影仪坐标系旋转矩阵
Figure PCTCN2019098678-appb-000021
平移矩阵
Figure PCTCN2019098678-appb-000022
Rotation matrix of the first camera coordinate system and the projector coordinate system
Figure PCTCN2019098678-appb-000021
Translation matrix
Figure PCTCN2019098678-appb-000022
进一步，所述极线L i：a i u pr +b i v pr +c i =0；计算方法如下： Further, the epipolar line L i : a i u pr + b i v pr + c i = 0; the calculation method is as follows:
Figure PCTCN2019098678-appb-000023
Figure PCTCN2019098678-appb-000023
其中，F lr为第一相机与第二相机之间的基础矩阵。 Among them, F lr is the fundamental matrix between the first camera and the second camera.
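The coefficient computation above, multiplying the fundamental matrix by the homogeneous pixel coordinates of u i to obtain (a i, b i, c i), can be sketched as follows; the 3x3 matrix is a synthetic example, not a calibrated F lr:

```python
def epipolar_line(F, u, v):
    """Coefficients (a, b, c) of the epipolar line a*u' + b*v' + c = 0
    in the second image for pixel (u, v) in the first image;
    F is a 3x3 fundamental matrix as nested lists (row-major)."""
    x = (u, v, 1.0)                      # homogeneous pixel coordinates
    return tuple(sum(F[r][k] * x[k] for k in range(3)) for r in range(3))

# Skew-symmetric toy F: every point then lies on its own epipolar line,
# which gives a built-in sanity check (values are illustrative only).
F = [[0.0, -1.0, 2.0],
     [1.0,  0.0, -3.0],
     [-2.0, 3.0,  0.0]]
a, b, c = epipolar_line(F, 4.0, 5.0)
```

The skew-symmetric toy matrix is chosen purely because it makes x^T F x = 0 for every x, so the test point satisfies its own line equation.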
进一步，所述极线Y i：a i ′u pr +b i ′v pr +c i ′=0；计算方法如下： Further, the epipolar line Y i : a i ′u pr + b i ′v pr + c i ′ = 0; the calculation method is as follows:
Figure PCTCN2019098678-appb-000024
Figure PCTCN2019098678-appb-000024
其中，F dr为投影仪与第二相机之间的基础矩阵。 Among them, F dr is the fundamental matrix between the projector and the second camera.
进一步,在第二相机的像点中剔除所述第二匹配点和绝对相位值为0的点, 得到剩余点v j,j=1,2,3……m,m为剩余点的个数;根据剩余点v j的绝对相位信息,找到其在投影仪像平面上对应的匹配点,记第三匹配点
Figure PCTCN2019098678-appb-000025
Further, remove the second matching point and the point with an absolute phase value of 0 from the image points of the second camera to obtain the remaining points v j , where j = 1, 2, 3...m, and m is the number of remaining points ; According to the absolute phase information of the remaining points v j , find the corresponding matching point on the image plane of the projector, and record the third matching point
Figure PCTCN2019098678-appb-000025
利用所述有效像点v j和第三匹配点计算三维点云。 A three-dimensional point cloud is calculated using the effective image point v j and the third matching point.
进一步，解算绝对相位通过相移结合多频外差或格雷编码的方法，所述绝对相位为横纵相位。Further, the absolute phase is solved by phase shifting combined with multi-frequency heterodyne, or by Gray coding; the absolute phase comprises the horizontal and vertical phases.
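As a hedged sketch of the techniques named here, the code below recovers a wrapped phase from four phase-shifted fringe intensities (the standard four-step formula) and forms the heterodyne (beat) phase of two wrapped phases, which is the core step of multi-frequency unwrapping; the fringe amplitude and phase are synthetic values, not the patent's projector settings:

```python
import math

def wrapped_phase_4step(I0, I1, I2, I3):
    """Wrapped phase in (-pi, pi] from four intensities sampled as
    I_n = A + B*cos(phi + n*pi/2) -- the standard four-step formula:
    phi = atan2(I3 - I1, I0 - I2)."""
    return math.atan2(I3 - I1, I0 - I2)

def heterodyne(phi_hi, phi_lo):
    """Beat phase of two wrapped phases: the difference wraps at a much
    longer equivalent wavelength, enabling absolute-phase unwrapping."""
    d = phi_hi - phi_lo
    return d + 2 * math.pi if d < 0 else d

# Synthetic fringe sample: A = 0.5, B = 0.4, true phase phi = 1.0 rad.
A, B, phi = 0.5, 0.4, 1.0
samples = [A + B * math.cos(phi + n * math.pi / 2) for n in range(4)]
recovered = wrapped_phase_4step(*samples)
```

The four-step formula works because I3 - I1 = 2B·sin(phi) and I0 - I2 = 2B·cos(phi), so the unknown amplitude B cancels in the arctangent.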
进一步，所述标定方法为张氏标定法或光束平差标定法；Further, the calibration method is Zhang's calibration method or the bundle adjustment calibration method;
张氏标定法将标定板看作是一个平面，标定过程中只用到了标记点x、y两个方向的信息，但是实际上标定板不是理想的平面，标定结果有偏差，重投影误差为0.1像素左右；光束平差标定法可用于解算空间中三维点坐标，将相机参数作为观测方程中待求量，通过迭代不断优化计算，最后获取相机参数，该标定方法重投影误差在0.02像素左右。Zhang's calibration method treats the calibration board as a plane and uses only the x- and y-direction information of the marker points during calibration; since a real calibration board is not an ideal plane, the calibration result is biased, with a reprojection error of about 0.1 pixel. The bundle adjustment calibration method solves for the 3D coordinates of points in space, treats the camera parameters as unknowns in the observation equations, iteratively optimizes them, and finally obtains the camera parameters; its reprojection error is about 0.02 pixel.
进一步，所述光束平差标定法，标定过程如下：Further, the calibration process of the bundle adjustment calibration method is as follows:
利用左、右相机采集标定板图像,所述标定板上包括多个标准圆,分别计算得到标准圆的在左、右相机图像中的圆心坐标;建立左、右相机中图像像点之间的匹配关系;Use the left and right cameras to collect images of the calibration board. The calibration board includes a plurality of standard circles, and the center coordinates of the standard circles in the left and right camera images are calculated respectively; the image points between the left and right cameras are established Matching relationship
投影仪投影多幅横竖条纹图像到标定板上,同步触发左、右相机采集多幅被投影后的标定板图像;The projector projects multiple horizontal and vertical fringe images onto the calibration board, and simultaneously triggers the left and right cameras to collect multiple projected calibration board images;
利用多幅具有横竖条纹左、右相机标定板图像,分别计算各个标准圆圆心在左右相机图像中的绝对相位;Using multiple left and right camera calibration images with horizontal and vertical stripes, calculate the absolute phase of each standard circle center in the left and right camera images;
根据左/右相机标定板图像中圆心的绝对相位,找到其在投影仪像平面上对应的匹配点,建立左、右相机的像平面与投影仪的投影平面之间的点匹配关系;Find the corresponding matching point on the image plane of the projector according to the absolute phase of the circle center in the image of the left/right camera calibration plate, and establish the point matching relationship between the image plane of the left and right camera and the projection plane of the projector;
对标定结果进行优化，所述标定结果包括投影仪和左、右相机的内参数矩阵、畸变系数、外参数矩阵；基于共线方程列出光束平差的误差方程：The calibration results are optimized; they include the intrinsic parameter matrices, distortion coefficients, and extrinsic parameter matrices of the projector and the left and right cameras. The error equation of the bundle adjustment is formulated based on the collinearity equations:
V i=At+Dk-L V i =At+Dk-L
式中，A为所述投影仪和左、右相机的内参数矩阵修正量的偏导矩阵，t为投影仪和左、右相机的内参数矩阵修正量，k为投影仪和左、右相机的外参数矩阵的修正量，D为所述投影仪和左、右相机的外参数矩阵的偏导矩阵，L为左相机、右相机、投影仪像面上二维坐标和标定板三维坐标经过转换到像面的坐标之间的差值；where A is the matrix of partial derivatives with respect to the corrections of the intrinsic parameter matrices of the projector and the left and right cameras, t is the vector of those intrinsic-parameter corrections, k is the vector of corrections of the extrinsic parameter matrices of the projector and the left and right cameras, D is the matrix of partial derivatives with respect to those extrinsic-parameter corrections, and L is the difference between the 2D coordinates on the image planes of the left camera, right camera, and projector and the 3D coordinates of the calibration board projected onto those image planes;
迭代上式，每次迭代的输出结果为下一次迭代的输入，当误差量矢量V i小于预设值q时，停止迭代，输出此时对应的投影仪和左、右相机的内参数矩阵参数、畸变系数、外参数矩阵参数，作为最终的标定结果。Iterate the above equation, taking each iteration's output as the next iteration's input; when the error vector V i falls below the preset value q, stop iterating and output the corresponding intrinsic parameter matrices, distortion coefficients, and extrinsic parameter matrices of the projector and the left and right cameras as the final calibration result.
进一步,q取值0.05~0.2。Further, q takes a value of 0.05 to 0.2.
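The iteration prescribed above, solve the error equation V i = At + Dk - L for the corrections, apply them, and stop once the error magnitude drops below q, follows the Gauss-Newton pattern. Below is a toy sketch on an assumed two-parameter linear problem rather than the patent's full projector-and-camera model:

```python
def solve_2x2(M, r):
    """Solve a 2x2 linear system M x = r by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det)

def bundle_adjust(residual_fn, jacobian_fn, params, q=0.1, max_iter=50):
    """Gauss-Newton loop in the spirit of V = A*t + D*k - L: solve the
    normal equations for the corrections, apply them, and stop once the
    residual norm drops below the threshold q."""
    for _ in range(max_iter):
        L = residual_fn(params)
        if sum(v * v for v in L) ** 0.5 < q:
            break
        J = jacobian_fn(params)
        # Normal equations: (J^T J) x = J^T L
        JtJ = [[sum(J[m][r] * J[m][c] for m in range(len(J))) for c in range(2)]
               for r in range(2)]
        JtL = [sum(J[m][r] * L[m] for m in range(len(J))) for r in range(2)]
        dt, dk = solve_2x2(JtJ, JtL)
        params = (params[0] + dt, params[1] + dk)
    return params

# Toy "calibration": observations y = 2*x + 1 at x = 0, 1, 2;
# unknowns (t, k) = (slope, intercept), started from (0, 0).
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 5.0]
res = lambda p: [y - (p[0] * x + p[1]) for x, y in zip(xs, ys)]
jac = lambda p: [[x, 1.0] for x in xs]
fit = bundle_adjust(res, jac, (0.0, 0.0), q=1e-9)
```

Because the toy model is linear, the normal-equation step converges in a single iteration; the real calibration problem is nonlinear and needs several.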
在三维扫描测量系统中，本发明方法通过获得单相机与投影仪之间的转换关系，解决了当其中一部相机视场受阻或拍摄过曝时，双目系统无法完成点云解算的问题，保障了被测物三维点云数据的完整性，同时，增加了除去双相机公共视场之外，单相机和投影仪公共视场内三维点云解算，相比于纯双目系统点云解算本方法获取的点云数量更多，能够一次性获得更多被测物的表面信息，具有实用价值，采用光束平差标定法进行标定，标定结果更加准确。In the three-dimensional scanning measurement system, the method of the present invention, by obtaining the transformation relationship between a single camera and the projector, solves the problem that the binocular system cannot complete the point cloud computation when one camera's field of view is blocked or its image is overexposed, ensuring the completeness of the measured object's 3D point cloud data. At the same time, beyond the common field of view of the two cameras, 3D point cloud solving within the common field of view of a single camera and the projector is added; compared with a purely binocular solution, the method obtains more points and acquires more surface information of the measured object at once, which has practical value. The bundle adjustment calibration method is used for calibration, making the calibration result more accurate.
附图说明Description of the drawings
图1为测试实验中传统方法与本发明方法获取点数量的对比图。Figure 1 is a comparison diagram of the number of points obtained by the traditional method and the method of the present invention in the test experiment.
具体实施方式detailed description
以下结合具体实施方式对本发明的技术方案进行详细描述。The technical solution of the present invention will be described in detail below in conjunction with specific embodiments.
一种基于单双目混合测量的三维点云计算方法,包括以下步骤:A three-dimensional point cloud calculation method based on single and binocular hybrid measurement, including the following steps:
S1、利用标定方法对三维扫描测量系统中的左、右相机和投影仪进行标定,得到:S1. Use the calibration method to calibrate the left and right cameras and projectors in the 3D scanning measurement system to obtain:
左相机内参数矩阵
Figure PCTCN2019098678-appb-000026
Left camera internal parameter matrix
Figure PCTCN2019098678-appb-000026
右相机内参数矩阵
Figure PCTCN2019098678-appb-000027
Parameter matrix in the right camera
Figure PCTCN2019098678-appb-000027
投影仪内参数
Figure PCTCN2019098678-appb-000028
Parameters in the projector
Figure PCTCN2019098678-appb-000028
其中，f xl,f yl为左相机的焦距，(u 0l,v 0l)为主点坐标；f xr,f yr为右相机的焦距，(u 0r,v 0r)为主点坐标； Among them, f xl , f yl are the focal lengths of the left camera and (u 0l , v 0l ) its principal point coordinates; f xr , f yr are the focal lengths of the right camera and (u 0r , v 0r ) its principal point coordinates;
左相机坐标系到右相机坐标系的旋转矩阵
Figure PCTCN2019098678-appb-000029
平移矩阵T c=[t 1 t 2 t 3] T
Rotation matrix from left camera coordinate system to right camera coordinate system
Figure PCTCN2019098678-appb-000029
Translation matrix T c =[t 1 t 2 t 3 ] T ;
右相机坐标系到左相机坐标系的旋转矩阵
Figure PCTCN2019098678-appb-000030
平移矩阵
Figure PCTCN2019098678-appb-000031
Rotation matrix from right camera coordinate system to left camera coordinate system
Figure PCTCN2019098678-appb-000030
Translation matrix
Figure PCTCN2019098678-appb-000031
左相机坐标系和投影仪坐标系旋转矩阵
Figure PCTCN2019098678-appb-000032
平移矩阵
Figure PCTCN2019098678-appb-000033
Rotation matrix of left camera coordinate system and projector coordinate system
Figure PCTCN2019098678-appb-000032
Translation matrix
Figure PCTCN2019098678-appb-000033
右相机坐标系和投影仪坐标系旋转矩阵
Figure PCTCN2019098678-appb-000034
平移矩阵
Figure PCTCN2019098678-appb-000035
Rotation matrix of right camera coordinate system and projector coordinate system
Figure PCTCN2019098678-appb-000034
Translation matrix
Figure PCTCN2019098678-appb-000035
计算投影仪与左相机之间的基础矩阵F dl、投影仪与右相机之间的基础矩阵F dr、左相机与右相机之间的基础矩阵F lrCalculate the fundamental matrix F dl between the projector and the left camera, the fundamental matrix F dr between the projector and the right camera, and the fundamental matrix F lr between the left camera and the right camera;
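The fundamental matrices named here can be assembled from the calibration results via F = K2^(-T) [T]x R K1^(-1), with E = [T]x R the essential matrix (assuming the convention X2 = R·X1 + T). A self-contained sketch with assumed toy parameters (identical pinhole intrinsics, identity rotation, pure x-translation), not the system's calibrated values:

```python
def mat_mul(A, B):
    """3x3 matrix product of nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def skew(t):
    """Cross-product matrix [t]x so that [t]x v = t x v."""
    t1, t2, t3 = t
    return [[0.0, -t3, t2], [t3, 0.0, -t1], [-t2, t1, 0.0]]

def intrinsic_inverse(fx, fy, u0, v0):
    """Closed-form inverse of K = [[fx,0,u0],[0,fy,v0],[0,0,1]]."""
    return [[1 / fx, 0.0, -u0 / fx], [0.0, 1 / fy, -v0 / fy], [0.0, 0.0, 1.0]]

def fundamental_matrix(K1_inv, K2_inv, R, T):
    """F = K2^(-T) [T]x R K1^(-1) between two calibrated views
    related by rotation R and translation T (X2 = R X1 + T)."""
    E = mat_mul(skew(T), R)  # essential matrix
    return mat_mul(transpose(K2_inv), mat_mul(E, K1_inv))

# Toy rig: identical intrinsics, identity rotation, 0.1 m x-translation.
Kinv = intrinsic_inverse(1000.0, 1000.0, 640.0, 512.0)
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
F = fundamental_matrix(Kinv, Kinv, R, (0.1, 0.0, 0.0))
```

For a true correspondence the epipolar constraint x2^T F x1 = 0 holds in homogeneous pixel coordinates; e.g. a 3D point at depth 2 m on the left optical axis projects to (640, 512) and (690, 512) in this toy rig.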
The projector projects sinusoidal fringes onto the surface of the measured object, and the left and right cameras capture the modulated fringe images, yielding a left camera image and a right camera image. The absolute phases of the left and right camera images are solved by phase shifting combined with the multi-frequency heterodyne method. The left camera image is selected as the first image; pixels on the first image whose absolute phase value is 0 are marked as invalid image points, and the remaining pixels are marked as valid image points
Figure PCTCN2019098678-appb-000036
i=1, 2, 3...n, where n is the number of valid image points;
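The multi-frequency heterodyne idea can be illustrated with a simplified two-frequency sketch; the function name, the period values, and the assumption that the beat period spans the whole field are illustrative, not details from the patent:

```python
import math

def wrap(phase):
    # wrap a phase into [0, 2*pi)
    return phase % (2.0 * math.pi)

def heterodyne_unwrap(phi1, phi2, T1, T2):
    # Two-frequency heterodyne: fringe periods T1 < T2 (in pixels) beat at
    # T12 = T1*T2/(T2 - T1); if T12 spans the whole field, the beat phase is
    # already absolute and selects the fringe order k of the fine phase phi1.
    phi12 = wrap(phi1 - phi2)               # beat (synthetic) phase
    T12 = T1 * T2 / (T2 - T1)               # synthetic period
    k = round((phi12 * T12 / T1 - phi1) / (2.0 * math.pi))  # fringe order
    return phi1 + 2.0 * math.pi * k         # absolute phase of the fine fringe
```

In practice more than two frequencies are cascaded the same way, which is why the method is robust over large depth ranges.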
As another embodiment of the present invention, the absolute phases of the left and right camera images are solved by the Gray code method;
S2. For each valid image point
Figure PCTCN2019098678-appb-000037
compute in turn its epipolar line L i on the image plane of the right camera;
Epipolar line L i : a i u pr +b i v pr +c i =0;
Figure PCTCN2019098678-appb-000038
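The coefficients (a i , b i , c i ) in the figure are obtained from the fundamental matrix; a minimal sketch of the standard relation l = F·x, assuming F lr maps first-image points to second-image lines:

```python
def epipolar_line(F, u):
    # l = F * [u, v, 1]^T yields the coefficients (a, b, c) of the
    # epipolar line a*u_pr + b*v_pr + c = 0 in the other view
    x = [u[0], u[1], 1.0]
    return tuple(sum(F[i][j] * x[j] for j in range(3)) for i in range(3))
```

For a pure horizontal baseline the line degenerates to a horizontal scanline, which is a convenient sanity check.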
According to the absolute phase information of the valid image point u i , find its corresponding matching point on the projector image plane, recorded as the first matching point
Figure PCTCN2019098678-appb-000039
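How a projector-plane match is recovered from absolute phase can be sketched as follows. The linear phase-to-pixel mapping, the `period_px` parameter, and the column/row assignment are illustrative assumptions: the patent only states that the match is found from the absolute phase information (horizontal and vertical fringes together fix both projector coordinates):

```python
import math

def projector_match(phase_h, phase_v, period_px):
    # Map a camera pixel's absolute phases (from horizontal- and
    # vertical-fringe patterns) to projector pixel coordinates, assuming one
    # fringe period covers `period_px` projector pixels and that an absolute
    # phase of 0 marks an invalid point.
    if phase_h == 0.0 or phase_v == 0.0:
        return None
    u_p = phase_v / (2.0 * math.pi) * period_px  # vertical fringes encode columns
    v_p = phase_h / (2.0 * math.pi) * period_px  # horizontal fringes encode rows
    return (u_p, v_p)
```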
Compute the epipolar line Y i of the first matching point
Figure PCTCN2019098678-appb-000040
on the image plane of the right camera;
Epipolar line Y i : a i ′u pr +b i ′v pr +c i ′=0
Figure PCTCN2019098678-appb-000041
Compute the intersection of the two epipolar lines L i and Y i ;
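The intersection of the two lines is conveniently computed as their cross product in homogeneous coordinates; a minimal sketch:

```python
def line_intersection(l1, l2, eps=1e-12):
    # Lines are (a, b, c) with a*u + b*v + c = 0; the intersection point is
    # the cross product of the two lines in homogeneous coordinates.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    w = a1 * b2 - b1 * a2
    if abs(w) < eps:
        return None                  # parallel lines: no finite intersection
    u = b1 * c2 - c1 * b2
    v = c1 * a2 - a1 * c2
    return (u / w, v / w)
```

The returned pixel coordinates can then be tested against the image bounds to decide between the binocular and the camera-projector branch that follows.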
When the pixel coordinates of the intersection lie on the image plane of the right camera, the intersection is recorded as the second matching point
Figure PCTCN2019098678-appb-000042
and the valid image point u i and the second matching point are used to compute the three-dimensional point cloud (x i , y i , z i ) of the point;
Figure PCTCN2019098678-appb-000043
Figure PCTCN2019098678-appb-000044
Figure PCTCN2019098678-appb-000045
where
Figure PCTCN2019098678-appb-000046
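The triangulation formulas above are given as images in the source; a standard linear least-squares two-view triangulation consistent with the described setup (first camera as the world frame, second view satisfying X2 = R·X + T, pinhole intrinsics without skew) can be sketched as:

```python
def mat3_inv(A):
    # 3x3 inverse via the adjugate
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

def triangulate(u1, u2, K1, K2, R, T):
    # Each view contributes two linear equations in X = (x, y, z); the 4x3
    # system is solved in the least-squares sense via the normal equations.
    def normalize(u, K):
        return ((u[0] - K[0][2]) / K[0][0], (u[1] - K[1][2]) / K[1][1])
    x1, y1 = normalize(u1, K1)
    x2, y2 = normalize(u2, K2)
    A = [[1.0, 0.0, -x1],
         [0.0, 1.0, -y1],
         [R[0][j] - x2 * R[2][j] for j in range(3)],
         [R[1][j] - y2 * R[2][j] for j in range(3)]]
    b = [0.0, 0.0, x2 * T[2] - T[0], y2 * T[2] - T[1]]
    AtA = [[sum(A[r][i] * A[r][j] for r in range(4)) for j in range(3)] for i in range(3)]
    Atb = [sum(A[r][i] * b[r] for r in range(4)) for i in range(3)]
    inv = mat3_inv(AtA)
    return tuple(sum(inv[i][j] * Atb[j] for j in range(3)) for i in range(3))
```

The same routine covers both branches of step S2: with the right camera's (K, R, T) for the binocular case, or with the projector's calibrated intrinsics and pose for the camera-projector case.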
When the pixel coordinates of the intersection do not lie on the image plane of the right camera, it is considered that there is no corresponding matching point on the image plane of the right camera, and the valid image point
Figure PCTCN2019098678-appb-000047
and the first matching point
Figure PCTCN2019098678-appb-000048
are used to compute the three-dimensional point cloud (x i , y i , z i ) of the point;
Figure PCTCN2019098678-appb-000049
Figure PCTCN2019098678-appb-000050
Figure PCTCN2019098678-appb-000051
where
Figure PCTCN2019098678-appb-000052
S3. Repeat step S2 for all valid image points to complete the computation of the three-dimensional point cloud of the measured object surface.
Further, the second matching points and the points whose absolute phase value is 0 are removed from the image points of the right camera, leaving the remaining points
Figure PCTCN2019098678-appb-000053
j = 1, 2, 3...m, where m is the number of remaining points. According to the absolute phase information of each remaining point v j , its corresponding matching point on the projector image plane is found and recorded as the third matching point
Figure PCTCN2019098678-appb-000054
The remaining point
Figure PCTCN2019098678-appb-000055
and the third matching point
Figure PCTCN2019098678-appb-000056
are used to compute the three-dimensional point cloud (x j , y j , z j );
Figure PCTCN2019098678-appb-000057
Figure PCTCN2019098678-appb-000058
Figure PCTCN2019098678-appb-000059
where
Figure PCTCN2019098678-appb-000060
As one embodiment of the present invention, the Zhang calibration method is used to calibrate the left and right cameras and the projector;
As another embodiment of the present invention, the left and right cameras and the projector are calibrated by the bundle adjustment calibration method, as follows:
The left and right cameras capture images of a calibration board containing multiple standard circles; the center coordinates of each standard circle in the left and right camera images are computed, and the matching relationship between image points of the left and right cameras is established;
The projector projects multiple horizontal- and vertical-fringe images onto the calibration board, synchronously triggering the left and right cameras to capture multiple images of the projected calibration board;
Using the multiple left and right camera images of the calibration board with horizontal and vertical fringes, the absolute phase of each standard circle center in the left and right camera images is computed;
According to the absolute phase of each circle center in the left/right camera calibration board images, its corresponding matching point on the projector image plane is found, establishing the point matching relationship between the image planes of the left and right cameras and the projection plane of the projector;
The calibration results, which include the intrinsic parameter matrices, distortion coefficients, and extrinsic parameter matrices of the projector and the left and right cameras, are then optimized. Based on the collinearity equations, the bundle adjustment error equation is:
V i =At+Dk-L
where A is the partial derivative matrix with respect to the intrinsic parameter corrections of the projector and the left and right cameras, t is the intrinsic parameter correction vector of the projector and the left and right cameras, k is the extrinsic parameter correction vector of the projector and the left and right cameras, D is the partial derivative matrix with respect to the extrinsic parameter corrections of the projector and the left and right cameras, and L is the difference between the two-dimensional coordinates on the image planes of the left camera, right camera, and projector and the three-dimensional coordinates of the calibration board reprojected onto those image planes;
The above equation is iterated, with the output of each iteration serving as the input of the next. When the error vector V i falls below the preset value 0.1, the iteration stops, and the corresponding intrinsic parameter matrices, distortion coefficients, and extrinsic parameter matrices of the projector and the left and right cameras are output as the final calibration result.
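The iteration structure of this refinement can be sketched as a generic Gauss-Newton loop with a numeric Jacobian, stopping when the update falls below a threshold playing the role of the preset value; this sketch uses a generic residual function rather than the patent's specific A, D, and L matrices:

```python
def solve(Ain, bin):
    # Gaussian elimination with partial pivoting for a small n x n system
    n = len(bin)
    A = [row[:] + [bin[i]] for i, row in enumerate(Ain)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def gauss_newton(residual, params, tol=1e-8, max_iter=100, h=1e-6):
    # Minimize sum(residual(p)^2): at each step solve J^T J dp = -J^T r,
    # feed p + dp into the next iteration, stop when the update is small.
    p = list(params)
    for _ in range(max_iter):
        r = residual(p)
        J = [[] for _ in r]
        for j in range(len(p)):          # forward-difference Jacobian
            q = p[:]
            q[j] += h
            rq = residual(q)
            for i in range(len(r)):
                J[i].append((rq[i] - r[i]) / h)
        JtJ = [[sum(J[k][i] * J[k][j] for k in range(len(r))) for j in range(len(p))] for i in range(len(p))]
        Jtr = [sum(J[k][i] * r[k] for k in range(len(r))) for i in range(len(p))]
        dp = solve(JtJ, [-g for g in Jtr])
        p = [pi + di for pi, di in zip(p, dp)]
        if max(abs(d) for d in dp) < tol:
            break
    return p
```

In the patent's setting the parameter vector would stack the intrinsic and extrinsic corrections (t, k) of the projector and both cameras, and the residual would be the reprojection error of the circle centers.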
Test experiment:
The same standard sphere with a diameter of 50.797 mm was measured multiple times using both the prior-art pure binocular solution and the method of the present invention, obtaining the point cloud coordinates of the sphere surface. Figure 1 compares the number of points obtained by the two methods; it can be seen that the method of the present invention obtains more points than the traditional method, capturing more point cloud information on the surface of the standard sphere;
The foregoing description of specific exemplary embodiments of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in light of the above teachings. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and their practical application, thereby enabling others skilled in the art to make and use various exemplary embodiments of the invention, as well as various alternatives and modifications thereof. The scope of the invention is intended to be defined by the appended claims and their equivalents.

Claims (10)

  1. A three-dimensional point cloud computing method based on monocular-binocular hybrid measurement, characterized by comprising the following steps:
    S1. Calibrate the left and right cameras and the projector of the 3D scanning measurement system using a calibration method;
    After the calibration is completed, the projector projects sinusoidal fringes onto the surface of the measured object; the left and right cameras capture the modulated fringe images, yielding a left camera image and a right camera image; the absolute phases are solved and one image is selected as the first image; pixels on the first image whose absolute phase value is 0 are marked as invalid image points, and the remaining pixels are marked as valid image points u i , i=1, 2, 3...n, where n is the number of valid image points;
    The camera that captured the first image is designated the first camera; the other camera is designated the second camera;
    S2. For each valid image point u i , compute in turn its epipolar line L i on the image plane of the second camera;
    According to the absolute phase information of the valid image point u i , find its corresponding matching point on the projector image plane, recorded as the first matching point
    Figure PCTCN2019098678-appb-100001
    Compute the epipolar line Y i of the first matching point
    Figure PCTCN2019098678-appb-100002
    on the image plane of the second camera;
    Compute the intersection of the two epipolar lines L i and Y i ;
    When the pixel coordinates of the intersection lie on the image plane of the second camera: record the intersection as the second matching point, and use the valid image point u i and the second matching point to compute the three-dimensional point cloud (x i , y i , z i ) of the point;
    When the pixel coordinates of the intersection do not lie on the image plane of the second camera: it is considered that there is no corresponding matching point on the image plane of the second camera, and the valid image point u i and the first matching point
    Figure PCTCN2019098678-appb-100003
    are used to compute the three-dimensional point cloud (x i , y i , z i ) of the point;
    S3. Repeat step S2 for all valid image points to complete the computation of the three-dimensional point cloud of the measured object surface.
  2. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein in step S2 the three-dimensional point cloud (x i , y i , z i ) of the point is computed from the valid image point and the second matching point as follows:
    Figure PCTCN2019098678-appb-100004
    Figure PCTCN2019098678-appb-100005
    Figure PCTCN2019098678-appb-100006
    where
    Figure PCTCN2019098678-appb-100007
    Figure PCTCN2019098678-appb-100008
    are the pixel coordinates of the valid image point u i , and
    Figure PCTCN2019098678-appb-100009
    are the pixel coordinates of the second matching point;
    First camera intrinsic parameter matrix
    Figure PCTCN2019098678-appb-100010
    Second camera intrinsic parameter matrix
    Figure PCTCN2019098678-appb-100011
    Figure PCTCN2019098678-appb-100012
    where f xl , f yl are the focal lengths of the first camera and (u 0l , v 0l ) the principal point coordinates of the first camera image plane; f xr , f yr are the focal lengths of the second camera and (u 0r , v 0r ) the principal point coordinates of the second camera image plane;
    Rotation matrix from the first camera coordinate system to the second camera coordinate system
    Figure PCTCN2019098678-appb-100013
    translation matrix T c =[t 1 t 2 t 3 ] T .
  3. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein in step S2 the three-dimensional point cloud (x i , y i , z i ) of the point is computed from the valid image point u i and the first matching point as follows:
    Figure PCTCN2019098678-appb-100014
    Figure PCTCN2019098678-appb-100015
    Figure PCTCN2019098678-appb-100016
    where
    Figure PCTCN2019098678-appb-100017
    Figure PCTCN2019098678-appb-100018
    are the pixel coordinates of the valid image point u i , and
    Figure PCTCN2019098678-appb-100019
    are the pixel coordinates of the first matching point;
    Projector intrinsic parameter matrix
    Figure PCTCN2019098678-appb-100020
    Rotation matrix between the first camera coordinate system and the projector coordinate system
    Figure PCTCN2019098678-appb-100021
    translation matrix
    Figure PCTCN2019098678-appb-100022
  4. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein the epipolar line L i : a i u pr +b i v pr +c i =0 is computed as follows:
    Figure PCTCN2019098678-appb-100023
    where F lr is the fundamental matrix between the first camera and the second camera.
  5. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein the epipolar line Y i : a i ′u pr +b i ′v pr +c i ′=0 is computed as follows:
    Figure PCTCN2019098678-appb-100024
    where F dr is the fundamental matrix between the projector and the second camera.
  6. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein the second matching points and the points whose absolute phase value is 0 are removed from the image points of the second camera, leaving the remaining points v j , j = 1, 2, 3...m, where m is the number of remaining points; according to the absolute phase information of each remaining point v j , its corresponding matching point on the projector image plane is found and recorded as the third matching point
    Figure PCTCN2019098678-appb-100025
    The three-dimensional point cloud is computed using the remaining point v j and the third matching point.
  7. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein the absolute phase is solved by four-step phase shifting combined with the multi-frequency heterodyne method, or by phase shifting combined with Gray codes; the absolute phase comprises horizontal and vertical phases.
  8. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein the calibration method is the Zhang calibration method.
  9. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 1, wherein the calibration method is the bundle adjustment calibration method, performed as follows:
    The left and right cameras capture images of a calibration board containing multiple standard circles; the center coordinates of each standard circle in the left and right camera images are computed, and the matching relationship between image points of the left and right cameras is established;
    The projector projects multiple horizontal- and vertical-fringe images onto the calibration board, synchronously triggering the left and right cameras to capture multiple images of the projected calibration board;
    Using the multiple left and right camera images of the calibration board with horizontal and vertical fringes, the absolute phase of each standard circle center in the left and right camera images is computed;
    According to the absolute phase of each circle center in the left/right camera calibration board images, its corresponding matching point on the projector image plane is found, establishing the point matching relationship between the image planes of the left and right cameras and the projection plane of the projector;
    The calibration results, which include the intrinsic parameter matrices, distortion coefficients, and extrinsic parameter matrices of the projector and the left and right cameras, are optimized; based on the collinearity equations, the bundle adjustment error equation is:
    V i =At+Dk-L
    where A is the partial derivative matrix with respect to the intrinsic parameter corrections of the projector and the left and right cameras, t is the intrinsic parameter correction vector of the projector and the left and right cameras, k is the extrinsic parameter correction vector of the projector and the left and right cameras, D is the partial derivative matrix with respect to the extrinsic parameter corrections of the projector and the left and right cameras, and L is the difference between the two-dimensional coordinates on the image planes of the left camera, right camera, and projector and the three-dimensional coordinates of the calibration board reprojected onto those image planes;
    The above equation is iterated, with the output of each iteration serving as the input of the next; when the error vector V i falls below the preset value q, the iteration stops, and the corresponding intrinsic parameter matrices, distortion coefficients, and extrinsic parameter matrices of the projector and the left and right cameras are output as the final calibration result.
  10. The three-dimensional point cloud computing method based on monocular-binocular hybrid measurement according to claim 9, wherein q takes a value of 0.05 to 0.2.
PCT/CN2019/098678 2019-03-29 2019-07-31 Single- and dual-camera hybrid measurement-based three-dimensional point cloud computing method WO2020199439A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910247867.X 2019-03-29
CN201910247867.XA CN110044301B (en) 2019-03-29 2019-03-29 Three-dimensional point cloud computing method based on monocular and binocular mixed measurement

Publications (1)

Publication Number Publication Date
WO2020199439A1 true WO2020199439A1 (en) 2020-10-08

Family

ID=67275555

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098678 WO2020199439A1 (en) 2019-03-29 2019-07-31 Single- and dual-camera hybrid measurement-based three-dimensional point cloud computing method

Country Status (2)

Country Link
CN (1) CN110044301B (en)
WO (1) WO2020199439A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110044301B (en) * 2019-03-29 2020-05-05 易思维(天津)科技有限公司 Three-dimensional point cloud computing method based on monocular and binocular mixed measurement
CN110880185B (en) * 2019-11-08 2022-08-12 南京理工大学 High-precision dynamic real-time 360-degree all-dimensional point cloud acquisition method based on fringe projection
CN111023994B (en) * 2020-01-11 2023-06-23 武汉玄景科技有限公司 Grating three-dimensional scanning method and system based on multiple measurement
CN112002016B (en) * 2020-08-28 2024-01-26 中国科学院自动化研究所 Continuous curved surface reconstruction method, system and device based on binocular vision
CN115063468B (en) * 2022-06-17 2023-06-27 梅卡曼德(北京)机器人科技有限公司 Binocular stereo matching method, computer storage medium and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004233138A (en) * 2003-01-29 2004-08-19 Kurabo Ind Ltd Method and apparatus for determination of corresponding point with measurement point in photogrammetry
CN102032878A (en) * 2009-09-24 2011-04-27 甄海涛 Accurate on-line measurement method based on binocular stereo vision measurement system
CN102654391A (en) * 2012-01-17 2012-09-05 深圳大学 Stripe projection three-dimensional measurement system based on bundle adjustment principle and calibration method thereof
CN102721376A (en) * 2012-06-20 2012-10-10 北京航空航天大学 Calibrating method of large-field three-dimensional visual sensor
CN104835158A (en) * 2015-05-05 2015-08-12 中国人民解放军国防科学技术大学 3D point cloud acquisition method based on Gray code structure light and polar constraints
CN110044301A (en) * 2019-03-29 2019-07-23 易思维(天津)科技有限公司 Three-dimensional point cloud computing method based on monocular and binocular mixed measurement

Also Published As

Publication number Publication date
CN110044301A (en) 2019-07-23
CN110044301B (en) 2020-05-05

Similar Documents

Publication Publication Date Title
WO2020199439A1 (en) Single- and dual-camera hybrid measurement-based three-dimensional point cloud computing method
CN111750806B (en) Multi-view three-dimensional measurement system and method
CN110514143B (en) Stripe projection system calibration method based on reflector
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN110378969B (en) Convergent binocular camera calibration method based on 3D geometric constraint
CN105716542B (en) A kind of three-dimensional data joining method based on flexible characteristic point
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN110793464B (en) Large-field-of-view fringe projection vision three-dimensional measurement system and method
CN110375648A (en) The spatial point three-dimensional coordinate measurement method that the single camera of gridiron pattern target auxiliary is realized
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
WO2018201677A1 (en) Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system
CN111091599B (en) Multi-camera-projector system calibration method based on sphere calibration object
CN107610183B (en) Calibration method of fringe projection phase height conversion mapping model
CN109307483A (en) A kind of phase developing method based on structured-light system geometrical constraint
CN114111637A (en) Stripe structured light three-dimensional reconstruction method based on virtual dual-purpose
CN110349257B (en) Phase pseudo mapping-based binocular measurement missing point cloud interpolation method
CN112229323B (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
CN113724337B (en) Camera dynamic external parameter calibration method and device without depending on tripod head angle
CN112489109B (en) Three-dimensional imaging system method and device and three-dimensional imaging system
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
CN112164119A (en) Calibration method of system with multiple cameras placed in surrounding mode and suitable for narrow space
CN112465914B (en) Camera array calibration method based on non-common view field
CN113865514B (en) Calibration method of line structured light three-dimensional measurement system
CN114037768A (en) Method and device for joint calibration of multiple sets of tracking scanners
CN114663520A (en) Double-camera combined calibration method and system for ultra-large range vision measurement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19922364

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19922364

Country of ref document: EP

Kind code of ref document: A1