WO2020133888A1 - A scale-invariant depth map mapping method for three-dimensional images - Google Patents

A scale-invariant depth map mapping method for three-dimensional images

Info

Publication number
WO2020133888A1
WO2020133888A1 · PCT/CN2019/087244 · CN2019087244W
Authority
WO
WIPO (PCT)
Prior art keywords
image
point cloud
scale
depth map
dimensional
Prior art date
Application number
PCT/CN2019/087244
Other languages
English (en)
French (fr)
Inventor
严律
王杰高
王明松
Original Assignee
南京埃克里得视觉技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京埃克里得视觉技术有限公司
Publication of WO2020133888A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery

Definitions

  • The invention relates to a depth map mapping method for three-dimensional images, and in particular to a scale-invariant depth map mapping method for three-dimensional images.
  • In industrial automation, machine vision improves the flexibility and degree of automation of production; in hazardous working environments, or where human vision cannot meet the requirements, machine vision is often used in place of human vision.
  • In high-volume industrial production, inspecting product quality with human vision is inefficient and inaccurate; machine-vision inspection can greatly improve production efficiency and automation.
  • The software, algorithms and applications of 2D vision are very mature, while the common approach in 3D vision is to acquire a point cloud of the object: the image of a 3D object is a set of points (see Figure 1).
  • When objects sit at different heights, the depth image of the three-dimensional image changes (see Figure 2), and processing algorithms for this situation are largely lacking.
  • The present invention proposes a scale-invariant depth map mapping method for three-dimensional images, i.e. a scale-invariant depth map mapping method for three-dimensional point cloud sets.
  • The method obtains object depth maps of the same scale at different depths, which can be processed with existing image algorithms, and the three-dimensional spatial position of the object can be computed from the depth map.
  • The scale-invariant depth map mapping method of the present invention has the following steps:
  • Step 1. Acquire a three-dimensional point cloud image of the object with a three-dimensional camera, obtaining point clouds of the same object at different heights.
  • Step 2. Set the parameters needed for the mapping: the upper-left corner of the image (x1, y1), the lower-right corner of the image (x2, y2), the pixel width W, Dmax (the depth value corresponding to gray level 255) and Dmin (the depth value corresponding to gray level 0).
  • x1 is the X-axis coordinate of the upper-left corner of the image in the point cloud coordinate system; y1 is its Y-axis coordinate.
  • x2 is the X-axis coordinate of the lower-right corner of the image in the point cloud coordinate system; y2 is its Y-axis coordinate.
  • Step 3. Express the image depth value as the gray value G = (Dz - Dmin)/(Dmax - Dmin) × 255, where Dz is the Z coordinate of the current point.
  • Step 4. Compute the number of rows R = (x2 - x1)/W and the number of columns C = (y2 - y1)/W of the image, and generate an image of gray level 0.
  • Step 5. Traverse the points (Dx, Dy, Dz) in the point cloud and compute the pixel position r1 = (Dx - x1)/W, c1 = (Dy - y1)/W, where Dx, Dy and Dz are the X, Y and Z coordinates of the point, r1 is the row of the corresponding pixel and c1 is its column.
  • Step 6. Assign the G value of Step 3 to (r1, c1) of image M for each point until all points have been traversed; M is the scale-invariant depth map.
  • The method of the present invention is a scale-invariant two-dimensional depth map mapping method for three-dimensional point cloud sets.
  • It obtains object depth maps of the same scale at different depths, which can be processed with existing image algorithms, and the three-dimensional spatial position of the object can be computed from the depth map.
  • Figure 1 is the point cloud of a three-dimensional image; panel (a) is the overall point cloud and panel (b) a local point cloud.
  • Figure 2 shows the scale change of the same object at different heights in a conventional depth map; panel (a) is the depth image mapped from the point cloud of a workpiece captured from far away, and panel (b) the depth image of a point cloud captured close up.
  • Figure 3 is a schematic diagram of the parameters required for the mapping.
  • x1 is the X-axis coordinate of the upper-left corner of the image in the point cloud coordinate system; y1 is its Y-axis coordinate.
  • x2 is the X-axis coordinate of the lower-right corner of the image in the point cloud coordinate system; y2 is its Y-axis coordinate.
  • w is the pixel width.
  • In Figure 4, panel (a) is the scale-invariant depth image mapped from the point cloud of the workpiece captured from far away, and panel (b) the scale-invariant depth image of the point cloud captured close up.
  • In the embodiment, the units are the same as the point cloud units, here millimeters; an image of gray level 0 is generated from the row and column counts, and the row and column coordinates of the pixel corresponding to each point are computed as in Step 5.
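The mapping summarized in the steps above can be sketched as a short Python function. This is a minimal illustration: the function name, the NumPy dependency and the bounds/clipping checks are additions of this sketch, not part of the patent text.

```python
import numpy as np

def scale_invariant_depth_map(points, x1, y1, x2, y2, W, dmin, dmax):
    """Map a 3D point cloud (iterable of (X, Y, Z), same units as the
    corner parameters) to a scale-invariant 8-bit depth image.

    A fixed window (x1, y1)-(x2, y2) and pixel width W define the image
    grid, so the object's scale in the image does not depend on its
    distance to the camera."""
    rows = int(round((x2 - x1) / W))   # Step 4: R = (x2 - x1) / W
    cols = int(round((y2 - y1) / W))   #         C = (y2 - y1) / W
    image = np.zeros((rows, cols), dtype=np.uint8)  # gray level 0 everywhere

    for dx, dy, dz in points:          # Step 5: traverse the point cloud
        r = int(round((dx - x1) / W))  # r1 = (Dx - x1) / W
        c = int(round((dy - y1) / W))  # c1 = (Dy - y1) / W
        if 0 <= r < rows and 0 <= c < cols:
            # Step 3: gray value G = (Dz - Dmin) / (Dmax - Dmin) * 255
            g = (dz - dmin) / (dmax - dmin) * 255.0
            image[r, c] = int(np.clip(g, 0, 255))  # Step 6: write G into M
        # points outside the window are skipped in this sketch
    return image
```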

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a scale-invariant depth map mapping method for three-dimensional images: acquire the three-dimensional point cloud image of an object, set the parameters required for the mapping, compute the number of rows R and columns C of the image and generate an image of gray level 0, then traverse the points of the point cloud and compute the gray value of the pixel at each corresponding position, yielding the desired scale-invariant depth map. The method of the invention is a scale-invariant depth map mapping method for three-dimensional point cloud sets: object depth maps of the same scale are obtained at all depths, they can be processed with existing image algorithms, and the three-dimensional spatial position of the object can be computed from the depth map alone.

Description

A scale-invariant depth map mapping method for three-dimensional images

Technical Field
The present invention relates to a depth map mapping method for three-dimensional images, and in particular to a scale-invariant depth map mapping method for three-dimensional images.
Background Art
In industrial automation, machine vision improves the flexibility and degree of automation of production. In hazardous working environments, or where human vision cannot meet the requirements, machine vision is often used in place of human vision; in high-volume industrial production, inspecting product quality with human vision is inefficient and inaccurate, while machine-vision inspection greatly improves production efficiency and automation. The software, algorithms and applications of two-dimensional vision are by now very mature, whereas the common approach in three-dimensional vision is to acquire a point cloud of the object: the image of a three-dimensional object is a set of points (see Figure 1). When the object sits at different heights, the depth image of the three-dimensional image changes (see Figure 2), and processing algorithms for this situation are largely lacking.
Summary of the Invention
To solve the above problem, the present invention proposes a scale-invariant depth map mapping method for three-dimensional images, i.e. a scale-invariant depth map mapping method for three-dimensional point cloud sets. The method obtains object depth maps of the same scale at different depths, which can be processed with existing image algorithms, and the three-dimensional spatial position of the object can be computed from the depth map alone.
The steps of the scale-invariant depth map mapping method for three-dimensional images of the present invention are as follows:
Step 1. Acquire a three-dimensional point cloud image of the object with a three-dimensional camera, obtaining point clouds of the same object at different heights.
Step 2. Set the parameters required for the mapping: the coordinates of the upper-left corner of the image (x1, y1), the coordinates of the lower-right corner of the image (x2, y2), the pixel width W, Dmax (the depth value corresponding to gray level 255), and Dmin (the depth value corresponding to gray level 0). Here x1 is the X-axis coordinate of the upper-left corner of the image in the point cloud coordinate system and y1 its Y-axis coordinate; x2 is the X-axis coordinate of the lower-right corner in the point cloud coordinate system and y2 its Y-axis coordinate.
Step 3. Express the image depth value as the gray value G:
G = (Dz - Dmin)/(Dmax - Dmin) × 255
where Dz is the Z coordinate of the current point.
Step 4. From the parameters of Step 2, compute the number of rows R and columns C of the image, and generate an image of gray level 0:
R = (x2 - x1)/W
C = (y2 - y1)/W
Step 5. Traverse the points (Dx, Dy, Dz) of the point cloud and compute the gray value of the pixel at the corresponding position (r1, c1):
r1 = (Dx - x1)/W
c1 = (Dy - y1)/W
where Dx, Dy and Dz are the X, Y and Z coordinates of the point, r1 is the row of the corresponding pixel, and c1 is its column.
Step 6. Assign the G value of Step 3 to (r1, c1) of image M for each point in turn, until all points have been traversed; M is the resulting scale-invariant depth map.
The method of the present invention is a scale-invariant two-dimensional depth map mapping method for three-dimensional point cloud sets: it obtains object depth maps of the same scale at different depths, which can be processed with existing image algorithms, and the three-dimensional spatial position of the object can be computed from the depth map.
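Since the map M is an ordinary 8-bit grayscale array, the per-point traversal of Steps 3 to 6 can also be written in vectorized form, which is convenient when handing M to existing image algorithms. The sketch below assumes NumPy and an N×3 point array; the function name and the out-of-window filtering are illustrative additions, as the patent itself only specifies the per-point loop.

```python
import numpy as np

def depth_map_vectorized(points, x1, y1, x2, y2, W, dmin, dmax):
    """Vectorized form of Steps 2-6: map an N x 3 point cloud to a
    scale-invariant 8-bit depth image in one pass."""
    pts = np.asarray(points, dtype=float)
    R = int(round((x2 - x1) / W))            # Step 4: number of rows
    C = int(round((y2 - y1) / W))            # Step 4: number of columns
    M = np.zeros((R, C), dtype=np.uint8)     # all-zero (gray level 0) image

    r = np.rint((pts[:, 0] - x1) / W).astype(int)   # r1 = (Dx - x1)/W
    c = np.rint((pts[:, 1] - y1) / W).astype(int)   # c1 = (Dy - y1)/W
    G = (pts[:, 2] - dmin) / (dmax - dmin) * 255    # Step 3: gray values

    inside = (r >= 0) & (r < R) & (c >= 0) & (c < C)  # drop out-of-window points
    M[r[inside], c[inside]] = np.clip(G[inside], 0, 255).astype(np.uint8)
    return M
```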
Brief Description of the Drawings
Figure 1 is the point cloud of a three-dimensional image; panel (a) is the overall point cloud and panel (b) a local point cloud.
Figure 2 shows the scale change of the same object at different heights in a conventional depth map; panel (a) is the depth image mapped from the point cloud of a workpiece captured from far away, and panel (b) the depth image of a point cloud captured close up.
Figure 3 is a schematic diagram of the parameters required for the mapping, where x1 is the X-axis coordinate of the upper-left corner of the image in the point cloud coordinate system and y1 its Y-axis coordinate; x2 is the X-axis coordinate of the lower-right corner and y2 its Y-axis coordinate; w is the pixel width.
Figure 4 shows the scale-invariant depth maps obtained by the method of the present invention; panel (a) is the scale-invariant depth image mapped from the point cloud of the workpiece captured from far away, and panel (b) the scale-invariant depth image of the point cloud captured close up.
Detailed Description
The invention is described in further detail below with reference to the embodiment and the drawings.
Embodiment:
Point clouds of the same object are acquired at different heights. Comparing the conventional depth images in Figure 2, the scale of the object differs between the two images.
Set the parameters required for the mapping (the units are the same as the point cloud units, here millimeters):

x1 = -300.0
y1 = -300.0
x2 = 400
y2 = 300
W = 1.0
Dmax = -720.0
Dmin = -760.0
Compute the number of rows and columns of the image:
R = (x2 - x1)/W = 700
C = (y2 - y1)/W = 600
Generate an image of gray level 0 with these row and column counts.
Traverse the point cloud; for example, one of the points has the coordinates:
Dx = -70.1
Dy = 2.8
Dz = -741.3
The row and column coordinates of the corresponding pixel are:
r1 = (Dx - x1)/W = 229.9, rounded to 230;
c 1=(D y–y 1)/W=297.2四舍五入取整为297;
The gray value of the corresponding pixel is then:
G = (Dz - Dmin)/(Dmax - Dmin) × 255 = 119.21
The gray value at the position corresponding to each point is computed in this way; traversing the two point clouds in this manner yields the images of Figure 4. Because of the scale invariance, existing vision algorithms such as template matching or the minimum enclosing circle can be used to locate the object, and the three-dimensional coordinates of the object position can be obtained by inverting the above computation, so that a robot or other equipment can grasp or process it.
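The arithmetic of this embodiment, together with the inverse computation of the spatial position mentioned above, can be checked directly by applying the formulas of Steps 3 to 5. In the sketch below, the inverse formulas are an algebraic inversion of those steps (a reconstruction for illustration; the patent states no explicit inverse equations).

```python
# Embodiment parameters (millimeters)
x1, y1, x2, y2 = -300.0, -300.0, 400, 300
W = 1.0
dmax, dmin = -720.0, -760.0

# Step 4: image size
R = (x2 - x1) / W                       # 700 rows
C = (y2 - y1) / W                       # 600 columns

# Sample point from the point cloud
dx, dy, dz = -70.1, 2.8, -741.3

# Step 5: pixel position; Step 3: gray value
r1 = round((dx - x1) / W)               # (Dx - x1)/W = 229.9, rounds to 230
c1 = round((dy - y1) / W)               # (Dy - y1)/W = 302.8, rounds to 303
G = (dz - dmin) / (dmax - dmin) * 255   # 119.2125, i.e. about 119.21

# Inverse mapping: recover the 3D position from the pixel and its gray value
dx_back = r1 * W + x1                   # within one pixel of Dx
dy_back = c1 * W + y1                   # within one pixel of Dy
dz_back = G / 255 * (dmax - dmin) + dmin  # recovers Dz before quantization
```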

Claims (1)

  1. A scale-invariant depth map mapping method for three-dimensional images, the steps of which are as follows:
    Step 1. Acquire a three-dimensional point cloud image of the object with a three-dimensional camera;
    Step 2. Set the parameters required for the mapping:
    the coordinates of the upper-left corner of the image (x1, y1), the coordinates of the lower-right corner of the image (x2, y2), the pixel width W, Dmax being the depth value corresponding to gray level 255 and Dmin the depth value corresponding to gray level 0; where x1 is the X-axis coordinate of the upper-left corner of the image in the point cloud coordinate system and y1 the Y-axis coordinate of the upper-left corner; x2 is the X-axis coordinate of the lower-right corner in the point cloud coordinate system and y2 the Y-axis coordinate of the lower-right corner;
    Step 3. Express the image depth value as the gray value G:
    G = (Dz - Dmin)/(Dmax - Dmin) × 255
    where Dz is the Z coordinate of the current point;
    Step 4. From the parameters of Step 2, compute the number of rows R and columns C of the image:
    R = (x2 - x1)/W
    C = (y2 - y1)/W
    and generate an image M of gray level 0;
    Step 5. Traverse the points (Dx, Dy, Dz) of the point cloud and compute the gray value of the pixel at the corresponding position (r1, c1):
    r1 = (Dx - x1)/W
    c1 = (Dy - y1)/W
    where Dx, Dy and Dz are the X, Y and Z coordinates of the point, r1 is the row of the corresponding pixel and c1 its column;
    Step 6. Assign the G value of Step 3 to (r1, c1) of image M for each point, until all points have been traversed; M is the resulting scale-invariant depth map.
PCT/CN2019/087244 2018-12-27 2019-05-16 A scale-invariant depth map mapping method for three-dimensional images WO2020133888A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811608105.X 2018-12-27
CN201811608105.XA CN109727282A (zh) 2018-12-27 2018-12-27 A scale-invariant depth map mapping method for three-dimensional images

Publications (1)

Publication Number Publication Date
WO2020133888A1 (zh) 2020-07-02

Family

ID=66297310

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087244 WO2020133888A1 (zh) 2018-12-27 2019-05-16 A scale-invariant depth map mapping method for three-dimensional images

Country Status (2)

Country Link
CN (1) CN109727282A (zh)
WO (1) WO2020133888A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882496A (zh) * 2022-04-15 2022-08-09 武汉益模科技股份有限公司 Depth-image-based three-dimensional part similarity computation method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727282A (zh) * 2018-12-27 2019-05-07 南京埃克里得视觉技术有限公司 A scale-invariant depth map mapping method for three-dimensional images
CN112767399B (zh) * 2021-04-07 2021-08-06 高视科技(苏州)有限公司 Semiconductor bonding wire defect detection method, electronic device and storage medium
CN113538547A (zh) * 2021-06-03 2021-10-22 苏州小蜂视觉科技有限公司 Depth processing method for a 3D line laser sensor, and dispensing equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455984A (zh) * 2013-09-02 2013-12-18 清华大学深圳研究生院 Kinect depth image acquisition method and device
US9483960B2 (en) * 2014-09-26 2016-11-01 Xerox Corporation Method and apparatus for dimensional proximity sensing for the visually impaired
CN106780592A (zh) * 2016-06-30 2017-05-31 华南理工大学 Kinect depth reconstruction algorithm based on camera motion and image shading
CN109727282A (zh) * 2018-12-27 2019-05-07 南京埃克里得视觉技术有限公司 A scale-invariant depth map mapping method for three-dimensional images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018039871A1 (zh) * 2016-08-29 2018-03-08 北京清影机器视觉技术有限公司 Method and device for processing three-dimensional visual measurement data


Also Published As

Publication number Publication date
CN109727282A (zh) 2019-05-07

Similar Documents

Publication Publication Date Title
WO2020133888A1 (zh) A scale-invariant depth map mapping method for three-dimensional images
CN105021124B (zh) Depth-map-based method for computing the three-dimensional position and normal vector of planar parts
CN107450885B (zh) Method for solving the coordinate transformation between an industrial robot and a three-dimensional sensor
JP7212236B2 (ja) Robot vision guidance method and device integrating overview vision and local vision
CN108182689A (zh) Three-dimensional recognition and positioning method for plate-shaped workpieces in robotic handling and grinding
CN111897349A (zh) Binocular-vision-based autonomous obstacle avoidance method for an underwater robot
CN107392929B (zh) Intelligent target detection and size measurement method based on a human visual model
CN112419429B (zh) Multi-view calibration method for surface defect detection of large workpieces
KR102634535B1 (ko) Method for recognizing touch teaching points of a workpiece using point cluster analysis
CN110136211A (zh) Workpiece positioning method and system based on active binocular vision
CN104976950B (zh) Device and method for measuring object spatial information, and method for computing the imaging path
Xia et al. Workpieces sorting system based on industrial robot of machine vision
CN104460505A (zh) Relative pose estimation method for industrial robots
CN112288815A (zh) Target die position measurement method, system, storage medium and device
CN113723389A (zh) Post insulator positioning method and device
CN108986216A (zh) 3D drawing method for lidar control software
Dalirani et al. Automatic extrinsic calibration of thermal camera and LiDAR for vehicle sensor setups
WO2020133407A1 (zh) Structured-light-based industrial robot positioning method and device, controller and medium
CN111678511B (zh) Multi-sensor fusion positioning method and system for a robot
Zhang et al. Visual 3d reconstruction system based on rgbd camera
Zhu et al. Target Measurement Method Based on Sparse Disparity for Live Power Lines Maintaining Robot
JP2013254300A (ja) Image processing method
Sukop Implementation of two cameras to robotic cell for recognition of components
CN113487679B (zh) Visual ranging signal processing method for the autofocus system of a laser marking machine
CN113379663B (zh) Spatial positioning method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19903881

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19903881

Country of ref document: EP

Kind code of ref document: A1