WO2022099597A1 - 6D Pose Measurement Method for Mechanical Parts Based on Virtual Contour Feature Points - Google Patents

6D Pose Measurement Method for Mechanical Parts Based on Virtual Contour Feature Points

Info

Publication number
WO2022099597A1
PCT/CN2020/128623 (CN2020128623W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
intersection
template
straight lines
lines
Prior art date
Application number
PCT/CN2020/128623
Other languages
English (en)
French (fr)
Inventor
冯武希
赵昕玥
何再兴
Original Assignee
浙江大学 (Zhejiang University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 (Zhejiang University)
Priority to US17/915,470 (published as US20230135674A1)
Priority to PCT/CN2020/128623 (published as WO2022099597A1)
Publication of WO2022099597A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Definitions

  • The invention relates to the field of computer vision, and in particular to a 6D pose measurement method for mechanical parts based on virtual contour feature points.
  • The common industrial solution is to install a camera on the robotic arm, use it to capture a picture of the part, and recognize the part's 6D pose from the picture.
  • Classified by the hardware used, 6D object pose measurement has two mainstream approaches. The first uses three-dimensional vision to acquire a point cloud of the target object's surface, from which the object's pose is computed.
  • This approach captures relatively complete object information and yields highly accurate pose results.
  • However, because the point cloud must be acquired by laser illumination, the measuring instrument cannot obtain accurate surface point cloud data when the object's surface is strongly reflective, so this approach cannot measure reflective objects.
  • The need to process large amounts of point cloud data also makes it computationally expensive and slow.
  • The second approach measures from two-dimensional images; it is low-cost, computationally light, and fast, but it relies on surface texture and salient feature points. Metal parts common in industry have little surface texture, so it is difficult to apply to them.
  • To address these shortcomings, the present invention proposes a new 6D pose measurement method for mechanical parts.
  • First, a part model is built and a small number of sparse template images are generated. The edge feature lines and ellipses of the part in the real image and the template image are detected with line and ellipse detection algorithms; the lines of the two images are then matched, and intersection points and circle centers are matched on the basis of the matched lines.
  • Finally, the PnP function in OpenCV is used to solve the part's real 6D pose.
  • The present invention proposes a 6D pose measurement method for mechanical parts based on virtual contour feature points, characterized by the following steps:
  • Step 1: Photograph the part twice under different light source conditions; extract line features from each image and fuse them.
  • Step 2: Compute the intersection points of the spatial lines corresponding to the lines identified in the template image.
  • Step 3: For template-image line pairs whose spatial lines intersect, match their intersections with the intersections of the plane lines in the real image.
  • Step 4: Extract the ellipse features in the real image and describe them with respect to the matched lines.
  • Step 5: Extract the ellipse features in the template image and describe them with respect to the matched lines.
  • Step 6: Match the ellipse features of the real image and the template image.
  • Step 7: Match the circle centers in the real image and the template image to generate 2D-2D matched point pairs.
  • Step 8: Use the vertex coordinate file of the CAD model to establish "real image-CAD model" 2D-3D point pairs of line intersections and circle centers, and compute the pose with the PnP algorithm.
  • In step 1 the part is photographed twice under different light source conditions, and line features are extracted from each image and fused.
  • Step 1 begins by arranging the light sources and the camera.
  • For the first photograph, light sources 1 and 3 are turned on.
  • For the second photograph, light sources 2 and 4 are turned on.
  • Lines are extracted from the two pictures separately, and the two recognition results are fused.
  • When the results are fused, duplicate results may appear.
  • They are handled as follows:
  • 1. Extract the outer contour of the part and delete all lines outside it. 2. Search the two result sets for lines with similar slopes and endpoint coordinates. 3. Compute the distance between the midpoints of each line pair found in item 2; if it exceeds 5 pixels, delete the pair; if it is 5 pixels or less, take the midpoints of the two corresponding pairs of endpoints as new endpoints and store them in the line recognition result.
  • Obtaining the intersection points of the spatial lines corresponding to the lines identified in the template image is specifically: traverse all lines identified in the template image in pairs. Each line stores only the 2D coordinates (a1, b1) of its endpoints in the template image and the corresponding spatial line coordinates (x1, y1, z1). Let the two lines to be intersected be L1 and L2.
  • The 3D start and end points of L1 are (x1, y1, z1) and (x2, y2, z2); those of L2 are (x3, y3, z3) and (x4, y4, z4). Whether the two lines intersect is then determined by:
    P = (x1 − x3, y1 − y3, z1 − z3) × (x1 − x4, y1 − y4, z1 − z4) · (x1 − x2, y1 − y2, z1 − z2)
    If P = 0, the two lines are coplanar and the computation proceeds; if P ≠ 0, they are not coplanar and there is no intersection.
    Q = (x1 − x2, y1 − y2, z1 − z2) × (x3 − x4, y3 − y4, z3 − z4)
    If Q = 0, the two lines are parallel and there is no intersection; if Q ≠ 0, they are not parallel and the intersection can be computed.
  • The intersection matching between the template image (for lines whose spatial counterparts intersect) and the plane lines in the real image is specifically: for each group of lines that have an intersection in space, compute the intersection in the real image and in the template image, match the resulting 2D point with the 3D point computed in step 2, and save the pair.
  • The ellipse features in the real image and the template image are extracted and described with respect to the matched lines.
  • Specifically, ellipse features are extracted from the real image and the template image respectively, the distances from each ellipse center to the several nearest lines are computed, and the distances are saved as an array.
  • Matching the ellipse features and circle centers of the real image and the template image is specifically: compare the ellipses identified in the real image and the template image according to their descriptors, and match the centers of two ellipses that satisfy:
    diff = |a − b|
    where a = [a1, a2, …, an], b = [b1, b2, …, bn], and ai and bi are the distances from the centers of ellipse a and ellipse b to the surrounding lines.
  • Using the vertex coordinate file of the CAD model to establish "real image-CAD model" 2D-3D point pairs of line intersections and circle centers and computing the pose with the PnP algorithm means: the vertex coordinate file generated together with the template image (which contains the correspondence between 2D points in the template image and 3D points in the CAD model) is used to match 2D-3D point pairs from the real image to the CAD model, and the pose is computed from these matches with the PnP function.
  • Only two-dimensional information of the scene is required, which an ordinary grayscale camera can provide.
  • Two-dimensional information also occupies little storage and is fast to process.
  • Figure 1 is a rendering of the lighting layout.
  • Figure 2 is a plan view of the lighting arrangement.
  • Figure 3 shows the virtual geometric point matching result.
  • Figure 4 shows the ellipse recognition result in the real image.
  • Figure 5 shows the ellipse recognition result in the template image.
  • Figure 6 shows the circle center matching result.
  • Figure 7 is the rendering overlay diagram.
  • If a qualifying intersection exists, it is saved as a 2D-2D point pair, as shown in Figure 3.
  • For each identified ellipse in the real image and the template image, the distances between its center and the nearby lines are computed and assembled into a vector.
  • Each ellipse in the template image and the real image thus corresponds to one vector.
  • The PnP function in OpenCV is used to solve the part's 6D pose from the 2D coordinates of the real image and the 3D coordinates of the model, yielding the part's position vector and rotation vector. Each result figure is a superimposition of two images: the real photograph and an image rendered from the CAD model according to the computed pose. In the superimposed image, the rendered image and the original almost completely coincide, showing that the computed result is highly accurate.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A six-degree-of-freedom (6D) pose measurement method for mechanical parts based on virtual contour feature points. Photographs are taken under time-multiplexed multi-light-source illumination to raise the recognition success rate of geometric features in the image. After the lines in the real image of the part and in the template image have been matched, the spatial lines corresponding to the matched lines are intersected; when the spatial lines have an intersection, the intersection coordinates of the plane lines in the real image and the template image are computed and saved as matched point pairs. Ellipse features are then detected in the real image and the template image, the two sets of circle centers are matched by the distances from the centers to the matched lines, and successfully matched center point pairs are saved as well. Finally, the 2D-3D coordinate relation of the template image is used to establish the 2D-3D relation of the real image, and the part's pose is solved with the PnP algorithm.

Description

6D Pose Measurement Method for Mechanical Parts Based on Virtual Contour Feature Points
Technical Field
The present invention relates to the field of computer vision, and in particular to a 6D pose measurement method for mechanical parts based on virtual contour feature points.
Background Art
Precise guidance of robotic arms has long been an important part of industrial automation, yet most robots working on industrial production lines today can only move and grasp along predetermined paths and programs. In practice, robots operating in this mode can hardly cope with the increasingly complex applications now found in industry. On the one hand, the accuracy of robot guidance largely depends on whether the part is placed in exactly the right position; on the other hand, a fixed path and program can handle the grasping of only one kind of part, and whenever the grasping target is changed, the arm must be recalibrated and its path replanned. Robots in this mode therefore leave room for improvement in both efficiency and accuracy, which makes research into a reliable method for precisely guiding robotic arms especially important.
The common industrial solution is to mount a camera on the robotic arm, use it to capture an image of the part, and recognize the part's 6D pose from that image. Classified by the hardware used, there are two mainstream approaches to measuring an object's 6D pose. The first uses three-dimensional vision to acquire a point cloud of the target object's surface and computes the pose from it. This approach captures relatively complete object information and produces highly accurate pose results, but because the point cloud must be acquired by laser illumination, the measuring instrument cannot obtain accurate surface point cloud data when the object's surface is strongly reflective, so reflective objects cannot be measured. The need to process large amounts of point cloud data also makes it computationally expensive and slow. The second approach measures from two-dimensional images; it is low-cost, computationally light, and fast, but it depends on texture information on the object's surface and requires salient feature points. Metal parts common in industry have little surface texture, so this approach is difficult to apply to them.
To address these shortcomings, the present invention proposes a new 6D pose measurement method for mechanical parts. First, a part model is built and a small number of sparse template images are generated. The edge feature lines and ellipses of the part in the real image and the template images are detected with line and ellipse detection algorithms, the lines in the real and template images are matched, and intersection points and circle centers are then matched on the basis of the matched lines. Finally, the PnP function in OpenCV is used to solve the part's real 6D pose.
Summary of the Invention
To remedy the shortcomings of the pose estimation methods described above, the present invention proposes a 6D pose measurement method for mechanical parts based on virtual contour feature points, characterized by the following steps:
Step 1: Photograph the part twice under different light source conditions; extract line features from each image and fuse them.
Step 2: Compute the intersection points of the spatial lines corresponding to the lines identified in the template image.
Step 3: For template-image line pairs whose spatial lines intersect, match their intersections with the intersections of the plane lines in the real image.
Step 4: Extract the ellipse features in the real image and describe them with respect to the matched lines.
Step 5: Extract the ellipse features in the template image and describe them with respect to the matched lines.
Step 6: Match the ellipse features of the real image and the template image.
Step 7: Match the circle centers in the real image and the template image to generate 2D-2D matched point pairs.
Step 8: Use the vertex coordinate file of the CAD model to establish "real image-CAD model" 2D-3D point pairs of line intersections and circle centers, and compute the pose with the PnP algorithm.
Preferably, photographing the part twice under different light source conditions and extracting and fusing the line features in step 1 is specifically: arrange the light sources and the camera; turn on light sources 1 and 3 for the first photograph and light sources 2 and 4 for the second; extract lines from each of the two pictures and fuse the two recognition results. Duplicates may appear during fusion; they are handled as follows (an illustrative sketch of the merge is given after the list):
1. Extract the outer contour of the part and delete all lines outside it;
2. Search the two result sets for lines with similar slopes and endpoint coordinates;
3. Compute the distance between the midpoints of each line pair found in item 2. If it exceeds 5 pixels, delete the pair; if it is 5 pixels or less, take the midpoints of the two corresponding pairs of endpoints as new endpoints and store them in the line recognition result.
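The merge rule above can be made concrete with a short sketch. This is a minimal, illustrative Python fragment, not code from the patent: the segment representation ((x1, y1), (x2, y2)) in pixels, the slope and endpoint tolerances, and the helper names (midpoint, angle, fuse_lines) are assumptions; only the 5-pixel midpoint rule comes from the text, and the outer-contour filtering of item 1 is omitted.

```python
import math

def midpoint(p, q):
    """Midpoint of two 2D points."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def angle(seg):
    """Orientation of a segment in [0, pi), so endpoint order does not matter."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def fuse_lines(lines_a, lines_b, angle_tol=0.05, end_tol=10.0, mid_tol=5.0):
    """Fuse two line-detection results, merging or dropping near-duplicates."""
    result, used_b = [], set()
    for la in lines_a:
        partner = None
        for j, lb in enumerate(lines_b):
            if j in used_b:
                continue
            close_ends = all(math.dist(pa, pb) < end_tol for pa, pb in zip(la, lb))
            if abs(angle(la) - angle(lb)) < angle_tol and close_ends:
                partner = j
                break
        if partner is None:
            result.append(la)                 # no duplicate found: keep as-is
            continue
        lb = lines_b[partner]
        used_b.add(partner)
        if math.dist(midpoint(*la), midpoint(*lb)) > mid_tol:
            continue                          # inconsistent duplicate: drop the pair
        # consistent duplicate: endpoint-wise midpoints become the new segment
        result.append(tuple(midpoint(pa, pb) for pa, pb in zip(la, lb)))
    result.extend(lb for j, lb in enumerate(lines_b) if j not in used_b)
    return result
```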
Preferably, obtaining the intersection points of the spatial lines corresponding to the lines identified in the template image in step 2 is specifically: traverse all lines identified in the template image in pairs. Each line stores only the 2D coordinates (a1, b1) of its endpoints in the template image and the corresponding spatial line coordinates (x1, y1, z1). Let the two lines to be intersected be L1 and L2, where the 3D start and end points of L1 are (x1, y1, z1) and (x2, y2, z2), and those of L2 are (x3, y3, z3) and (x4, y4, z4). Whether the two lines intersect can then be computed from:
P = (x1 − x3, y1 − y3, z1 − z3) × (x1 − x4, y1 − y4, z1 − z4) · (x1 − x2, y1 − y2, z1 − z2)
If P = 0, the two lines are coplanar and the computation proceeds to the next step; if P ≠ 0, the two lines are not coplanar and there is no intersection.
Q = (x1 − x2, y1 − y2, z1 − z2) × (x3 − x4, y3 − y4, z3 − z4)
If Q = 0, the two lines are parallel and there is no intersection; if Q ≠ 0, they are not parallel and the intersection can be computed.
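For illustration, the coplanarity test P (a scalar triple product) and the parallelism test Q (a cross product of the direction vectors) translate directly into code. A minimal NumPy sketch; the eps tolerance and the final intersection solve are assumptions added here, since the patent compares exactly to zero and does not spell out the solve:

```python
import numpy as np

def line_intersection_3d(l1, l2, eps=1e-9):
    """Return the intersection of two 3D lines given as ((start), (end)), or None."""
    p1, p2 = np.asarray(l1, dtype=float)
    p3, p4 = np.asarray(l2, dtype=float)
    # P: scalar triple product; zero means the four endpoints are coplanar
    P = np.dot(np.cross(p1 - p3, p1 - p4), p1 - p2)
    if abs(P) > eps:
        return None                      # skew lines: no intersection
    # Q: cross product of the two direction vectors; zero means parallel
    d1, d2 = p2 - p1, p4 - p3
    Q = np.cross(d1, d2)
    if np.linalg.norm(Q) < eps:
        return None                      # parallel lines: no intersection
    # coplanar and non-parallel: solve p1 + t*d1 = p3 + s*d2 for t,
    # using (p3 - p1) x d2 = t * (d1 x d2) componentwise
    num = np.cross(p3 - p1, d2)
    k = int(np.argmax(np.abs(Q)))        # pick the best-conditioned component
    return p1 + (num[k] / Q[k]) * d1
```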
Preferably, the intersection matching in step 3 between the template image (for lines whose spatial counterparts intersect) and the plane lines in the real image is specifically: for each group of lines that have an intersection in space, compute the intersection in the real image and in the template image, match the resulting 2D point with the 3D point computed in step 2, and save the pair.
Preferably, extracting the ellipse features in the real image and the template image and describing them with respect to the matched lines in steps 4 and 5 is specifically: extract ellipse features in the real image and the template image respectively, compute the distances from each ellipse center to the several nearest lines, and save them as an array.
Preferably, matching the ellipse features and circle centers of the real image and the template image in steps 6 and 7 is specifically: compare the ellipses identified in the real image and the template image according to their descriptors, and match the centers of two ellipses that satisfy:
diff = |a − b|
where a = [a1, a2, …, an], b = [b1, b2, …, bn], and ai and bi are the distances from the centers of ellipse a and ellipse b to the surrounding lines.
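A sketch of how this descriptor comparison might look in code, illustrative only: the point-to-line distance, the choice of the k nearest lines, the matching threshold (the embodiment below matches when the norm of the difference is below a set threshold), and all function names are assumptions.

```python
import math
import numpy as np

def point_line_distance(pt, seg):
    """Perpendicular distance from a 2D point to the infinite line through seg."""
    (ax, ay), (bx, by) = seg
    dx, dy = bx - ax, by - ay
    return abs(dx * (pt[1] - ay) - dy * (pt[0] - ax)) / math.hypot(dx, dy)

def describe_ellipse(center, lines, k=4):
    """Descriptor: distances from an ellipse center to its k nearest lines."""
    dists = sorted(point_line_distance(center, seg) for seg in lines)
    return np.array(dists[:k])

def match_centers(real_centers, tmpl_centers, real_lines, tmpl_lines, thresh=3.0):
    """Pair circle centers whose descriptor difference norm is below thresh."""
    pairs = []
    for cr in real_centers:
        a = describe_ellipse(cr, real_lines)
        for ct in tmpl_centers:
            b = describe_ellipse(ct, tmpl_lines)
            if np.linalg.norm(a - b) < thresh:   # diff = |a - b|
                pairs.append((cr, ct))           # 2D-2D center match
                break
    return pairs
```

Sorting the distances is a simplification; since the lines are already matched between the two images, ordering both descriptors by the same matched lines would be the more faithful reading.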
Preferably, using the vertex coordinate file of the CAD model in step 8 to establish "real image-CAD model" 2D-3D point pairs of line intersections and circle centers and computing the pose with the PnP algorithm means: the vertex coordinate file generated together with the template image (which contains the correspondence between 2D points in the template image and 3D points in the CAD model) is used to match 2D-3D point pairs from the real image to the CAD model, and the pose is computed from these matches with the PnP function.
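This final solve maps onto OpenCV's PnP function, which the patent names. A minimal sketch assuming calibrated camera intrinsics K; the solver flag, the zero-distortion default, the reprojection check via cv2.projectPoints, and the function name solve_part_pose are illustrative additions, not from the patent.

```python
import numpy as np
import cv2

def solve_part_pose(pts_2d, pts_3d, K, dist=None):
    """Solve the part's 6D pose from matched 2D (real image) / 3D (CAD) points."""
    pts_2d = np.asarray(pts_2d, np.float64)       # N x 2 intersections and centers
    pts_3d = np.asarray(pts_3d, np.float64)       # N x 3 CAD-model coordinates
    dist = np.zeros(5) if dist is None else dist  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP solve failed")
    # sanity check: reproject the model points and measure the mean pixel error
    proj, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, dist)
    err = float(np.linalg.norm(proj.reshape(-1, 2) - pts_2d, axis=1).mean())
    return rvec, tvec, err    # rotation vector, translation vector, mean error
```

The rotation and translation vectors returned here correspond to the part's rotation vector and position vector described in the embodiment; a low reprojection error is the quantitative counterpart of the near-perfect overlay shown in Figure 7.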
The beneficial effects of the present invention are:
1) Wide applicability: the virtual geometric feature points in the image are fully exploited, and a more accurate geometric pose of the part is computed without relying on the part's texture features.
2) Only two-dimensional information of the scene is needed, which an ordinary grayscale camera can provide; two-dimensional information also occupies little storage and is fast to process.
Brief Description of the Drawings
Figure 1 is a rendering of the lighting layout;
Figure 2 is a plan view of the lighting arrangement;
Figure 3 shows the virtual geometric point matching result;
Figure 4 shows the ellipse recognition result in the real image;
Figure 5 shows the ellipse recognition result in the template image;
Figure 6 shows the circle center matching result;
Figure 7 is the rendering overlay diagram.
Detailed Description of the Embodiments
The present invention is further described below with reference to the drawings and an example.
First, two pictures of the part are taken under different light source conditions; the light source arrangement is shown in Figures 1 and 2. Light sources 1 and 3 are turned on to take the first picture, and light sources 2 and 4 to take the second.
Next, lines are recognized and matched in the two pictures using existing methods.
Then, two successfully matched line pairs are selected at random, and the spatial lines they correspond to are intersected. If the spatial lines have an intersection, the intersection of these two line pairs is also computed in the real image and the template image. If any of the following occurs:
1) the two lines are parallel;
2) the intersection of the two lines lies outside the image;
3) the intersection of the two lines is far from the detected line segments;
then another two random line pairs are selected and the above steps are repeated, until every pair of lines has undergone the intersection operation.
If the two line pairs have a qualifying intersection, the intersection is saved as a 2D-2D point pair, as shown in Figure 3.
Next, ellipse detection is performed on the real image and the template image to identify the ellipses present, as shown in Figures 4 and 5.
Next, for every identified ellipse in the real image and the template image, the distances from its center to the nearby lines are computed and assembled into a vector, so each ellipse in the template image and the real image corresponds to one vector. A vector from the template image and one from the real image are then taken, their difference is computed, and its norm is taken. When the norm is smaller than the set threshold, the 2D coordinates of this pair of circle centers are matched, as shown in Figure 6.
Next, the 2D-3D coordinate correspondences in the vertex file are used to match the 2D coordinates in the real image to the 3D coordinates of the model.
Finally, the PnP function in OpenCV is used to solve the part's 6D pose from the real-image 2D coordinates and the model 3D coordinates, yielding the part's position vector and rotation vector. The computed result, overlaid on a rendering, is shown in Figure 7. Each image in Figure 7 is a superimposition of two pictures: one is the real photograph, and the other is an image rendered from the CAD model according to the computed result. In the superimposed images, the rendered image and the original almost completely coincide, showing that the computation is highly accurate.
Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Anyone of ordinary skill in the art may make various changes and modifications without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

Claims (7)

  1. A 6D pose measurement method for mechanical parts based on virtual contour feature points, characterized by comprising the following steps:
    Step 1: photograph the part twice under different light source conditions, and extract and fuse line features from each image;
    Step 2: compute the intersection points of the spatial lines corresponding to the lines identified in the template image;
    Step 3: for template-image line pairs whose spatial lines intersect, match their intersections with the intersections of the plane lines in the real image;
    Step 4: extract the ellipse features in the real image and describe them with respect to the matched lines;
    Step 5: extract the ellipse features in the template image and describe them with respect to the matched lines;
    Step 6: match the ellipse features of the real image and the template image;
    Step 7: match the circle centers in the real image and the template image to generate 2D-2D matched point pairs;
    Step 8: use the vertex coordinate file of the CAD model to establish "real image-CAD model" 2D-3D point pairs of line intersections and circle centers, and compute the pose with the PnP algorithm.
  2. The 6D pose measurement method for mechanical parts based on virtual contour feature points according to claim 1, wherein photographing the part twice under different light source conditions and extracting and fusing the line features in step 1 is specifically: arrange the light sources and the camera; turn on light sources 1 and 3 for the first photograph and light sources 2 and 4 for the second; extract lines from the two pictures separately and fuse the two recognition results; duplicates that may appear during fusion are handled as follows:
    1. extract the outer contour of the part and delete all lines outside it;
    2. search the two result sets for lines with similar slopes and endpoint coordinates;
    3. compute the distance between the midpoints of each line pair found in item 2; if it exceeds 5 pixels, delete the pair; if it is 5 pixels or less, take the midpoints of the two corresponding pairs of endpoints as new endpoints and store them in the line recognition result.
  3. The 6D pose measurement method for mechanical parts based on virtual contour feature points according to claim 1, wherein obtaining the intersection points of the spatial lines corresponding to the lines identified in the template image in step 2 is specifically: traverse all lines identified in the template image in pairs; each line stores only the 2D coordinates (a1, b1) of its endpoints in the template image and the corresponding spatial line coordinates (x1, y1, z1); let the two lines to be intersected be L1 and L2, where the 3D start and end points of L1 are (x1, y1, z1) and (x2, y2, z2), and those of L2 are (x3, y3, z3) and (x4, y4, z4); whether the two lines intersect is then computed from:
    P = (x1 − x3, y1 − y3, z1 − z3) × (x1 − x4, y1 − y4, z1 − z4) · (x1 − x2, y1 − y2, z1 − z2)
    if P = 0, the two lines are coplanar and the computation proceeds; if P ≠ 0, the two lines are not coplanar and there is no intersection;
    Q = (x1 − x2, y1 − y2, z1 − z2) × (x3 − x4, y3 − y4, z3 − z4)
    if Q = 0, the two lines are parallel and there is no intersection; if Q ≠ 0, they are not parallel and the intersection can be computed.
  4. The 6D pose measurement method for mechanical parts based on virtual contour feature points according to claim 1, wherein the intersection matching in step 3 between the template image, for lines whose spatial counterparts intersect, and the plane lines in the real image is specifically: for each group of lines that have an intersection in space, compute the intersection in the real image and the template image, match the resulting 2D point with the 3D point computed in step 2, and save the pair.
  5. The 6D pose measurement method for mechanical parts based on virtual contour feature points according to claim 1, wherein extracting the ellipse features in the real image and the template image and describing them with respect to the matched lines in steps 4 and 5 is specifically: extract ellipse features in the real image and the template image respectively, compute the distances from each ellipse center to the several nearest lines, and save them as an array.
  6. The 6D pose measurement method for mechanical parts based on virtual contour feature points according to claim 1, wherein matching the ellipse features and circle centers of the real image and the template image in steps 6 and 7 is specifically: compare the ellipses identified in the real image and the template image according to their descriptors, and match the centers of two ellipses that satisfy:
    diff = |a − b|
    where a = [a1, a2, …, an], b = [b1, b2, …, bn], and ai and bi are the distances from the centers of ellipse a and ellipse b to the surrounding lines.
  7. The 6D pose measurement method for mechanical parts based on virtual contour feature points according to claim 1, wherein using the vertex coordinate file of the CAD model in step 8 to establish "real image-CAD model" 2D-3D point pairs of line intersections and circle centers and computing the pose with the PnP algorithm means: matching 2D-3D point pairs from the real image to the CAD model according to the vertex coordinate file generated together with the template image, and computing the pose from the matching result with the PnP function.
PCT/CN2020/128623 2020-11-13 2020-11-13 6D pose measurement method for mechanical parts based on virtual contour feature points WO2022099597A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/915,470 US20230135674A1 (en) 2020-11-13 2020-11-13 6D pose measurement method for mechanical parts based on virtual contour feature points
PCT/CN2020/128623 WO2022099597A1 (zh) 2020-11-13 2020-11-13 6D pose measurement method for mechanical parts based on virtual contour feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/128623 WO2022099597A1 (zh) 2020-11-13 2020-11-13 6D pose measurement method for mechanical parts based on virtual contour feature points

Publications (1)

Publication Number Publication Date
WO2022099597A1 (zh)

Family

ID=81602001

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128623 WO2022099597A1 (zh) 2020-11-13 2020-11-13 6D pose measurement method for mechanical parts based on virtual contour feature points

Country Status (2)

Country Link
US (1) US20230135674A1 (zh)
WO (1) WO2022099597A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509908A (zh) * 2018-03-31 2018-09-07 Tianjin University Real-time pupil diameter measurement method based on binocular stereo vision
CN109886124A (zh) * 2019-01-23 2019-06-14 Zhejiang University Texture-less metal part grasping method based on line-bundle-descriptor image matching
CN109887030A (zh) * 2019-01-23 2019-06-14 Zhejiang University Image pose detection method for texture-less metal parts based on sparse CAD templates
CN110044261A (zh) * 2019-04-22 2019-07-23 Xi'an International University Visual measurement method for holes in arbitrary poses whose axis is not perpendicular to the end face
US20190304134A1 (en) * 2018-03-27 2019-10-03 J. William Mauchly Multiview Estimation of 6D Pose
CN111121655A (zh) * 2019-12-18 2020-05-08 Zhejiang University Visual detection method for the pose and hole diameters of workpieces with coplanar equal-sized holes
CN111311618A (zh) * 2018-12-11 2020-06-19 Changchun University of Technology Arc workpiece matching and positioning method based on high-precision geometric primitive extraction
CN111508021A (zh) * 2020-03-24 2020-08-07 Guangzhou Shiyuan Electronics Co., Ltd. Pose determination method and apparatus, storage medium, and electronic device


Also Published As

Publication number Publication date
US20230135674A1 (en) 2023-05-04

Similar Documents

Publication Publication Date Title
JP4785880B2 (ja) System and method for three-dimensional object recognition
JP5455873B2 (ja) Method for determining the pose of an object in a scene
Mendonca et al. Epipolar geometry from profiles under circular motion
CN103678754B (zh) Information processing apparatus and information processing method
Azad et al. Stereo-based 6d object localization for grasping with humanoid robot systems
JP5539138B2 (ja) System and method for determining the pose of an object in a scene
CN107588721A (zh) Method and system for measuring multiple dimensions of parts based on binocular vision
JP2011174879A (ja) Position and orientation estimation apparatus and method
KR900002509B1 (ko) Image processing apparatus
TWI607814B (zh) Laser on-the-fly marking system with real-time three-dimensional modeling and method thereof
CN113393524A (zh) Target pose estimation method combining deep learning and contour point cloud reconstruction
WO2022099597A1 (zh) 6D pose measurement method for mechanical parts based on virtual contour feature points
Doubek et al. Reliable 3d reconstruction from a few catadioptric images
JPH0778252A (ja) Object recognition method
JP7365567B2 (ja) Measurement system, measurement apparatus, measurement method, and measurement program
Liu et al. Target identification and location algorithm based on SURF-BRISK operator
Peng et al. Real time and robust 6D pose estimation of RGBD data for robotic bin picking
Du et al. Optimization of stereo vision depth estimation using edge-based disparity map
Li et al. Pose estimation of metal workpieces based on RPM-Net for robot grasping from point cloud
JP5938201B2 (ja) Position and orientation measurement apparatus, processing method, and program
Maruyama et al. Model-based 3D object localization using occluding contours
CN112800582B (zh) Method for generating simulated laser lines for a structured-light vision sensor
CN113345029B (zh) Large-field-of-view reference plane calibration method for optical deflection three-dimensional measurement
Wan et al. A high precision stereo-vision robotic system for large-sized object measurement and grasping in non-structured environment
Sheng et al. Image segmentation and object measurement based on stereo vision

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20961154

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in the European phase

Ref document number: 20961154

Country of ref document: EP

Kind code of ref document: A1