CN114440776A - A method and system for automatic displacement measurement based on machine vision


Info

Publication number: CN114440776A
Application number: CN202210104210.XA
Authority: CN (China)
Prior art keywords: positioning, edge, coordinates, positioning point, image plane
Other languages: Chinese (zh)
Other versions: CN114440776B
Inventors: 李得睿, 程斌
Current assignees: Shanghai Jiaotu Technology Co ltd; Shanghai Jiao Tong University
Original assignees: Shanghai Jiaotu Technology Co ltd; Shanghai Jiao Tong University
Application filed by Shanghai Jiaotu Technology Co ltd and Shanghai Jiao Tong University
Priority to CN202210104210.XA; published as CN114440776A, granted as CN114440776B
Legal status: Granted; currently Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/03: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method and system for automatic displacement measurement based on machine vision. The method comprises the following steps: calibrating a positioning point group of a target pattern; identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the group; and establishing a two-dimensional or three-dimensional coordinate system of the object plane according to those image plane coordinates, so that the image plane coordinates of an image to be measured can be transformed into object plane coordinates. The invention is self-calibrating: it automatically constructs the quantitative conversion from image plane coordinates to real object plane coordinates at any camera shooting angle, and the target can be tracked by various machine vision methods, thereby realizing automatic machine-vision displacement measurement without human intervention.

Description

A Method and System for Automatic Displacement Measurement Based on Machine Vision

Technical Field

The present invention relates to the technical field of machine vision, and in particular to a method and system for automatic displacement measurement based on machine vision.

Background

Displacement measurement is one of the measurement tasks most frequently encountered in both academic and engineering work. With the development and popularization of machine vision technology, academia has gradually combined machine vision with displacement measurement to serve engineering practice. Compared with traditional methods, machine-vision displacement measurement has clear advantages: it is non-contact, low-cost, high-precision, and capable of real-time measurement. It has therefore long been a research hotspot, and many practical engineering cases and commercial products based on machine-vision displacement measurement already exist at home and abroad.

Although many application cases of machine-vision displacement measurement have been reported, this advanced measurement technique is still far from large-scale adoption. One important reason is the problem of extrinsic calibration. Camera calibration involves two classes of problems. The first concerns physical quantities of the camera itself, such as the camera's internal optical parameters, or the relative pose between the cameras of a binocular (stereo) camera group; these can collectively be called camera or camera-group intrinsic calibration. Mature solutions exist for this class of problems, such as Zhang Zhengyou's calibration method.

The second class is the calibration of the conversion from pixel scale to real physical scale. When displacement is measured with machine vision, the raw displacement results are only in pixel units, whereas actual engineering requires displacement results in real physical units (such as meters or millimeters). The conversion from pixel scale to physical scale must therefore be calibrated, so that pixel displacements can be converted into physically meaningful displacements; this is extrinsic calibration. However, no unified solution yet exists for this class of extrinsic calibration problems, and the biggest shortcoming shared by existing solutions is that they require human intervention.

Consequently, current machine-vision displacement measurement technology cannot achieve truly automatic displacement measurement; a calibration process requiring human intervention greatly raises the application barrier and measurement difficulty of the technology. In practice, conditions are complex and variable: on an engineering site, for example, the skill of the operators varies widely, so a calibration procedure requiring human intervention is very likely to affect the measurement results and greatly reduce their credibility. A self-calibration method is therefore of great significance for automatic machine-vision displacement measurement.

Summary of the Invention

Aiming at the problems in the prior art described above, the present invention proposes a method and system for automatic displacement measurement based on machine vision, so as to solve the problem that human intervention is required in existing extrinsic calibration.

To solve the above technical problems, the present invention is realized through the following technical solutions.

According to a first aspect of the present invention, there is provided an automatic displacement measurement method based on machine vision, comprising:

S11: calibrating a positioning point group of a target pattern;

S12: identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the positioning point group;

S13: establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and then transforming the image plane coordinates of the image to be measured into object plane coordinates.

Preferably, S11 includes:

S111: calibrating the center of the target pattern to obtain a center positioning point;

S112: calibrating the edge of the target pattern to obtain a plurality of edge positioning points, where an edge positioning point may be a point close to the boundary of the target pattern or a point on that boundary.

Preferably, S12 includes:

S121: identifying and extracting the center positioning point of the target pattern to obtain the image plane coordinates of the center positioning point;

S122: identifying and extracting the plurality of edge positioning points of the target pattern to obtain their image plane coordinates.

Preferably, before S121 the method further includes: binarizing the target pattern.

Preferably, the center positioning point in S11 is the center of a center positioning circle and each edge positioning point is the center of an edge positioning circle; correspondingly,

S121 includes:

S1211: performing edge detection on the target pattern with a boundary extraction method, and extracting an edge topology table carrying edge level information;

S1212: locking the boundary of the center positioning circle by looking up the edge level information in the edge topology table;

S1213: processing the boundary of the center positioning circle with a least-squares fit to obtain the center coordinates of the center positioning circle, which are the image plane coordinates of the center positioning point;

S122 includes:

S1221: performing ellipse detection on the target pattern with a boundary extraction method, and locking the boundaries of the plurality of edge positioning points;

S1222: processing the boundaries of the plurality of edge positioning points with a least-squares fit to obtain their circle-center coordinates, which are the image plane coordinates of the edge positioning points.

Preferably, there are at least two center positioning circles in S111; the center positioning circles are concentric and have different radii; correspondingly,

S1213 becomes: processing the boundaries of the center positioning circles with a least-squares fit to obtain their center coordinates, and averaging the center coordinates of the at least two center positioning circles to obtain the image plane coordinates of the center positioning point.

Preferably, establishing the two-dimensional coordinate system of the object plane in S13 and transforming the image plane coordinates of the image to be measured into object plane coordinates includes:

S1311: using the positioning points in the positioning point group to solve the perspective transformation matrix that maps the image plane to the two-dimensional coordinate system of the object plane;

S1312: using the perspective transformation matrix to transform the image plane coordinates of the image to be measured into the two-dimensional coordinates of the object plane, the transformation formula being:

    [x, y, s]^T = W · P,    P^ = [x/s, y/s]^T

where x/s is the abscissa and y/s the ordinate in the two-dimensional object-plane coordinate system, s is a scale factor, W is the perspective transformation matrix, P is the homogeneous coordinate vector of an image plane point, and P^ is the coordinate vector of the corresponding object plane point.

Preferably, the target pattern in S11 includes: a left-view target pattern obtained by one camera of a binocular camera group and a right-view target pattern obtained by the other camera;

the image plane coordinates of each positioning point obtained in S12 then include: the image plane coordinates in the left view and the image plane coordinates in the right view;

and establishing the three-dimensional coordinate system of the object plane in S13 and transforming the image plane coordinates of the image to be measured into object plane coordinates includes:

S1321: according to the intrinsic calibration results of the binocular camera group, reconstructing the left-view and right-view image plane coordinates of each positioning point to obtain the three-dimensional coordinates P_cam of each positioning point in the binocular camera group coordinate system;

S1322: using the three-dimensional coordinates P_cam of the positioning points in the binocular camera group coordinate system, obtaining the object-plane equation of the plane containing the target pattern by a least-squares fit;

S1323: building the x and y axes on the object plane and the z axis along the object-plane normal, and taking the three-dimensional coordinates of the center positioning point P_c of the positioning point group as the origin, thereby establishing the three-dimensional object-plane coordinate system;

S1324: letting R be the matrix formed by the normalized coordinates of the three axes of the object-plane coordinate system expressed in the binocular camera group coordinate system, and taking P_c as the translation vector, converting the three-dimensional coordinates P_cam of the image to be measured in the binocular camera group coordinate system into the coordinates P_world in the three-dimensional object-plane coordinate system, the conversion formula being:

    P_world = R^(-1) · (P_cam - P_c)

where P_c = [x_c, y_c, z_c]^T is the three-dimensional coordinate of the center positioning point P_c.

According to a second aspect of the present invention, there is provided an automatic displacement measurement system based on machine vision, comprising: a positioning point group calibration module, an image plane coordinate acquisition module, and an object-plane coordinate system acquisition module; wherein

the positioning point group calibration module is used for calibrating the positioning point group of the target pattern;

the image plane coordinate acquisition module is used for identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the positioning point group;

the object-plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and then transforming the image plane coordinates of the image to be measured into object plane coordinates.

Preferably, the image plane coordinate acquisition module includes: an edge detection unit, a boundary locking unit for the center positioning circle, an image plane coordinate obtaining unit for the center positioning point, a boundary locking unit for the edge positioning circles, and an image plane coordinate obtaining unit for the edge positioning points; wherein

the positioning point group in the positioning point group calibration module includes a center positioning point and edge positioning points, the center positioning point being the center of the center positioning circle and each edge positioning point being the center of an edge positioning circle;

the edge detection unit is used for performing edge detection on the target pattern with a boundary extraction method and extracting an edge topology table carrying edge level information;

the boundary locking unit for the center positioning circle is used for locking the boundary of the center positioning circle by looking up the edge level information in the edge topology table;

the image plane coordinate obtaining unit for the center positioning point is used for processing the boundary of the center positioning circle with a least-squares fit to obtain the center coordinates of the center positioning circle, which are the image plane coordinates of the center positioning point;

the boundary locking unit for the edge positioning circles is used for performing ellipse detection on the target pattern with a boundary extraction method and locking the boundaries of the plurality of edge positioning points;

the image plane coordinate obtaining unit for the edge positioning points is used for processing the boundaries of the plurality of edge positioning points with a least-squares fit to obtain their circle-center coordinates, which are the image plane coordinates of the edge positioning points.

Compared with the prior art, the present invention has the following advantages:

With the method and system for automatic displacement measurement based on machine vision provided by the present invention, the target pattern is calibrated and the object-plane coordinate system is obtained from the calibrated target pattern, so that the image plane coordinates of the image to be measured can be transformed into object plane coordinates. Automatic extrinsic calibration is thus realized without human involvement, avoiding the influence that a calibration procedure requiring human intervention would have on the measurement results.

Description of the Drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of an automatic displacement measurement method based on machine vision according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of the positioning point group of a self-calibrating target pattern according to a preferred embodiment of the present invention;

Fig. 3 shows the self-calibration result in a machine-vision-based two-dimensional displacement measurement scenario according to a preferred embodiment of the present invention;

Fig. 4 shows the self-calibration result in a machine-vision-based three-dimensional displacement measurement scenario according to a preferred embodiment of the present invention.

Detailed Description

The embodiments of the present invention are described in detail below. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and specific operating procedures are given, but the protection scope of the present invention is not limited to the following embodiments.

The terms "first", "second", "third", "fourth", etc. (if present) in the description, the claims, and the above drawings are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the invention described here can be practiced in orders other than those illustrated or described. Furthermore, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.

In one embodiment, the present invention provides an automatic displacement measurement method based on machine vision; please refer to Fig. 1. The method includes:

S11: calibrating a positioning point group of a target pattern;

S12: identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the positioning point group;

S13: establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of each positioning point, and then transforming the image plane coordinates of the image to be measured into object plane coordinates.

The above embodiment of the present invention requires no human involvement, avoiding the influence that a calibration procedure requiring human intervention would have on the measurement results.

Preferably, in one embodiment, S11 may further include:

S111: calibrating the center of the target pattern to obtain a center positioning point;

S112: calibrating the edge of the target pattern to obtain a plurality of edge positioning points.

Preferably, in one embodiment, S12 includes:

S121: identifying and extracting the center positioning point of the target pattern to obtain its image plane coordinates;

S122: identifying and extracting the plurality of edge positioning points of the target pattern to obtain their image plane coordinates.

Preferably, in one embodiment, before S121 the method further includes binarizing the target pattern, which makes the boundary of the target pattern more prominent and the subsequent extraction of the center and edge positioning points more accurate.
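
The binarization step can be sketched with a plain-NumPy Otsu threshold. This is a minimal stand-in for whatever thresholding routine an implementation actually uses, and the synthetic disc image below is illustrative only, not from the patent:

```python
import numpy as np

def otsu_binarize(gray):
    """Binarize an 8-bit grayscale image with Otsu's threshold so that
    the target pattern's boundary stands out for later edge detection."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = 0.0   # cumulative pixel count of the "dark" class
    sum0 = 0.0  # cumulative intensity sum of the "dark" class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return (gray > best_t).astype(np.uint8) * 255

# synthetic "target": a dark disc on a bright background
img = np.full((64, 64), 200, dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2] = 30
binary = otsu_binarize(img)
```

After thresholding, the disc is uniformly black and the background uniformly white, which is exactly the state the boundary extraction in S1211 expects.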

Preferably, in one embodiment, the center positioning point in S11 is the center of a center positioning circle and each edge positioning point is the center of an edge positioning circle; correspondingly,

S121 includes:

S1211: performing edge detection on the target pattern with a boundary extraction method, and extracting an edge topology table carrying edge level information;

the edge level information may be, for example: the center positioning circle as the first level and the edge positioning circles as the second level; other level divisions may also be used;

S1212: locking the boundary of the center positioning circle by looking up the edge level information in the edge topology table;

S1213: processing the boundary of the center positioning circle with a least-squares fit to obtain the center coordinates of the center positioning circle, which are the image plane coordinates of the center positioning point;

S122 includes:

S1221: performing ellipse detection on the target pattern with a boundary extraction method, and locking the boundaries of the plurality of edge positioning points;

the target pattern itself is circular, so when the camera faces the target squarely the target appears circular in the camera frame, but when the camera views the target obliquely the target pattern appears elliptical; since ellipse detection handles both circles and ellipses, using ellipse detection makes the detection results more accurate;

S1222: processing the boundaries of the plurality of edge positioning points with a least-squares fit to obtain their circle-center coordinates, which are the image plane coordinates of the edge positioning points.
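
The least-squares fit of a locked boundary to recover a circle center, as in S1213 and S1222, can be sketched with the algebraic (Kasa) circle fit. The patent does not specify the exact fitting formulation, and the sample boundary points below are synthetic:

```python
import numpy as np

def fit_circle_lsq(pts):
    """Algebraic least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F),
    then recover center (-D/2, -E/2) and radius."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), r

# noisy samples of a boundary centered at (120, 80) with radius 25
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([120 + 25 * np.cos(t), 80 + 25 * np.sin(t)])
pts += rng.normal(scale=0.2, size=pts.shape)
center, radius = fit_circle_lsq(pts)
```

The recovered center is the sub-pixel image plane coordinate of the corresponding positioning point.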

Preferably, in one embodiment, there are at least two center positioning circles in S111; they are concentric and have different radii. Referring to Fig. 2, the center positioning circle is illustrated with three concentric circles as an example; in other embodiments, the number of center positioning circles may also be two, or more than three.

Correspondingly, S1213 becomes: processing the boundaries of the center positioning circles with a least-squares fit to obtain their center coordinates, and averaging the center coordinates of the at least two center positioning circles to obtain the image plane coordinates of the center positioning point.
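
Averaging the fitted centers of several concentric rings, as described here for the center positioning point, might look like the sketch below. The three ring radii and the noise level are assumptions for the toy data, not values from the patent:

```python
import numpy as np

def circle_center_lsq(pts):
    """Least-squares circle center from boundary points (algebraic fit)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, _), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([-D / 2.0, -E / 2.0])

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
true_center = np.array([50.0, 40.0])

# fit each concentric ring separately, then average the fitted centers
centers = []
for r in (10.0, 20.0, 30.0):
    ring = true_center + np.column_stack([r * np.cos(t), r * np.sin(t)])
    ring += rng.normal(scale=0.3, size=ring.shape)
    centers.append(circle_center_lsq(ring))
center_point = np.mean(centers, axis=0)  # image plane coords of center point
```

Averaging over several rings suppresses the per-ring fitting noise, which is the motivation for using multiple concentric circles.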

In a two-dimensional displacement measurement scenario, the quantitative conversion in S13 from image plane coordinates to object plane coordinates is the homography that maps the image plane coordinate system to some two-dimensional object-plane coordinate system in space.

In one embodiment, establishing the two-dimensional coordinate system of the object plane in S13 and transforming the image plane coordinates of the image to be measured into object plane coordinates includes:

S1311: using the positioning points in the positioning point group to solve the perspective transformation matrix W that maps the image plane to the two-dimensional coordinate system of the object plane;

S1312: using the perspective transformation matrix to transform the image plane coordinates of the image to be measured into the two-dimensional coordinates of the object plane, the transformation formula being:

    [x, y, s]^T = W · P,    P^ = [x/s, y/s]^T

where x/s is the abscissa and y/s the ordinate in the two-dimensional object-plane coordinate system, s is a scale factor, P is an image plane point in homogeneous coordinates, and P^ is the corresponding object plane point.
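
S1311 and S1312 can be sketched as follows: W is solved from point correspondences by the direct linear transform (one standard way to estimate a homography, assumed here since the patent does not name a solver), and applying W ends with the division by the scale factor s. The anchor-point correspondences below are hypothetical, chosen as a tilted 100 mm square:

```python
import numpy as np

def solve_homography(img_pts, obj_pts):
    """Direct linear transform: solve the 3x3 matrix W mapping image-plane
    points to object-plane points from >= 4 correspondences (up to scale)."""
    rows = []
    for (u, v), (x, y) in zip(img_pts, obj_pts):
        rows.append([u, v, 1, 0, 0, 0, -u * x, -v * x, -x])
        rows.append([0, 0, 0, u, v, 1, -u * y, -v * y, -y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)  # null-space vector = flattened W

def apply_homography(W, pt):
    """[x, y, s]^T = W * [u, v, 1]^T; object coords are (x/s, y/s)."""
    x, y, s = W @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / s, y / s])

# hypothetical correspondences: image corners of a tilted 100 mm square
img_pts = [(10.0, 12.0), (90.0, 20.0), (85.0, 95.0), (15.0, 88.0)]
obj_pts = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
W = solve_homography(img_pts, obj_pts)
mapped = apply_homography(W, (10.0, 12.0))
```

Once W is fixed by the positioning points, every tracked pixel coordinate in the image to be measured can be pushed through `apply_homography` to get physical object-plane coordinates.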

In a three-dimensional displacement measurement scenario, the quantitative conversion in S13 from image plane coordinates to object plane coordinates is the linear transformation from the binocular camera group coordinate system to some three-dimensional object-plane coordinate system in space; this transformation consists of a corresponding translation matrix and rotation matrix.

一实施例中,S11中的靶标图案包括:通过双目相机组的其中一相机获得的左视图靶向图案和另一相机获得的右视图靶标图案;In one embodiment, the target pattern in S11 includes: a left-view target pattern obtained by one of the cameras of the binocular camera group and a right-view target pattern obtained by another camera;

S12中得到的定位点群中的各定位点的像面坐标分别包括:左视图的像面坐标和右视图的像面坐标;The image plane coordinates of each positioning point in the positioning point group obtained in S12 respectively include: the image plane coordinates of the left view and the image plane coordinates of the right view;

establishing the three-dimensional coordinate system of the object plane in S13 and then transforming the image plane coordinates of the image to be measured into object plane coordinates includes:

S1321: according to the intrinsic calibration results of the binocular camera set, reconstruct the left-view and right-view image plane coordinates of each positioning point to obtain the three-dimensional coordinate result P_cam of each positioning point in the binocular camera set coordinate system;

S1322: use the three-dimensional coordinate results P_cam of the positioning points in the binocular camera set coordinate system to solve the object plane equation of the plane containing the target pattern;

S1323: place the x and y axes in the object plane and the z axis along the object plane normal, take the three-dimensional coordinates of the center positioning point P_c of the positioning point group as the origin, and thereby establish the three-dimensional coordinate system of the object plane;

S1324: let R be the matrix formed by the normalized three-axis coordinates of the object plane three-dimensional coordinate system expressed in the binocular camera set coordinate system, and take P_c as the translation vector; the coordinate transformation converts the three-dimensional coordinate result P_cam of the image to be measured in the binocular camera set coordinate system into the coordinate result P_world in the object plane three-dimensional coordinate system:

$$P_{world} = R^{-1}\left(P_{cam} - P_c\right)$$

where $P_c = [x_c,\; y_c,\; z_c]^T$ is the three-dimensional coordinate of the center positioning point P_c; since the columns of R are orthonormal, $R^{-1} = R^T$.
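Steps S1322 to S1324 can be sketched as follows, on hypothetical camera-frame coordinates. The choice of the in-plane x axis (here, from P_c toward P_1) is an assumed convention that the text does not fix; only the z axis (the plane normal) and the origin (P_c) are prescribed.

```python
import numpy as np

# Hypothetical reconstructed coordinates (mm) of the five positioning
# points in the binocular camera set frame: center point Pc first.
P_cam = np.array([[120.0,  80.0, 1500.0],
                  [ 70.0,  30.0, 1490.0],
                  [170.0,  30.0, 1510.0],
                  [170.0, 130.0, 1510.0],
                  [ 70.0, 130.0, 1490.0]])
Pc = P_cam[0]

# S1322: least-squares plane through the points; the direction of smallest
# variance is the unit plane normal, used as the z axis.
_, _, vt = np.linalg.svd(P_cam - P_cam.mean(axis=0))
z = vt[-1]

# S1323: x axis along the in-plane direction from Pc toward P1 (an assumed
# convention), y axis completing a right-handed frame, origin at Pc.
x = P_cam[1] - Pc
x -= x.dot(z) * z                 # project onto the plane
x /= np.linalg.norm(x)
y = np.cross(z, x)
R = np.column_stack([x, y, z])    # columns: object axes in the camera frame

# S1324: P_world = R^-1 (P_cam - Pc); R is orthonormal, so R^-1 = R^T.
def to_world(p):
    return R.T @ (np.asarray(p) - Pc)

print(np.round(to_world(P_cam[1]), 3))   # lies in the object plane (z = 0)
```

Because the positioning points are coplanar, every transformed point has a zero z component, which is a quick sanity check on the recovered frame.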

The self-calibration method for machine-vision-based displacement measurement of the above embodiments is described below with a specific example; the target pattern, the number of positioning points, and the parameters are not limited to this example.

Referring to FIG. 2, the target pattern in this example is square, with three center positioning circles and four edge positioning circles. In other embodiments the edge positioning circles need not number four, nor occupy the four positions shown, as long as they suffice to locate the plane containing the target pattern.

S121 includes:

S1211: perform edge detection on the target pattern with a boundary extraction method and extract an edge topology table carrying edge level information. Taking FIG. 2 as the example, edge detection yields the edge shapes of the three concentric circles and the four corner circles; the edge levels can be: the innermost concentric circle edge is level one, the second innermost concentric circle edge is level two, and the outermost concentric circle edge together with the four corner edges is level three;

S1212: lock the boundaries of the three center positioning circles by looking up the edge level information in the edge topology table;

S1213: fit the boundaries of the three center positioning circles by least squares to obtain three ellipse center coordinates, and take their mean as the image plane coordinates P_c of the center positioning point;
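The least-squares fit of S1213 can be illustrated with the algebraic Kåsa circle fit, one common least-squares formulation (the text does not name a specific fitting algorithm); the boundary pixels below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_points(cx, cy, r, n=200, noise=0.05):
    """Synthetic edge pixels of one positioning-circle boundary."""
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    pts = np.c_[cx + r * np.cos(t), cy + r * np.sin(t)]
    return pts + rng.normal(0.0, noise, (n, 2))

def fit_center(pts):
    """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2ax + 2by + c."""
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2]                     # fitted center (a, b)

# Three concentric boundary rings (radii 8, 16, 24 px) around an assumed
# subpixel center (400.5, 300.25); S1213 averages the three fitted centers.
centers = [fit_center(ring_points(400.5, 300.25, r)) for r in (8.0, 16.0, 24.0)]
Pc = np.mean(centers, axis=0)
print(np.round(Pc, 3))
```

Averaging the three concentric centers suppresses per-ring fitting noise, which is the stated reason for using more than one center positioning circle.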

S122 includes:

S1221: perform ellipse detection on the target pattern with the boundary extraction method and collect all ellipse boundaries. In a real measurement the target is detected inside a full captured image, so ellipse detection returns every elliptical feature boundary in the scene, and the four corner circles must be screened out of them. The screening rule is: the four ellipse boundaries closest to the center positioning point are the ellipse boundaries of the four edge positioning points;

S1222: fit the ellipse boundaries of the four edge positioning points by least squares to obtain the center coordinates of the four edge positioning points: P_1, P_2, P_3, and P_4.
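The screening rule of S1221 reduces to a nearest-neighbor selection around P_c; a sketch with hypothetical detection results:

```python
import numpy as np

# Hypothetical centers returned by scene-wide ellipse detection: the four
# true corner circles of the target plus two spurious elliptical features.
candidates = np.array([[340.0, 240.0], [460.0, 238.0],
                       [462.0, 362.0], [338.0, 360.0],
                       [ 90.0, 500.0], [720.0,  60.0]])
Pc = np.array([400.0, 300.0])    # center positioning point (image plane)

# S1221 screening rule: the four candidates nearest Pc are the edge points.
dist = np.linalg.norm(candidates - Pc, axis=1)
edge_pts = candidates[np.argsort(dist)[:4]]
print(edge_pts)
```

The rule works because the corner circles sit inside the target while spurious elliptical features in the scene generally lie much farther from the already-identified center point.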

In the two-dimensional displacement measurement scenario, S13 specifically includes:

S1311: use the five positioning points P_c, P_1, P_2, P_3, and P_4 to solve the perspective transformation matrix W mapping the image plane to the object plane;

S1312: for the pixel-coordinate results of the image to be measured obtained in each test, transform the image plane coordinates into object plane coordinates with the perspective transformation matrix W, according to:

$$\begin{bmatrix} x \\ y \\ s \end{bmatrix} = W \cdot P, \qquad \hat{P} = \begin{bmatrix} x/s \\ y/s \end{bmatrix}$$

where x/s is the abscissa and y/s the ordinate of the object plane point in the two-dimensional coordinate system of the object plane, s is a scale coefficient, W is the perspective transformation matrix, P is the homogeneous coordinate vector of the image plane point, and P̂ is the coordinate vector of the object plane point.

FIG. 3 shows the object plane coordinate system obtained by self-calibration with the above example in the two-dimensional displacement measurement scenario; the x and y axes are the two axes of that system.

In the three-dimensional displacement measurement scenario, S13 specifically includes:

S1321: take the coordinates of the five positioning points P_c, P_1, P_2, P_3, and P_4 identified separately in the left and right views of the binocular camera set and, according to the intrinsic calibration results of the set, reconstruct the three-dimensional coordinate results P_cam of the five positioning points in the binocular camera set coordinate system;

S1322: use the three-dimensional coordinate results P_cam of the five positioning points in the binocular camera set coordinate system to solve the object plane equation of the plane containing the target pattern;

S1323: place the x and y axes in the object plane and the z axis along the object plane normal, take the three-dimensional coordinates of the center positioning point P_c of the positioning point group as the origin, and thereby establish the three-dimensional coordinate system of the object plane;

S1324: let R be the matrix formed by the normalized three-axis coordinates of the object plane three-dimensional coordinate system expressed in the binocular camera set coordinate system, and take P_c as the translation vector; the coordinate transformation converts the three-dimensional coordinate result P_cam of the image to be measured in the binocular camera set coordinate system into the coordinate result P_world in the object plane three-dimensional coordinate system:

$$P_{world} = R^{-1}\left(P_{cam} - P_c\right)$$

where $P_c = [x_c,\; y_c,\; z_c]^T$ is the three-dimensional coordinate of the center positioning point P_c; since the columns of R are orthonormal, $R^{-1} = R^T$.
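The reconstruction in S1321 depends on the rig's calibration. As one illustration (not the patent's implementation), linear DLT triangulation on a hypothetical rectified rig with assumed intrinsics and baseline:

```python
import numpy as np

# Hypothetical rectified binocular rig: shared intrinsics K, right camera
# offset 120 mm along +x (assumed values, not from the patent).
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])
P_L = K @ np.c_[np.eye(3), np.zeros(3)]
P_R = K @ np.c_[np.eye(3), np.array([-120.0, 0.0, 0.0])]

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(uv_l, uv_r):
    """Linear (DLT) triangulation of one left/right correspondence."""
    A = np.array([uv_l[0] * P_L[2] - P_L[0],
                  uv_l[1] * P_L[2] - P_L[1],
                  uv_r[0] * P_R[2] - P_R[0],
                  uv_r[1] * P_R[2] - P_R[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Round-trip one positioning point: project to both views, reconstruct.
X_true = np.array([50.0, -30.0, 1500.0])
X_rec = triangulate(project(P_L, X_true), project(P_R, X_true))
print(np.round(X_rec, 6))
```

Applied to the five positioning points, this yields the P_cam inputs that S1322 to S1324 consume.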

FIG. 4 shows the object plane coordinate system obtained by self-calibration with the above example in the three-dimensional displacement measurement scenario; the x, y, and z axes are the three axes of that system.

In one embodiment, a machine-vision-based automatic displacement measurement system is also provided, comprising: a positioning point group calibration module, an image plane coordinate acquisition module, and an object plane coordinate system acquisition module; wherein,

the positioning point group calibration module is used to calibrate the positioning point group of the target pattern;

the image plane coordinate acquisition module is used to identify and extract the positioning point group of the target pattern and obtain the image plane coordinates of each positioning point in the group;

the object plane coordinate system acquisition module is used to establish the two-dimensional or three-dimensional coordinate system of the object plane from the image plane coordinates of the positioning points and then transform the image plane coordinates of the image to be measured into object plane coordinates.

In one embodiment, the positioning point group calibration module includes: a center positioning point calibration unit and an edge positioning point calibration unit; wherein,

the center positioning point calibration unit is used to calibrate the center of the target pattern to obtain the center positioning point;

the edge positioning point calibration unit is used to calibrate the edges of the target pattern to obtain a plurality of edge positioning points.

In one embodiment, the image plane coordinate acquisition module includes: an edge detection unit, a boundary locking unit for the center positioning circle, an image plane coordinate obtaining unit for the center positioning point, a boundary locking unit for the edge positioning circles, and an image plane coordinate obtaining unit for the edge positioning points; wherein,

the positioning point group handled by the positioning point group calibration module includes: a center positioning point and edge positioning points, the center positioning point being the center of the center positioning circle and each edge positioning point being the center of an edge positioning circle;

the edge detection unit is used to perform edge detection on the target pattern with a boundary extraction method and extract an edge topology table carrying edge level information;

the boundary locking unit for the center positioning circle is used to lock the boundary of the center positioning circle by looking up the edge level information in the edge topology table;

the image plane coordinate obtaining unit for the center positioning point is used to fit the boundary of the center positioning circle by least squares to obtain the center coordinates of the circle, which are the image plane coordinates of the center positioning point;

the boundary locking unit for the edge positioning circles is used to perform ellipse detection on the target pattern with the boundary extraction method and lock the boundaries of the plurality of edge positioning points;

the image plane coordinate obtaining unit for the edge positioning points is used to fit the boundaries of the plurality of edge positioning points by least squares to obtain their center coordinates, which are the image plane coordinates of the edge positioning points.

In one embodiment, the image plane coordinate acquisition module further includes: a binarization processing unit that binarizes the target pattern, making its boundaries more prominent so that subsequent edge detection results are more accurate.
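One possible choice for the binarization unit (not mandated by the text) is a global Otsu threshold, which suits the high-contrast target pattern; a self-contained sketch on a synthetic bimodal image:

```python
import numpy as np

def otsu_binarize(gray):
    """Global Otsu threshold: maximize between-class variance over the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    t = int(np.nanargmax(sigma_b))
    return (gray > t).astype(np.uint8) * 255, t

# Synthetic bimodal image: dark target marks (~40) on a bright board (~200).
rng = np.random.default_rng(1)
img = rng.normal(200.0, 10.0, (64, 64))
img[20:44, 20:44] = rng.normal(40.0, 10.0, (24, 24))
img = np.clip(img, 0, 255).astype(np.uint8)

binary, t = otsu_binarize(img)
print(t)   # threshold falls between the two intensity modes
```

For a well-contrasted target any threshold in the gap between the modes works; Otsu simply picks it automatically, which matters for unattended self-calibration.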

For the techniques employed by the above modules, reference may be made to the description of the target self-calibration method for machine-vision-based displacement measurement, which is not repeated here.

The methods and systems of the above embodiments of the present invention are self-calibrating: they automatically construct the quantitative conversion from image plane coordinates to real object plane coordinates under an arbitrary camera shooting angle, and any of various machine-vision methods can be used to track the target, enabling automatic machine-vision displacement measurement without human intervention.

It should be noted that the steps of the method provided by the present invention can be implemented with the corresponding modules, devices, and units of the system, and those skilled in the art can refer to the technical solution of the system to implement the step flow of the method; that is, the embodiments of the system can be understood as preferred examples of implementing the method, which are not detailed here.

Those skilled in the art will appreciate that, besides implementing the system and its devices as pure computer-readable program code, the method steps can be logically programmed so that the system and its devices realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. The system and its devices may therefore be regarded as hardware components; the means within them for realizing various functions may be regarded as structures within the hardware component, or equally as either software modules implementing the method or structures within the hardware component.

In the description of this specification, reference to the terms "one embodiment", "an example", "a specific implementation", "an illustration", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.

Only preferred embodiments of the present invention are disclosed here; this specification selects and describes them to better explain the principles and practical applications of the invention, not to limit it. Any modifications and variations made by those skilled in the art within the scope of the specification shall fall within the protection scope of the present invention.

Claims (10)

1. A machine-vision-based automatic displacement measurement method, characterized by comprising the following steps:
s11: calibrating a positioning point group of the target pattern;
s12: identifying and extracting a positioning point group of the target pattern to obtain image plane coordinates of each positioning point in the positioning point group;
s13: and establishing a two-dimensional coordinate system or a three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further transforming the image plane coordinates of the image to be detected into object plane coordinates.
2. The machine-vision-based automatic displacement measurement method of claim 1, wherein the S11 includes:
s111: calibrating the center of the target pattern to obtain a center positioning point;
s112: and calibrating the edge of the target pattern to obtain a plurality of edge positioning points.
3. The machine-vision-based automatic displacement measurement method of claim 2, wherein the S12 includes:
s121: identifying and extracting a central positioning point of the target pattern to obtain an image surface coordinate of the central positioning point;
s122: and identifying and extracting a plurality of edge positioning points of the target pattern to obtain image surface coordinates of the plurality of edge positioning points.
4. The machine-vision-based automatic displacement measurement method of claim 3, wherein the step S121 is preceded by the step of: and carrying out binarization processing on the target pattern.
5. The machine-vision-based automatic displacement measurement method of claim 3, wherein the center positioning point in S11 is the center of a center positioning circle, and the edge positioning points are the centers of edge positioning circles; correspondingly,
the S121 includes:
s1211: performing edge detection on the target pattern by adopting a boundary extraction method, and extracting an edge topology table with edge grade information;
s1212: locking the boundary of the central positioning circle by searching the edge grade information of the edge topology table;
s1213: processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, namely the image plane coordinates of the central positioning point;
the S122 includes:
s1221: carrying out ellipse detection on the target pattern by adopting a boundary extraction method, and locking the boundaries of the plurality of edge positioning points;
s1222: and processing the boundaries of the edge positioning points by adopting least square fitting to obtain circle center coordinates of the edge positioning points, namely image surface coordinates of the edge positioning points.
6. The machine-vision-based automatic displacement measuring method according to claim 5, wherein the center positioning circles in S111 include at least two, the at least two center positioning circles are concentric, and the at least two center positioning circles have different radii; correspondingly,
the S1213 is: processing the boundaries of the center positioning circles by least squares fitting to obtain the center coordinates of each center positioning circle, and averaging the center coordinates of the at least two center positioning circles to obtain the image plane coordinates of the center positioning point.
7. The machine-vision-based automatic displacement measuring method according to any one of claims 1 to 6, wherein the establishing of the two-dimensional coordinate system of the object plane in S13, and the transforming of the image plane coordinates of the image to be measured into the object plane coordinates, comprises:
s1311: solving a perspective transformation matrix under a two-dimensional coordinate system of which the image surface is mapped to the object surface by utilizing each positioning point in the positioning point group;
s1312: and transforming the image surface coordinate of the image to be measured into a two-dimensional coordinate of the object surface by using the perspective transformation matrix, wherein the transformation formula is as follows:
$$\begin{bmatrix} x \\ y \\ s \end{bmatrix} = W \cdot P, \qquad \hat{P} = \begin{bmatrix} x/s \\ y/s \end{bmatrix}$$
wherein x/s is the abscissa of the two-dimensional coordinate system of the object plane, y/s is the ordinate of the two-dimensional coordinate system of the object plane, s is a proportionality coefficient, W is a perspective transformation matrix, P is an image plane point homogeneous coordinate vector, and P ^ is an object plane point coordinate vector.
8. The machine-vision-based automatic displacement measurement method of any one of claims 1 to 6, wherein the target pattern in S11 comprises: a left view target pattern obtained by one camera of the binocular camera set and a right view target pattern obtained by the other camera;
the image plane coordinates of each positioning point in the positioning point group obtained in S12 include: the image plane coordinates of the left view and the image plane coordinates of the right view;
in S13, establishing a three-dimensional coordinate system of the object plane, and further transforming the image plane coordinates of the image to be measured into the object plane coordinates, includes:
s1321: reconstructing the image plane coordinates of the left view and the image plane coordinates of the right view of each positioning point according to the intrinsic calibration result of the binocular camera set to obtain a three-dimensional coordinate result P_cam of each positioning point in the binocular camera set coordinate system;
s1322: obtaining, by least squares fitting, an object plane equation of the plane where the target pattern is located from the three-dimensional coordinate results P_cam of the positioning points in the binocular camera set coordinate system;
s1323: building the x and y axes in the object plane and the z axis along the object plane normal, and taking the three-dimensional coordinates of the center positioning point P_c in the positioning point group as the origin of the coordinate system, thereby establishing the three-dimensional coordinate system of the object plane;
s1324: setting the matrix formed by the three-axis normalized coordinates of the object plane three-dimensional coordinate system in the binocular camera set coordinate system as R, taking P_c as the translation vector, and converting by coordinate transformation the three-dimensional coordinate result P_cam of the image to be detected in the binocular camera set coordinate system into the coordinate result P_world in the object plane three-dimensional coordinate system, the conversion formula being:
$$P_{world} = R^{-1}\left(P_{cam} - P_c\right)$$

wherein $P_c = [x_c,\; y_c,\; z_c]^T$ is the three-dimensional coordinate of P_c.
9. A machine-vision-based automatic displacement measurement system, characterized by comprising: a positioning point group calibration module, an image plane coordinate acquisition module, and an object plane coordinate system acquisition module; wherein,
the positioning point group calibration module is used for calibrating a positioning point group of the target pattern;
the image plane coordinate acquisition module is used for identifying and extracting a positioning point group of the target pattern to obtain image plane coordinates of each positioning point in the positioning point group;
the object plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further converting the image plane coordinates of the image to be detected into object plane coordinates.
10. The machine-vision-based automatic displacement measurement system of claim 9, wherein the image plane coordinate acquisition module comprises: an edge detection unit, a boundary locking unit for the center positioning circle, an image plane coordinate obtaining unit for the center positioning point, a boundary locking unit for the edge positioning circle, and an image plane coordinate obtaining unit for the edge positioning point; wherein,
the positioning point group in the positioning point group calibration module comprises: a center positioning point and edge positioning points, wherein the center positioning point is the center of a center positioning circle and each edge positioning point is the center of an edge positioning circle;
the edge detection unit is used for carrying out edge detection on the target pattern by adopting a boundary extraction method and extracting an edge topology table with edge grade information;
the boundary locking unit of the central positioning circle is used for locking the boundary of the central positioning circle by searching the edge grade information of the edge topology table;
the image plane coordinate obtaining unit of the central positioning point is used for processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinate of the central positioning circle, namely the image plane coordinate of the central positioning point;
the boundary locking unit of the edge positioning circle is used for carrying out ellipse detection on the target pattern by adopting a boundary extraction method and locking the boundaries of the edge positioning points;
the image plane coordinate obtaining unit of the edge positioning points is used for processing the boundaries of the edge positioning points by adopting least square fitting to obtain circle center coordinates of the edge positioning points, namely the image plane coordinates of the edge positioning points.
CN202210104210.XA 2022-01-28 2022-01-28 Automatic displacement measurement method and system based on machine vision Active CN114440776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210104210.XA CN114440776B (en) 2022-01-28 2022-01-28 Automatic displacement measurement method and system based on machine vision


Publications (2)

Publication Number Publication Date
CN114440776A true CN114440776A (en) 2022-05-06
CN114440776B CN114440776B (en) 2024-07-19


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06249615A (en) * 1993-02-25 1994-09-09 Sony Corp Position detecting method
JPH06258028A (en) * 1993-03-10 1994-09-16 Nippondenso Co Ltd Method and system for visually recognizing three dimensional position and attitude
JP2004077377A (en) * 2002-08-21 2004-03-11 Kurabo Ind Ltd Displacement measuring method and displacement measuring device by photogrammetry
CN101866496A (en) * 2010-06-04 2010-10-20 西安电子科技大学 Augmented Reality Method Based on Concentric Ring Pattern Group
CN102944191A (en) * 2012-11-28 2013-02-27 北京航空航天大学 Method and device for three-dimensional vision measurement data registration based on planar circle target
CN105091772A (en) * 2015-05-26 2015-11-25 广东工业大学 Plane object two-dimension deflection measuring method
CN107741224A (en) * 2017-08-28 2018-02-27 浙江大学 A method for automatic attitude adjustment and positioning of AGV based on visual measurement and calibration
KR20180125095A (en) * 2017-05-12 2018-11-22 경북대학교 산학협력단 Development for Displacement Measurement System Based on a PTZ Camera and Method thereof
CN109816733A (en) * 2019-01-14 2019-05-28 京东方科技集团股份有限公司 Camera parameter initial method and device, camera parameter scaling method and equipment, image capturing system
CN110207605A (en) * 2019-06-13 2019-09-06 广东省特种设备检测研究院东莞检测院 A kind of measuring device and method of the metal structure deformation based on machine vision
CN110415300A (en) * 2019-08-02 2019-11-05 哈尔滨工业大学 A dynamic displacement measurement method of stereo vision structure based on three-target surface construction
CN111089569A (en) * 2019-12-26 2020-05-01 中国科学院沈阳自动化研究所 A large-scale box measurement method based on monocular vision
CN111402343A (en) * 2020-04-09 2020-07-10 深圳了然视觉科技有限公司 High-precision calibration plate and calibration method
CN112362034A (en) * 2020-11-11 2021-02-12 上海电器科学研究所(集团)有限公司 Solid engine multi-cylinder section butt joint guiding measurement algorithm based on binocular vision
CN113310426A (en) * 2021-05-14 2021-08-27 昆山市益企智能装备有限公司 Thread parameter measuring method and system based on three-dimensional profile
CN113610917A (en) * 2021-08-09 2021-11-05 河南工业大学 A method for locating center image point of circular array target based on blanking point


Also Published As

Publication number Publication date
CN114440776B (en) 2024-07-19

Similar Documents

Publication Publication Date Title
CN111121655B (en) Visual detection method for the pose and aperture of coplanar workpieces with arrays of equal-sized large holes
CN106651942B (en) Three-dimensional rotation detection and rotation axis localization method based on feature points
CN100562707C (en) Binocular vision rotating axis calibration method
CN109211198B (en) An intelligent target detection and measurement system and method based on trinocular vision
CN111260731A (en) Checkerboard sub-pixel level corner point self-adaptive detection method
CN109685855B (en) A camera calibration optimization method under the road cloud monitoring platform
CN110095089B (en) Method and system for measuring rotation angle of aircraft
CN113012234A (en) High-precision camera calibration method based on plane transformation
CN109141226A (en) Spatial point coordinate measurement method using a single camera at multiple angles
CN109272555B (en) A method for obtaining and calibrating the extrinsic parameters of an RGB-D camera
CN106127758A (en) Visual detection method and device based on virtual reality technology
CN108592787A (en) Rotation axis calibration method and system for 3D tracker rotation systems
CN113450292A (en) High-precision visual positioning method for PCBA parts
Su et al. A novel camera calibration method based on multilevel-edge-fitting ellipse-shaped analytical model
Wang et al. Complete calibration of a structured light stripe vision sensor through a single cylindrical target
Wang et al. Transmission line sag measurement based on single aerial image
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN107092905A (en) Localization method for instruments to be identified by electric power inspection robots
CN107976146B (en) Self-calibration method and measurement method for a linear-array CCD camera
CN114440776A (en) A method and system for automatic displacement measurement based on machine vision
CN113223095A (en) Internal and external parameter calibration method based on known camera position
CN101894369A (en) A Real-time Method for Computing Camera Focal Length from Image Sequence
CN108898585A (en) Shaft workpiece detection method and device
CN114485543A (en) A Stereo Vision-Based Measurement Method of Aircraft Rudder Surface Angle
Xia et al. An Improved Depth-Based Camera Model in Binocular Visual System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant