CN114440776B - Automatic displacement measurement method and system based on machine vision - Google Patents

Automatic displacement measurement method and system based on machine vision

Info

Publication number
CN114440776B
CN114440776B (application CN202210104210.XA)
Authority
CN
China
Prior art keywords
edge
image plane
center
positioning
coordinates
Legal status
Active
Application number
CN202210104210.XA
Other languages
Chinese (zh)
Other versions
CN114440776A (en)
Inventor
李得睿
程斌
Current Assignee
Shanghai Jiaotu Technology Co ltd
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotu Technology Co ltd
Shanghai Jiaotong University
Application filed by Shanghai Jiaotu Technology Co ltd, Shanghai Jiaotong University filed Critical Shanghai Jiaotu Technology Co ltd
Priority to CN202210104210.XA priority Critical patent/CN114440776B/en
Publication of CN114440776A publication Critical patent/CN114440776A/en
Application granted granted Critical
Publication of CN114440776B publication Critical patent/CN114440776B/en


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 — Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/03 — Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an automatic displacement measurement method and system based on machine vision. The method comprises the following steps: calibrating a positioning point group of the target pattern; identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the group; and establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of each positioning point, and then converting the image plane coordinates of the image to be measured into object plane coordinates. The invention has a self-calibration function, can automatically construct the quantitative conversion relation from image plane coordinates to true object plane coordinates at any camera shooting angle, and can track the target with various machine vision methods, thereby realizing automatic machine vision displacement measurement without human intervention.

Description

Automatic displacement measurement method and system based on machine vision
Technical Field
The invention relates to the technical field of machine vision, and in particular to an automatic displacement measurement method and system based on machine vision.
Background
Displacement measurement is one of the most frequently required measurements in both academic and engineering work. With the development and popularization of machine vision technology, researchers have increasingly combined machine vision with displacement measurement to serve engineering practice. Compared with traditional methods, machine-vision-based displacement measurement has clear advantages such as non-contact operation, low cost, high precision and real-time measurement, and has therefore long been a research hotspot. At present there are many practical engineering cases and commercial products, in China and abroad, that measure displacement based on machine vision.
Although reported applications of machine vision to displacement measurement are numerous, this measurement technique is still far from large-scale adoption. One important reason is the problem of external parameter calibration. Camera calibration involves two types of problems. The first is the calibration of physical quantities tied to the camera itself, such as the optical internal parameters of a camera, or the relative pose of the cameras in a binocular camera set; these can collectively be called internal reference calibration of the camera or camera set. Mature solutions exist for this type of problem, such as Zhang Zhengyou's calibration method.
The second type of problem is calibrating the conversion from pixel scale to real physical scale. When displacement is measured with machine vision, the raw result is a pixel-scale displacement in pixels, whereas engineering practice requires a displacement in real physical units (meters, millimeters, and so on). The conversion relation from pixel scale to physical scale must therefore be calibrated so that the pixel displacement result can be converted into a physically meaningful displacement; this is external parameter calibration. However, no unified solution to this external parameter calibration problem currently exists, and the biggest shortcoming shared by existing solutions is that they require human intervention.
As a result, current machine vision displacement measurement technology cannot achieve truly automatic displacement measurement, and a calibration process that requires human intervention greatly raises the application barrier and measurement difficulty. Practical conditions are complex and variable: engineering sites differ widely, as do the skills of experimental staff, so a calibration procedure requiring human intervention is very likely to affect the measurement result and greatly reduces its reliability. Realizing a self-calibration method is therefore of great significance for automatic machine vision displacement measurement.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an automatic displacement measurement method and system based on machine vision, to solve the problem that existing external parameter calibration requires human intervention.
In order to solve the technical problems, the invention is realized by the following technical scheme:
according to a first aspect of the present invention, there is provided a machine vision-based automatic displacement measurement method, comprising:
S11: calibrating a positioning point group of the target pattern;
S12: identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the group;
S13: establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of each positioning point, and then converting the image plane coordinates of the image to be measured into object plane coordinates.
Preferably, the step S11 includes:
S111: calibrating the center of the target pattern to obtain a center positioning point;
S112: calibrating the edges of the target pattern to obtain a plurality of edge positioning points, where the edge positioning points can be points close to the boundary of the target pattern or points on the boundary of the target pattern.
Preferably, the step S12 includes:
S121: identifying and extracting the center positioning point of the target pattern to obtain the image plane coordinates of the center positioning point;
S122: identifying and extracting the plurality of edge positioning points of the target pattern to obtain the image plane coordinates of the plurality of edge positioning points.
Preferably, before S121, the method further includes: performing binarization processing on the target pattern.
Preferably, the center positioning point in S11 is the center of a center positioning circle, and the edge positioning point is the center of an edge positioning circle; correspondingly,
The S121 includes:
S1211: performing edge detection on the target pattern by adopting a boundary extraction method, and extracting an edge topology table with edge grade information;
S1212: locking the boundary of the center positioning circle by searching the edge grade information of the edge topology table;
S1213: processing the boundary of the center positioning circle by adopting least square fitting to obtain the center coordinates of the center positioning circle, namely the image plane coordinates of the center positioning point;
The S122 includes:
S1221: performing ellipse detection on the target pattern by adopting a boundary extraction method, and locking boundaries of the plurality of edge positioning points;
S1222: and processing the boundaries of the plurality of edge positioning points by adopting least square fitting to obtain circle center coordinates of the plurality of edge positioning points, namely image plane coordinates of the plurality of edge positioning points.
Preferably, the center positioning circle in S111 comprises at least two circles; the circles are concentric and their radii differ; correspondingly,
the S1213 is: processing the boundaries of the center positioning circles by least square fitting to obtain their center coordinates, and averaging the center coordinates of the at least two circles to obtain the image plane coordinates of the center positioning point.
Preferably, establishing a two-dimensional coordinate system of the object plane in S13, and then converting the image plane coordinates of the image to be measured into object plane coordinates includes:
S1311: solving the perspective transformation matrix that maps the image plane to the object plane under the two-dimensional coordinate system, using each positioning point in the positioning point group;
S1312: transforming the image plane coordinates of the image to be measured into two-dimensional coordinates of an object plane by utilizing the perspective transformation matrix, wherein the transformation formula is as follows:
Wherein x/s is the abscissa of the two-dimensional coordinate system of the object plane, y/s is the ordinate of the two-dimensional coordinate system of the object plane, s is the scaling factor, W is the perspective transformation matrix, P is the homogeneous coordinate vector of the image plane point, and P is the coordinate vector of the object plane point.
Preferably, the target pattern in S11 includes: a left view target pattern obtained by one camera of the binocular camera set and a right view target pattern obtained by the other camera;
the image plane coordinates of each positioning point in the positioning point group obtained in S12 respectively include: image plane coordinates of the left view and image plane coordinates of the right view;
in the step S13, establishing a three-dimensional coordinate system of the object plane, and then converting the image plane coordinates of the image to be measured into object plane coordinates includes:
S1321: reconstructing the image plane coordinates of the left view and the image plane coordinates of the right view of each positioning point according to the internal reference calibration result of the binocular camera set to obtain a three-dimensional coordinate result P cam of each positioning point in a binocular camera set coordinate system;
S1322: utilizing the three-dimensional coordinate result P cam of each positioning point in the binocular camera set coordinate system, and obtaining an object plane equation of a plane where the target pattern is located through least square fitting;
S1323: establishing an x-axis and a y-axis on an object plane, establishing a z-axis on the normal direction of the object plane, and establishing a three-dimensional coordinate system of the object plane by taking the three-dimensional coordinate of a central positioning point P c in the positioning point group as a coordinate system origin;
S1324: letting R be the matrix formed by the normalized three-axis coordinates of the three-dimensional object plane coordinate system in the binocular camera set coordinate system, and taking P_c as the translation vector, converting the three-dimensional coordinates P_cam of the image to be measured in the binocular camera set coordinate system into coordinates P_world in the three-dimensional object plane coordinate system, where the conversion formula is:

P_world = R⁻¹(P_cam − P̂_c)

wherein P̂_c is the three-dimensional coordinate vector of P_c.
According to a second aspect of the present invention, there is provided a machine vision-based automatic displacement measurement system, comprising: a positioning point group calibration module, an image plane coordinate acquisition module and an object plane coordinate system acquisition module; wherein,
The positioning point group calibration module is used for calibrating the positioning point group of the target pattern;
the image plane coordinate acquisition module is used for identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the group;
the object plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of each positioning point, and then converting the image plane coordinates of the image to be measured into object plane coordinates.
Preferably, the image plane coordinate acquisition module includes: an edge detection unit, a boundary locking unit of the center positioning circle, an image plane coordinate obtaining unit of the center positioning point, a boundary locking unit of the edge positioning circle, and an image plane coordinate obtaining unit of the edge positioning points; wherein,
the positioning point group in the positioning point group calibration module comprises a center positioning point and a plurality of edge positioning points; the center positioning point is the center of a center positioning circle, and each edge positioning point is the center of an edge positioning circle;
the edge detection unit is used for carrying out edge detection on the target pattern by adopting a boundary extraction method and extracting an edge topology table with edge grade information;
the boundary locking unit of the center positioning circle is used for locking the boundary of the center positioning circle by searching the edge grade information of the edge topology table;
The image plane coordinate obtaining unit of the center positioning point is used for processing the boundary of the center positioning circle by adopting least square fitting to obtain the center coordinate of the center positioning circle, namely the image plane coordinate of the center positioning point;
The boundary locking unit of the edge positioning circle is used for performing ellipse detection on the target pattern by the boundary extraction method and locking the boundaries of the plurality of edge positioning points;
the image plane coordinate obtaining unit of the edge positioning points is used for processing the boundaries of the plurality of edge positioning points by least square fitting to obtain the center coordinates of the plurality of edge positioning points, namely the image plane coordinates of the plurality of edge positioning points.
Compared with the prior art, the invention has the following advantages:
According to the automatic displacement measurement method and system based on machine vision, the target pattern is calibrated, and the coordinate system of the object plane is obtained from the calibrated target pattern, so that the image plane coordinates of the image to be measured can be converted into object plane coordinates. This realizes automatic external parameter calibration without manual intervention and avoids the influence that a manual calibration procedure has on the measurement result.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a machine vision based automatic displacement measurement method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a set of positioning points of a self-calibrating target pattern according to a preferred embodiment of the present invention;
FIG. 3 is a diagram showing the self-calibration results in a two-dimensional displacement measurement scenario based on machine vision according to a preferred embodiment of the present invention;
FIG. 4 is a diagram showing the self-calibration results in a three-dimensional displacement measurement scenario based on machine vision according to a preferred embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail. The embodiments are implemented on the premise of the technical solution of the invention, and detailed implementation modes and specific operation procedures are given, but the scope of protection of the invention is not limited to the following embodiments.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In an embodiment, the present invention provides an automatic displacement measurement method based on machine vision; referring to fig. 1, the method includes:
S11: calibrating a positioning point group of the target pattern;
S12: identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the group;
S13: establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of each positioning point, and then converting the image plane coordinates of the image to be measured into object plane coordinates.
This embodiment requires no manual intervention, avoiding the influence that a human-operated calibration procedure has on the measurement result.
Preferably, in an embodiment, S11 may further include:
S111: calibrating the center of the target pattern to obtain a center positioning point;
S112: calibrating the edges of the target pattern to obtain a plurality of edge positioning points.
Preferably, in an embodiment, S12 includes:
S121: identifying and extracting the center positioning point of the target pattern to obtain the image plane coordinates of the center positioning point;
S122: identifying and extracting the plurality of edge positioning points of the target pattern to obtain the image plane coordinates of the plurality of edge positioning points.
Preferably, in an embodiment, before S121 the method further includes: performing binarization processing on the target pattern, which makes the boundary of the target pattern more prominent and the subsequent extraction of the center and edge positioning points more accurate.
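The patent does not name a concrete binarization algorithm. Below is a minimal sketch of this pre-processing step, assuming OpenCV with Otsu's automatic threshold (the file name is hypothetical); Otsu's method picks the threshold itself, in keeping with the no-human-intervention goal:

    import cv2

    # Load a grayscale view of the target pattern (hypothetical file name).
    img = cv2.imread("target_roi.png", cv2.IMREAD_GRAYSCALE)

    # Otsu's method selects the threshold automatically; the result is a
    # binary image in which the target boundaries stand out for edge detection.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)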
Preferably, in an embodiment, the center positioning point in S11 is the center of a center positioning circle, and the edge positioning point is the center of an edge positioning circle; correspondingly,
S121 includes:
S1211: performing edge detection on the target pattern by a boundary extraction method, and extracting an edge topology table with edge grade information;
The edge level information may be: the center positioning circle is of a first level, and the edge positioning circle is of a second level; or other forms of grading may be employed;
S1212: locking the boundary of the center positioning circle by searching the edge grade information of the edge topology table;
S1213: processing the boundary of the center positioning circle by least square fitting to obtain the center coordinates of the center positioning circle, namely the image plane coordinates of the center positioning point;
S122 includes:
S1221: performing ellipse detection on the target pattern by the boundary extraction method, and locking the boundaries of the plurality of edge positioning points;
The target pattern is circular, so when the camera faces the target squarely it appears as a circle in the camera frame; when the camera shoots the target obliquely, the pattern appears as an ellipse. Ellipse detection covers both perfect circles and ellipses, so adopting it makes the detection result more accurate;
S1222: processing the boundaries of the plurality of edge positioning points by least square fitting to obtain the center coordinates of the plurality of edge positioning points, namely the image plane coordinates of the plurality of edge positioning points.
Preferably, in an embodiment, the center positioning circle in S111 comprises at least two circles; the circles are concentric and their radii differ. Referring to fig. 2, three concentric circles are taken as an example; in other embodiments the number of circles may be two, or more than three.
Correspondingly, S1213 is: processing the boundaries of the center positioning circles by least square fitting to obtain their center coordinates, and averaging the center coordinates of the at least two circles to obtain the image plane coordinates of the center positioning point.
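Steps S1211-S1213 map naturally onto contour extraction with hierarchy information. The sketch below assumes OpenCV's findContours (whose RETR_TREE mode returns a nesting table of the kind the edge topology table describes) as the boundary extraction method and fitEllipse as the least square fit; the nesting-depth heuristic and the choice of the three deepest boundaries are illustrative assumptions for a target like fig. 2, not details fixed by the disclosure:

    import cv2
    import numpy as np

    def center_point(binary):
        # Boundary extraction; RETR_TREE keeps the nesting relations
        # (the "edge grade" information) in the hierarchy table.
        contours, hierarchy = cv2.findContours(
            binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
        hierarchy = hierarchy[0]  # rows: [next, prev, first_child, parent]

        # Edge grade of a contour = its nesting depth; the concentric
        # center positioning circles are the most deeply nested boundaries.
        def depth(i):
            d = 0
            while hierarchy[i][3] != -1:
                i = hierarchy[i][3]
                d += 1
            return d

        deepest = np.argsort([depth(i) for i in range(len(contours))])[::-1][:3]

        # Least square ellipse fit of each circle boundary, then average
        # the fitted centers: the image plane coordinates of P_c.
        centers = [cv2.fitEllipse(contours[i])[0] for i in deepest]
        return np.mean(np.array(centers), axis=0)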
In the two-dimensional displacement measurement scene, the quantitative conversion relation from image plane coordinates to object plane coordinates in S13 is the homography matrix from the image plane coordinate system to a two-dimensional object plane coordinate system in space.
In one embodiment, establishing a two-dimensional coordinate system of the object plane in S13, and then converting the image plane coordinates of the image to be measured into object plane coordinates includes:
S1311: solving the perspective transformation matrix W that maps the image plane to the object plane under the two-dimensional coordinate system, using each positioning point in the positioning point group;
S1312: transforming the image plane coordinates of the image to be measured into two-dimensional coordinates of the object plane by using the perspective transformation matrix, where the transformation formula is:

P̂ = (x, y, s)ᵀ = W·P

wherein x/s is the abscissa of the two-dimensional object plane coordinate system, y/s is the ordinate, s is the scaling factor, P is the homogeneous coordinate vector of the image plane point, and P̂ is the coordinate vector of the object plane point.
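As a concrete illustration of S1311-S1312, the sketch below solves W from the five anchor correspondences and applies it with OpenCV; the pixel values, the 50 mm half-side of the target and the use of findHomography/perspectiveTransform are assumptions for illustration:

    import cv2
    import numpy as np

    # Image plane coordinates of the five anchors P_c, P_1..P_4 (made-up values).
    img_pts = np.array([[512.3, 384.1], [400.2, 300.5], [620.8, 298.7],
                        [622.1, 470.2], [398.9, 468.8]], dtype=np.float32)

    # Their known physical positions on the square target, in millimetres
    # (the half-side length h is an assumed dimension of the printed target).
    h = 50.0
    obj_pts = np.array([[0, 0], [-h, -h], [h, -h], [h, h], [-h, h]],
                       dtype=np.float32)

    # Solve the perspective transformation matrix W; with five correspondences
    # it is solved in a least squares sense (four is the minimum).
    W, _ = cv2.findHomography(img_pts, obj_pts)

    # S1312: map a measured image plane point to object plane coordinates;
    # perspectiveTransform applies W and divides by the scale factor s.
    pt = np.array([[[530.0, 390.0]]], dtype=np.float32)
    print(cv2.perspectiveTransform(pt, W))  # object plane (x/s, y/s), in mm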
In the three-dimensional displacement measurement scene, the quantitative conversion relation from image plane coordinates to object plane coordinates in S13 is a linear transformation from the binocular camera set coordinate system to a three-dimensional object plane coordinate system in space; this transformation consists of a corresponding translation vector and rotation matrix.
In one embodiment, the target pattern in S11 includes: a left view target pattern obtained by one camera of the binocular camera set and a right view target pattern obtained by the other camera;
The image plane coordinates of each positioning point in the positioning point group obtained in S12 respectively include: image plane coordinates of the left view and image plane coordinates of the right view;
In S13, establishing a three-dimensional coordinate system of the object plane, and then converting the image plane coordinates of the image to be measured into object plane coordinates includes:
S1321: reconstructing the left-view and right-view image plane coordinates of each positioning point according to the internal reference calibration result of the binocular camera set to obtain the three-dimensional coordinates P_cam of each positioning point in the binocular camera set coordinate system;
S1322: solving the object plane equation of the plane where the target pattern lies, using the three-dimensional coordinates P_cam of each positioning point in the binocular camera set coordinate system;
S1323: establishing an x-axis and a y-axis on the object plane and a z-axis along the normal direction of the object plane, and establishing the three-dimensional coordinate system of the object plane with the three-dimensional coordinates of the center positioning point P_c in the positioning point group as the coordinate system origin;
S1324: let the matrix formed by the three-axis normalized coordinates of the three-dimensional coordinate system of the object plane in the coordinate system of the binocular camera set be R, take P c as translation vector, convert the three-dimensional coordinate result P cam of the image to be measured in the coordinate system of the binocular camera set into the coordinate result P world in the three-dimensional coordinate system of the object plane through coordinate transformation, the conversion formula is:
Wherein, Is the three-dimensional coordinates of P c.
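A minimal numpy sketch of S1322-S1324 under stated assumptions: the positioning points have already been triangulated into the rig frame (e.g. with cv2.triangulatePoints in S1321), the plane is fitted as z = a·x + b·y + c (which assumes the target plane is not parallel to the rig's optical axis), and the x-axis is taken toward the first edge point; the patent does not fix these conventions:

    import numpy as np

    def to_object_plane(P_cam, P_c):
        # P_cam: (N, 3) positioning point coordinates in the rig frame;
        # P_c: (3,) coordinates of the center positioning point (the origin).
        # S1322: least square plane fit z = a*x + b*y + c over the points.
        A = np.c_[P_cam[:, 0], P_cam[:, 1], np.ones(len(P_cam))]
        (a, b, c), *_ = np.linalg.lstsq(A, P_cam[:, 2], rcond=None)
        z_axis = np.array([a, b, -1.0])
        z_axis /= np.linalg.norm(z_axis)        # object plane normal

        # S1323: x-axis toward an edge point, projected into the plane;
        # y-axis completes the right-handed frame; origin at P_c.
        x_axis = P_cam[1] - P_c
        x_axis = x_axis - x_axis.dot(z_axis) * z_axis
        x_axis /= np.linalg.norm(x_axis)
        y_axis = np.cross(z_axis, x_axis)

        # S1324: the columns of R are the normalized axes in the rig frame,
        # so P_world = R^-1 (P_cam - P_c), the conversion formula above.
        R = np.column_stack([x_axis, y_axis, z_axis])
        return lambda p: np.linalg.inv(R) @ (np.asarray(p) - P_c)

    # Usage: pts is a (5, 3) array of triangulated points, pts[0] = P_c;
    # convert = to_object_plane(pts, pts[0]); P_world = convert(p_cam)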
The self-calibration method for machine vision displacement measurement of the above embodiment is described below with a specific example, but the target pattern, the number of positioning points, and the parameters are not limited to this example.
Referring to fig. 2, the target pattern in this example is square, with three center positioning circles and four edge positioning circles. In other embodiments the edge positioning circles need not number four, nor sit at the four illustrated locations, so long as the plane in which the target pattern lies can be located.
S121 includes:
S1211: edge detection is carried out on the target pattern by adopting a boundary extraction method, and an edge topology table with edge grade information is extracted; taking fig. 2 as an example, the edge detection results in the edge shape of three concentric circles and four corner points; the edge class information may be: the innermost circle edge in the concentric circles is a first level, the secondary inner measurement concentric circle edge is a second level, and the outermost concentric circle edge and the four corner edges are a third level;
s1212: locking the boundaries of three center positioning circles by searching the edge grade information of the edge topology table;
s1213: processing the boundaries of three center positioning circles by adopting least square fitting to obtain three ellipse circle center coordinates, and taking the average value as an image plane coordinate P c of the center positioning point;
S122 includes:
s1221: the boundary extraction method is adopted to carry out ellipse detection on the target pattern and screen out all ellipse boundaries, and as for the target detection of the real situation, the real target in the real situation is detected in a whole real shot image, the ellipse detection can detect all ellipse characteristic boundaries in the real image shooting scene, so that four corner points are screened out, and the specific screening method is as follows: finding out four elliptical boundaries closest to the center locating point, namely elliptical boundaries of four edge locating points;
s1222: processing the elliptical boundaries of the four edge positioning points by adopting least square fitting to obtain circle center coordinates of the four edge positioning points, wherein the circle center coordinates are respectively as follows: p 1、P2、P3、P4.
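The screening rule in S1221 reduces to a nearest-neighbour sort. A short sketch under the same assumptions (the ellipse centers have already been obtained, e.g. from cv2.fitEllipse over all detected elliptical boundaries):

    import numpy as np

    def four_nearest_corners(ellipse_centers, P_c):
        # Keep the four ellipse centers closest to the center point P_c;
        # these are the edge positioning points P_1..P_4 (unordered).
        centers = np.asarray(ellipse_centers, dtype=float)
        dists = np.linalg.norm(centers - np.asarray(P_c), axis=1)
        return centers[np.argsort(dists)[:4]]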
In the two-dimensional displacement measurement scenario, S13 specifically includes:
S1311: the five positioning points P_c and P_1, P_2, P_3, P_4 are used to solve the perspective transformation matrix W mapping the image plane to the object plane;
S1312: for the pixel coordinate result of the image to be measured obtained in each test, the image plane coordinates are transformed into object plane coordinates with the perspective transformation matrix W, the perspective transformation being realized according to:

P̂ = (x, y, s)ᵀ = W·P

wherein x/s is the abscissa of the two-dimensional object plane coordinate system, y/s is the ordinate, s is the scaling factor, P is the homogeneous coordinate vector of the image plane point, and P̂ is the coordinate vector of the object plane point.
Fig. 3 shows an object plane coordinate system obtained by self-calibration in a two-dimensional displacement measurement scene by using the self-calibration method of the above example, wherein the x and y axes are two axes of the coordinate system.
In the three-dimensional displacement measurement scenario, S13 specifically includes:
S1321: five positioning point coordinates P c and P 1、P2、P3、P4 obtained by respective identification in the left view and the right view of the binocular camera are adopted, and a three-dimensional coordinate result P cam of the five positioning points in a coordinate system of the binocular camera is reconstructed according to an internal reference calibration result of the binocular camera;
S1322: solving an object plane equation of a plane where the target pattern is located by utilizing a three-dimensional coordinate result P cam of five positioning points in a binocular camera set coordinate system;
S1323: establishing an x-axis and a y-axis on an object plane, establishing a z-axis on the normal direction of the object plane, and establishing a three-dimensional coordinate system of the object plane by taking the three-dimensional coordinate of a central positioning point P c in a positioning point group as an origin of a coordinate system;
S1324: let the matrix formed by the three-axis normalized coordinates of the three-dimensional coordinate system of the object plane in the coordinate system of the binocular camera set be R, take P c as translation vector, convert the three-dimensional coordinate result P cam of the image to be measured in the coordinate system of the binocular camera set into the coordinate result P world in the three-dimensional coordinate system of the object plane through coordinate transformation, the conversion formula is:
Wherein, Is the three-dimensional coordinates of P c.
Fig. 4 shows an object plane coordinate system obtained by self-calibration in a three-dimensional displacement measurement scene by using the self-calibration method of the above example, wherein x, y and z axes are three axes of the coordinate system.
In one embodiment, there is also provided a machine vision based automatic displacement measurement system, comprising: a positioning point group calibration module, an image plane coordinate acquisition module and an object plane coordinate system acquisition module; wherein,
The positioning point group calibration module is used for calibrating the positioning point group of the target pattern;
the image plane coordinate acquisition module is used for identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the group;
the object plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of each positioning point, and then converting the image plane coordinates of the image to be measured into object plane coordinates.
In one embodiment, the positioning point group calibration module includes: a center positioning point calibration unit and an edge positioning point calibration unit; wherein,
the center positioning point calibration unit is used for calibrating the center of the target pattern to obtain a center positioning point;
the edge positioning point calibration unit is used for calibrating the edges of the target pattern to obtain a plurality of edge positioning points.
In one embodiment, the image plane coordinate acquisition module includes: an edge detection unit, a boundary locking unit of the center positioning circle, an image plane coordinate obtaining unit of the center positioning point, a boundary locking unit of the edge positioning circle, and an image plane coordinate obtaining unit of the edge positioning points; wherein,
the positioning point group in the positioning point group calibration module comprises a center positioning point and a plurality of edge positioning points; the center positioning point is the center of a center positioning circle, and each edge positioning point is the center of an edge positioning circle;
the edge detection unit is used for carrying out edge detection on the target pattern by adopting a boundary extraction method and extracting an edge topology table with edge grade information;
the boundary locking unit of the center positioning circle is used for locking the boundary of the center positioning circle by searching the edge grade information of the edge topology table;
The image plane coordinate obtaining unit of the center locating point is used for processing the boundary of the center locating circle by adopting least square fitting to obtain the center coordinate of the center locating circle, namely the image plane coordinate of the center locating point;
the boundary locking unit of the edge positioning circle is used for performing ellipse detection on the target pattern by the boundary extraction method and locking the boundaries of the plurality of edge positioning points;
the image plane coordinate obtaining unit of the edge positioning points is used for processing the boundaries of the plurality of edge positioning points by least square fitting to obtain the center coordinates of the plurality of edge positioning points, namely the image plane coordinates of the plurality of edge positioning points.
In an embodiment, the image plane coordinate acquisition module further includes: the binarization processing unit is used for performing binarization processing on the target pattern, so that the boundary of the target pattern is more prominent, and the subsequent edge detection result is more accurate.
The techniques used by the above modules are described in the self-calibration method for machine vision displacement measurement above and are not repeated here.
The method and system in the embodiments of the invention have a self-calibration function, can automatically construct the quantitative conversion relation from image plane coordinates to true object plane coordinates at any camera shooting angle, and can track the target with various machine vision methods, thereby realizing automatic machine vision displacement measurement without human intervention.
It should be noted that, the steps in the method provided by the present invention may be implemented by using corresponding modules, devices, units, etc. in the system, and those skilled in the art may refer to a technical solution of the system to implement the step flow of the method, that is, the embodiment in the system may be understood as a preferred example for implementing the method, which is not described herein.
Those skilled in the art will appreciate that the invention provides a system and its individual devices that can be implemented entirely by logic programming of method steps, in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc., in addition to the system and its individual devices being implemented in pure computer readable program code. Therefore, the system and various devices thereof provided by the present invention may be considered as a hardware component, and the devices included therein for implementing various functions may also be considered as structures within the hardware component; means for achieving the various functions may also be considered as being either a software module that implements the method or a structure within a hardware component.
In the description of the present specification, the descriptions of the terms "one embodiment," "an embodiment," "a particular implementation," "an example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The embodiments disclosed herein were chosen and described in detail to best explain the principles of the invention and its practical application, and are not intended to limit the invention. Any modifications or variations within the scope of the description that would be apparent to a person skilled in the art are intended to be included within the scope of the invention.

Claims (5)

1. A machine vision based automatic displacement measurement method, comprising:
S11: calibrating a positioning point group of the target pattern;
The step S11 specifically comprises the following steps:
S111: calibrating the center of the target pattern to obtain a center positioning point;
S112: calibrating the edges of the target pattern to obtain a plurality of edge positioning points;
S12: identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the group;
S13: establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and then converting the image plane coordinates of the image to be measured into object plane coordinates;
The S12 includes:
S121: identifying and extracting the center positioning point of the target pattern to obtain the image plane coordinates of the center positioning point;
S122: identifying and extracting a plurality of edge positioning points of the target pattern to obtain image plane coordinates of the plurality of edge positioning points;
the center positioning point in S11 is the center of a center positioning circle, and the edge positioning point is the center of an edge positioning circle; correspondingly,
The S121 includes:
S1211: performing edge detection on the target pattern by adopting a boundary extraction method, and extracting an edge topology table with edge grade information;
S1212: locking the boundary of the center positioning circle by searching the edge grade information of the edge topology table;
S1213: processing the boundary of the center positioning circle by adopting least square fitting to obtain the center coordinates of the center positioning circle, namely the image plane coordinates of the center positioning point;
The S122 includes:
S1221: performing ellipse detection on the target pattern by adopting a boundary extraction method, and locking boundaries of the plurality of edge positioning points;
S1222: processing boundaries of a plurality of edge positioning points by adopting least square fitting to obtain circle center coordinates of the plurality of edge positioning points, namely image plane coordinates of the plurality of edge positioning points;
The center positioning circle in S111 comprises at least two circles; the circles are concentric and their radii differ; correspondingly,
the S1213 is: processing the boundaries of the center positioning circles by least square fitting to obtain their center coordinates, and averaging the center coordinates of the at least two circles to obtain the image plane coordinates of the center positioning point.
2. The automatic displacement measurement method based on machine vision according to claim 1, wherein the step S121 is preceded by: and carrying out binarization treatment on the target pattern.
3. The automatic displacement measurement method based on machine vision according to any one of claims 1 to 2, wherein establishing a two-dimensional coordinate system of the object plane in S13, and then converting the image plane coordinates of the image to be measured into object plane coordinates comprises:
s1311: solving a perspective transformation matrix of the image plane mapped to the object plane under a two-dimensional coordinate system by utilizing each positioning point in the positioning point group;
S1312: transforming the image plane coordinates of the image to be measured into two-dimensional coordinates of an object plane by utilizing the perspective transformation matrix, wherein the transformation formula is as follows:
Wherein x/s is the abscissa of the two-dimensional coordinate system of the object plane, y/s is the ordinate of the two-dimensional coordinate system of the object plane, s is a scaling factor, W is a perspective transformation matrix, P is an image plane point homogeneous coordinate vector, and P ^ is an object plane point coordinate vector.
4. The machine vision based automatic displacement measurement method according to any one of claims 1 to 2, wherein the target pattern in S11 includes: a left view target pattern obtained by one camera of the binocular camera set and a right view target pattern obtained by the other camera;
the image plane coordinates of each positioning point in the positioning point group obtained in S12 respectively include: image plane coordinates of the left view and image plane coordinates of the right view;
in the step S13, establishing a three-dimensional coordinate system of the object plane, and then converting the image plane coordinates of the image to be measured into object plane coordinates comprises:
S1321: reconstructing the left-view and right-view image plane coordinates of each positioning point according to the internal reference calibration result of the binocular camera set to obtain the three-dimensional coordinates P_cam of each positioning point in the binocular camera set coordinate system;
S1322: using the three-dimensional coordinates P_cam of each positioning point in the binocular camera set coordinate system, obtaining the object plane equation of the plane where the target pattern lies by least square fitting;
S1323: establishing an x-axis and a y-axis on the object plane and a z-axis along the normal direction of the object plane, and establishing the three-dimensional coordinate system of the object plane with the three-dimensional coordinates of the center positioning point P_c in the positioning point group as the coordinate system origin;
S1324: letting R be the matrix formed by the normalized three-axis coordinates of the three-dimensional object plane coordinate system in the binocular camera set coordinate system, and taking P_c as the translation vector, converting the three-dimensional coordinates P_cam of the image to be measured in the binocular camera set coordinate system into coordinates P_world in the three-dimensional object plane coordinate system, where the conversion formula is:

P_world = R⁻¹(P_cam − P̂_c)

wherein P̂_c is the three-dimensional coordinate vector of P_c.
5. A machine vision based automatic displacement measurement system, comprising: a positioning point group calibration module, an image plane coordinate acquisition module and an object plane coordinate system acquisition module; wherein,
The positioning point group calibration module is used for calibrating the positioning point group of the target pattern; the positioning point group comprises a center positioning point and a plurality of edge positioning points; the center positioning point is the center of a center positioning circle, and each edge positioning point is the center of an edge positioning circle;
the image plane coordinate acquisition module is used for identifying and extracting a locating point group of the target pattern to obtain the image plane coordinate of each locating point in the locating point group;
the object plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of each positioning point, and then converting the image plane coordinates of the image to be measured into object plane coordinates;
The image plane coordinate acquisition module comprises: an edge detection unit, a boundary locking unit of the center positioning circle, an image plane coordinate obtaining unit of the center positioning point, a boundary locking unit of the edge positioning circle, and an image plane coordinate obtaining unit of the edge positioning points; wherein,
The edge detection unit is used for carrying out edge detection on the target pattern by adopting a boundary extraction method and extracting an edge topology table with edge grade information;
the boundary locking unit of the center positioning circle is used for locking the boundary of the center positioning circle by searching the edge grade information of the edge topology table;
The image plane coordinate obtaining unit of the center positioning point is used for processing the boundary of the center positioning circle by adopting least square fitting to obtain the center coordinate of the center positioning circle, namely the image plane coordinate of the center positioning point;
the boundary locking unit of the edge positioning circle is used for performing ellipse detection on the target pattern by the boundary extraction method and locking the boundaries of the plurality of edge positioning points;
the image plane coordinate obtaining unit of the edge positioning points is used for processing the boundaries of the plurality of edge positioning points by least square fitting to obtain the center coordinates of the plurality of edge positioning points, namely the image plane coordinates of the plurality of edge positioning points;
the center positioning circle comprises at least two circles; the circles are concentric and their radii differ; correspondingly, the boundaries of the center positioning circles are processed by least square fitting to obtain their center coordinates, and the center coordinates of the at least two circles are averaged to obtain the image plane coordinates of the center positioning point.
Application CN202210104210.XA · Filed 2022-01-28 · Automatic displacement measurement method and system based on machine vision · Active · Granted as CN114440776B

Priority Applications (1)

Application Number: CN202210104210.XA (published as CN114440776B)
Priority Date: 2022-01-28 · Filing Date: 2022-01-28
Title: Automatic displacement measurement method and system based on machine vision

Applications Claiming Priority (1)

Application Number: CN202210104210.XA (published as CN114440776B)
Priority Date: 2022-01-28 · Filing Date: 2022-01-28
Title: Automatic displacement measurement method and system based on machine vision

Publications (2)

Publication Number Publication Date
CN114440776A CN114440776A (en) 2022-05-06
CN114440776B (en) 2024-07-19

Family

ID=81368838

Family Applications (1)

Application Number: CN202210104210.XA (Active; published as CN114440776B)
Priority Date: 2022-01-28 · Filing Date: 2022-01-28
Title: Automatic displacement measurement method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN114440776B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105091772A (en) * 2015-05-26 2015-11-25 广东工业大学 Plane object two-dimension deflection measuring method
CN110415300A (en) * 2019-08-02 2019-11-05 哈尔滨工业大学 A kind of stereoscopic vision structure dynamic displacement measurement method for building face based on three targets

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3339090B2 (en) * 1993-02-25 2002-10-28 ソニー株式会社 Position detection method
JP3208900B2 (en) * 1993-03-10 2001-09-17 株式会社デンソー Method and apparatus for recognizing three-dimensional position and orientation based on vision
JP4006296B2 (en) * 2002-08-21 2007-11-14 倉敷紡績株式会社 Displacement measuring method and displacement measuring apparatus by photogrammetry
CN101866496B (en) * 2010-06-04 2012-01-04 西安电子科技大学 Augmented reality method based on concentric ring pattern group
CN102944191B (en) * 2012-11-28 2015-06-10 北京航空航天大学 Method and device for three-dimensional vision measurement data registration based on planar circle target
KR101972582B1 (en) * 2017-05-12 2019-04-29 경북대학교 산학협력단 Development for Displacement Measurement System Based on a PTZ Camera and Method thereof
CN107741224A (en) * 2017-08-28 2018-02-27 浙江大学 A kind of AGV automatic-posture-adjustment localization methods of view-based access control model measurement and demarcation
CN109816733B (en) * 2019-01-14 2023-08-18 京东方科技集团股份有限公司 Camera parameter initialization method and device, camera parameter calibration method and device and image acquisition system
CN110207605B (en) * 2019-06-13 2024-06-14 广东省特种设备检测研究院东莞检测院 Device and method for measuring metal structure deformation based on machine vision
CN111089569B (en) * 2019-12-26 2021-11-30 中国科学院沈阳自动化研究所 Large box body measuring method based on monocular vision
CN111402343A (en) * 2020-04-09 2020-07-10 深圳了然视觉科技有限公司 High-precision calibration plate and calibration method
CN112362034B (en) * 2020-11-11 2022-07-08 上海电器科学研究所(集团)有限公司 Solid engine multi-cylinder section butt joint guiding measurement method based on binocular vision
CN113310426A (en) * 2021-05-14 2021-08-27 昆山市益企智能装备有限公司 Thread parameter measuring method and system based on three-dimensional profile
CN113610917B (en) * 2021-08-09 2024-10-01 河南工业大学 Circular array target center image point positioning method based on blanking points

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105091772A (en) * 2015-05-26 2015-11-25 广东工业大学 Plane object two-dimension deflection measuring method
CN110415300A (en) * 2019-08-02 2019-11-05 哈尔滨工业大学 A kind of stereoscopic vision structure dynamic displacement measurement method for building face based on three targets

Also Published As

Publication number Publication date
CN114440776A (en) 2022-05-06


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant