CN114440776A - Automatic displacement measuring method and system based on machine vision - Google Patents

Automatic displacement measuring method and system based on machine vision

Info

Publication number
CN114440776A
Authority
CN
China
Prior art keywords
positioning
edge
coordinates
positioning point
image
Prior art date
Legal status
Granted
Application number
CN202210104210.XA
Other languages
Chinese (zh)
Other versions
CN114440776B (en
Inventor
李得睿
程斌
Current Assignee
Shanghai Jiaotu Technology Co ltd
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotu Technology Co ltd
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotu Technology Co ltd, Shanghai Jiaotong University filed Critical Shanghai Jiaotu Technology Co ltd
Priority to CN202210104210.XA priority Critical patent/CN114440776B/en
Publication of CN114440776A publication Critical patent/CN114440776A/en
Application granted granted Critical
Publication of CN114440776B publication Critical patent/CN114440776B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/03 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a machine-vision-based automatic displacement measurement method and system. The method comprises the following steps: calibrating a positioning point group of a target pattern; identifying and extracting the positioning point group to obtain the image-plane coordinates of each positioning point in the group; and establishing a two-dimensional or three-dimensional coordinate system of the object plane from the image-plane coordinates of the positioning points, so that the image-plane coordinates of an image to be measured can be transformed into object-plane coordinates. The invention is self-calibrating: it automatically constructs the quantitative transformation from image-plane coordinates to real object-plane coordinates at any camera shooting angle, and the target can be tracked with a variety of machine-vision methods, so that machine-vision displacement measurement is carried out automatically, without human intervention.

Description

Automatic displacement measuring method and system based on machine vision
Technical Field
The invention relates to the technical field of machine vision, in particular to a machine-vision-based automatic displacement measurement method and system.
Background
Displacement measurement is one of the most frequently required measurements in both academia and engineering. With the development and popularization of machine vision technology, researchers have gradually combined machine vision with displacement measurement to realize vision-based displacement measurement in the service of engineering practice. Compared with traditional methods, machine-vision displacement measurement offers clear advantages: it is non-contact, low-cost, high-precision and capable of real-time measurement, and it has therefore long been a research hotspot. At present, many practical engineering cases and commercial products based on machine-vision displacement measurement exist at home and abroad.
Although machine-vision displacement measurement has seen many reported applications, this advanced measurement method is still a considerable distance from wide adoption. One important reason is the external-parameter calibration problem. Camera calibration involves two types of problems. The first is the calibration of physical quantities belonging to the camera itself, such as its internal optical parameters, or the pose between the cameras of a binocular camera set; these may collectively be called internal-parameter calibration of the camera or camera set. Mature solutions, such as Zhang Zhengyou's calibration method, already exist for this type of problem.
The second is calibrating the conversion from pixel scale to real physical scale. When displacement is measured by machine vision, the raw displacement result is expressed only in pixels, whereas the result required in actual engineering is expressed in real physical units (such as meters or millimeters). The pixel-to-physical-scale conversion relation must therefore be calibrated so that pixel displacements can be converted into physically meaningful displacement results; this is external-parameter calibration. No unified solution to this calibration problem has yet appeared, and the biggest shortcoming shared by the existing solutions is that they require human intervention.
Consequently, machine-vision displacement measurement at the present stage cannot achieve truly automatic measurement, and a calibration process that requires human intervention greatly raises the application barrier and the measurement difficulty of the technology. In practical applications the conditions are complex and variable; on an engineering site, for example, the skill of the experimenters varies widely, so a calibration method requiring human intervention is very likely to affect the measurement result and greatly reduces its reliability. Realizing a self-calibration method is therefore of great significance for automatic machine-vision displacement measurement.
Disclosure of Invention
In view of the above problems in the prior art, the invention provides a machine-vision-based automatic displacement measurement method and system, aiming to solve the problem that existing external-parameter calibration requires human intervention.
In order to solve the technical problems, the invention is realized by the following technical scheme:
according to a first aspect of the present invention, there is provided a machine vision-based automatic displacement measurement method, comprising:
s11: calibrating a positioning point group of the target pattern;
s12: identifying and extracting a positioning point group of the target pattern to obtain image plane coordinates of each positioning point in the positioning point group;
s13: and establishing a two-dimensional coordinate system or a three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further transforming the image plane coordinates of the image to be detected into object plane coordinates.
Preferably, the S11 includes:
s111: calibrating the center of the target pattern to obtain a center positioning point;
s112: calibrating the edge of the target pattern to obtain a plurality of edge positioning points, wherein the edge positioning points can be points close to the boundary of the target pattern or points on the boundary of the target pattern.
Preferably, the S12 includes:
S121: identifying and extracting a central positioning point of the target pattern to obtain the image-plane coordinates of the central positioning point;
S122: identifying and extracting a plurality of edge positioning points of the target pattern to obtain the image-plane coordinates of the plurality of edge positioning points.
Preferably, S121 further includes: carrying out binarization processing on the target pattern.
Preferably, the center positioning point in S11 is the center of a center positioning circle, and each edge positioning point is the center of an edge positioning circle; correspondingly,
the S121 includes:
s1211: performing edge detection on the target pattern by adopting a boundary extraction method, and extracting an edge topology table with edge grade information;
s1212: locking the boundary of the central positioning circle by searching the edge grade information of the edge topology table;
s1213: processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, namely the image plane coordinates of the central positioning point;
the S122 includes:
s1221: carrying out ellipse detection on the target pattern by adopting a boundary extraction method, and locking the boundaries of the plurality of edge positioning points;
S1222: processing the boundaries of the edge positioning points by least-squares fitting to obtain the circle-center coordinates of the edge positioning points, namely the image-plane coordinates of the edge positioning points.
Preferably, the center positioning circles in S111 include at least two concentric circles, and the radii of the at least two concentric circles are different; correspondingly,
the S1213 is: and processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, and averaging the center coordinates of at least two central positioning circles to obtain the image plane coordinates of the central positioning point.
Preferably, the establishing of a two-dimensional coordinate system of the object plane in S13, and the further transformation of the image-plane coordinates of the image to be measured into object-plane coordinates, includes:
S1311: solving the perspective transformation matrix that maps the image plane to the two-dimensional coordinate system of the object plane, using each positioning point in the positioning point group;
S1312: transforming the image-plane coordinates of the image to be measured into two-dimensional object-plane coordinates by using the perspective transformation matrix, the transformation formula being:

$$\hat{P} = \begin{pmatrix} x \\ y \\ s \end{pmatrix} = W P$$

wherein x/s is the abscissa and y/s the ordinate of the two-dimensional object-plane coordinate system, s is a proportionality coefficient, W is the perspective transformation matrix, P is the homogeneous coordinate vector of the image-plane point, and $\hat{P}$ is the coordinate vector of the object-plane point.
Preferably, the target pattern in S11 includes: a left view target pattern obtained by one camera of the binocular camera set and a right view target pattern obtained by the other camera;
the image plane coordinates of each positioning point in the positioning point group obtained in S12 include: the image plane coordinates of the left view and the image plane coordinates of the right view;
In S13, the establishing of a three-dimensional coordinate system of the object plane, and the further transformation of the image-plane coordinates of the image to be measured into object-plane coordinates, includes:
S1321: reconstructing the left-view image-plane coordinates and right-view image-plane coordinates of each positioning point according to the internal-parameter calibration result of the binocular camera set, so as to obtain the three-dimensional coordinate result $P_{cam}$ of each positioning point in the coordinate system of the binocular camera set;
S1322: obtaining, by least-squares fitting, the object-plane equation of the plane where the target pattern lies, using the three-dimensional coordinate results $P_{cam}$ of the positioning points in the coordinate system of the binocular camera set;
S1323: establishing the x and y axes in the object plane and the z axis along the normal direction of the object plane, taking the three-dimensional coordinates of the central positioning point $P_c$ of the positioning point group as the origin of the coordinate system, and thereby establishing the three-dimensional coordinate system of the object plane;
S1324: letting R be the matrix formed by the normalized coordinates of the three axes of the object-plane coordinate system expressed in the coordinate system of the binocular camera set, and taking $P_c$ as the translation vector, transforming, by coordinate transformation, the three-dimensional coordinate result $P_{cam}$ of the image to be measured in the coordinate system of the binocular camera set into the coordinate result $P_{world}$ in the three-dimensional coordinate system of the object plane, the transformation formula being:

$$P_{world} = R^{\mathrm{T}}\left(P_{cam} - \tilde{P}_{c}\right)$$

wherein $\tilde{P}_{c}$ denotes the three-dimensional coordinates of $P_c$.
According to a second aspect of the present invention, there is provided a machine vision based automatic displacement measuring system comprising: the device comprises a positioning point group calibration module, an image plane coordinate acquisition module and an object plane coordinate system acquisition module; wherein,
the positioning point group calibration module is used for calibrating a positioning point group of the target pattern;
the image plane coordinate acquisition module is used for identifying and extracting a positioning point group of the target pattern to obtain image plane coordinates of each positioning point in the positioning point group;
the object plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further converting the image plane coordinates of the image to be detected into object plane coordinates.
Preferably, the image plane coordinate acquiring module includes: the device comprises an edge detection unit, a boundary locking unit of a center positioning circle, an image plane coordinate obtaining unit of the center positioning point, a boundary locking unit of the edge positioning circle and an image plane coordinate obtaining unit of the edge positioning point; wherein,
the positioning point group in the positioning point group calibration module comprises: the positioning device comprises a central positioning point and an edge positioning point, wherein the central positioning point is the circle center of a central positioning circle, and the edge positioning point is the circle center of an edge positioning circle;
the edge detection unit is used for carrying out edge detection on the target pattern by adopting a boundary extraction method and extracting an edge topology table with edge grade information;
the boundary locking unit of the central positioning circle is used for locking the boundary of the central positioning circle by searching the edge grade information of the edge topology table;
the image plane coordinate obtaining unit of the central positioning point is used for processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinate of the central positioning circle, namely the image plane coordinate of the central positioning point;
the boundary locking unit of the edge positioning circle is used for carrying out ellipse detection on the target pattern by adopting a boundary extraction method and locking the boundaries of the edge positioning points;
the image plane coordinate obtaining unit of the edge positioning points is used for processing the boundaries of the edge positioning points by adopting least square fitting to obtain circle center coordinates of the edge positioning points, namely the image plane coordinates of the edge positioning points.
Compared with the prior art, the invention has the following advantages:
according to the displacement automatic measurement method and system based on the machine vision, the target pattern is calibrated, the coordinate system of the object plane is obtained according to the calibrated target pattern, and the image plane coordinate of the image to be measured can be converted into the object plane coordinate, so that automatic external reference calibration is realized, manual access is not needed, and the influence of a calibration mode with manual intervention on the measurement result is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of an automatic displacement measurement method based on machine vision according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a positioning point group of a self-calibration target pattern according to a preferred embodiment of the present invention;
FIG. 3 is a diagram illustrating a self-calibration result in a two-dimensional displacement measurement scenario based on machine vision according to a preferred embodiment of the present invention;
fig. 4 is a diagram illustrating a self-calibration result in a three-dimensional displacement measurement scenario based on machine vision according to a preferred embodiment of the present invention.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In one embodiment, the present invention provides a displacement automatic measurement method based on machine vision, please refer to fig. 1, which includes:
s11: calibrating a positioning point group of the target pattern;
S12: identifying and extracting a positioning point group of the target pattern to obtain the image-plane coordinates of each positioning point in the positioning point group;
s13: and establishing a two-dimensional coordinate system or a three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further transforming the image plane coordinates of the image to be detected into object plane coordinates.
The embodiment of the invention requires no human involvement, avoiding the influence that a manually assisted calibration procedure has on the measurement result.
Preferably, in an embodiment, S11 may further include:
s111: calibrating the center of the target pattern to obtain a center positioning point;
s112: and calibrating the edge of the target pattern to obtain a plurality of edge positioning points.
Preferably, in an embodiment, S12 includes:
S121: identifying and extracting a central positioning point of the target pattern to obtain the image-plane coordinates of the central positioning point;
S122: identifying and extracting a plurality of edge positioning points of the target pattern to obtain the image-plane coordinates of the plurality of edge positioning points.
Preferably, in an embodiment, S121 further includes binarization of the target pattern; this makes the boundary of the target pattern more prominent, so that the subsequent center positioning point and edge positioning points are extracted more accurately.
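The text does not pin the binarization to a particular thresholding rule; a minimal sketch, assuming Otsu's method on an 8-bit grayscale image (the function name is illustrative, not taken from the patent), could look as follows:

```python
import numpy as np

def otsu_binarize(gray: np.ndarray) -> np.ndarray:
    """Binarize an 8-bit grayscale image with Otsu's threshold.

    The threshold maximizes the between-class variance, which makes the
    target boundaries stand out for the subsequent edge extraction."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: no valid split at this threshold
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return (gray >= best_t).astype(np.uint8) * 255
```

Any thresholding scheme that separates the dark target marks from the background would serve the same purpose here.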
Preferably, in an embodiment, the center positioning point in S11 is the center of the center positioning circle, and each edge positioning point is the center of an edge positioning circle; correspondingly,
s121 includes:
s1211: adopting a boundary extraction method to carry out edge detection on the target pattern, and extracting an edge topology table with edge grade information;
the edge level information may be: the center positioning circle is of a first grade, and the edge positioning circle is of a second grade; or other grading forms can also be adopted;
s1212: locking the boundary of the center positioning circle by searching the edge grade information of the edge topology table;
s1213: processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, namely the image plane coordinates of the central positioning point;
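The edge topology table of S1211 to S1212 behaves like the contour hierarchy that OpenCV's `findContours` returns in `RETR_TREE` mode, where each entry is `[next, previous, first_child, parent]`. Assuming that representation, and taking the edge grade to be the nesting depth (both assumptions; the patent names neither a library nor this exact grading), locking the central positioning circle can be sketched as:

```python
def edge_level(hierarchy, i):
    """Edge grade taken as nesting depth: the number of parent edges above
    contour i. `hierarchy` uses the [next, prev, first_child, parent] layout."""
    level = 0
    while hierarchy[i][3] != -1:  # walk up through parent contours
        i = hierarchy[i][3]
        level += 1
    return level

def lock_center_circle(hierarchy):
    """Under this grading, the innermost concentric circle of the center
    positioning point is the most deeply nested edge; return its index."""
    return max(range(len(hierarchy)), key=lambda i: edge_level(hierarchy, i))
```

With three concentric circles and four isolated corner circles, only the concentric boundaries accumulate depth, so the search singles out the center group.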
s122 includes:
s1221: carrying out ellipse detection on the target pattern by adopting a boundary extraction method, and locking the boundaries of a plurality of edge positioning points;
the target pattern is circular, and the camera is just opposite to the shooting target, so that the target is circular in the camera picture; however, when the camera shoots the target obliquely, the target pattern presents an elliptical shape in the camera picture, and the ellipse detection can detect the perfect circle and the ellipse, so the detection structure can be more accurate by adopting the ellipse detection;
S1222: processing the boundaries of the plurality of edge positioning points by least-squares fitting to obtain their circle-center coordinates, namely the image-plane coordinates of the plurality of edge positioning points.
Preferably, in an embodiment, there are at least two center positioning circles in S111; they are concentric and their radii differ. Referring to fig. 2, three concentric circles are shown as an example. In different embodiments, the number of center positioning circles may also be two, or more than three.
Correspondingly, S1213 is: and processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, and averaging the center coordinates of at least two central positioning circles to obtain the image plane coordinates of the central positioning point.
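The least-squares fitting of S1213 and the averaging over the concentric centers can be sketched with the algebraic (Kåsa) circle fit; this is an illustrative sketch with assumed function names, not the patent's exact implementation:

```python
import numpy as np

def fit_circle_center(pts: np.ndarray) -> np.ndarray:
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 + D*x + E*y + F = 0
    for (D, E, F) in a linear least-squares sense; the center is (-D/2, -E/2)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([-D / 2.0, -E / 2.0])

def center_point_from_concentric(boundaries) -> np.ndarray:
    """Image-plane coordinate of the central positioning point: the mean of
    the fitted centers of the concentric circle boundaries."""
    return np.mean([fit_circle_center(b) for b in boundaries], axis=0)
```

Averaging the concentric centers reduces the influence of per-boundary pixel noise on the final center coordinate.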
In a two-dimensional displacement measurement scene, the quantitative transformation from image-plane coordinates to object-plane coordinates in S13 is a homography matrix from the image-plane coordinate system to a two-dimensional object-plane coordinate system in space.
In one embodiment, the establishing of a two-dimensional coordinate system of the object plane in S13, and the further transformation of the image-plane coordinates of the image to be measured into object-plane coordinates, includes:
S1311: solving the perspective transformation matrix W that maps the image plane to the two-dimensional coordinate system of the object plane, using each positioning point in the positioning point group;
S1312: transforming the image-plane coordinates of the image to be measured into two-dimensional object-plane coordinates with the perspective transformation matrix, the transformation formula being:

$$\hat{P} = \begin{pmatrix} x \\ y \\ s \end{pmatrix} = W P$$

wherein x/s is the abscissa and y/s the ordinate of the two-dimensional object-plane coordinate system, s is a proportionality coefficient, W is the perspective transformation matrix, P is the homogeneous coordinate vector of the image-plane point, and $\hat{P}$ is the coordinate vector of the object-plane point.
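S1311 does not fix a solver; one common way to estimate W from five image/object point correspondences is the direct linear transform (DLT), sketched below with illustrative names. Applying W then follows the transformation formula above, dividing by the proportionality coefficient s:

```python
import numpy as np

def solve_perspective_matrix(img_pts, obj_pts):
    """Estimate the 3x3 perspective matrix W mapping image-plane points to
    object-plane points from >= 4 correspondences, via the DLT: each pair
    contributes two linear constraints, and W (up to scale) is the null
    vector of the stacked system."""
    A = []
    for (u, v), (x, y) in zip(img_pts, obj_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)  # right singular vector of smallest singular value

def image_to_object(W, img_pt):
    """Apply (x, y, s)^T = W * P and return the object-plane point (x/s, y/s)."""
    x, y, s = W @ np.array([img_pt[0], img_pt[1], 1.0])
    return np.array([x / s, y / s])
```

Because W is determined only up to scale, the division by s in `image_to_object` makes the overall scale of the estimate irrelevant.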
In a three-dimensional displacement measurement scene, the quantitative transformation from image-plane coordinates to object-plane coordinates in S13 is the linear transformation from the coordinate system of the binocular camera set to a three-dimensional object-plane coordinate system in space, composed of a corresponding translation vector and rotation matrix.
In one embodiment, the target pattern in S11 includes: a left view target pattern obtained by one camera of the binocular camera set and a right view target pattern obtained by the other camera;
the image plane coordinates of each positioning point in the positioning point group obtained in S12 include: the image plane coordinates of the left view and the image plane coordinates of the right view;
The establishing of a three-dimensional coordinate system of the object plane in S13, and the further transformation of the image-plane coordinates of the image to be measured into object-plane coordinates, includes:
S1321: reconstructing the left-view image-plane coordinates and right-view image-plane coordinates of each positioning point according to the internal-parameter calibration result of the binocular camera set, so as to obtain the three-dimensional coordinate result $P_{cam}$ of each positioning point in the coordinate system of the binocular camera set;
S1322: solving the object-plane equation of the plane where the target pattern lies, using the three-dimensional coordinate results $P_{cam}$ of the positioning points in the coordinate system of the binocular camera set;
S1323: establishing the x and y axes in the object plane and the z axis along the normal direction of the object plane, taking the three-dimensional coordinates of the central positioning point $P_c$ of the positioning point group as the origin of the coordinate system, and thereby establishing the three-dimensional coordinate system of the object plane;
S1324: letting R be the matrix formed by the normalized coordinates of the three axes of the object-plane coordinate system expressed in the coordinate system of the binocular camera set, and taking $P_c$ as the translation vector, transforming, by coordinate transformation, the three-dimensional coordinate result $P_{cam}$ of the image to be measured in the coordinate system of the binocular camera set into the coordinate result $P_{world}$ in the three-dimensional coordinate system of the object plane, the transformation formula being:

$$P_{world} = R^{\mathrm{T}}\left(P_{cam} - \tilde{P}_{c}\right)$$

wherein $\tilde{P}_{c}$ denotes the three-dimensional coordinates of $P_c$.
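Under stated assumptions, S1322 to S1324 can be sketched as follows: the least-squares plane fit is done here via the SVD of the centred point cloud (an equivalent formulation), and an arbitrary in-plane direction is chosen for the x axis, both implementation choices the patent leaves open; names are illustrative.

```python
import numpy as np

def object_plane_frame(points_cam: np.ndarray) -> np.ndarray:
    """Build the object-plane frame from positioning points given in the
    camera-set frame. Returns R, whose columns are the normalized x, y, z
    axes of the object-plane frame expressed in the camera-set frame."""
    centered = points_cam - points_cam.mean(axis=0)
    # Least-squares plane normal = right singular vector of the smallest
    # singular value of the centred point cloud.
    _, _, vt = np.linalg.svd(centered)
    z = vt[2] / np.linalg.norm(vt[2])
    x = vt[0] / np.linalg.norm(vt[0])  # any in-plane direction serves as x
    y = np.cross(z, x)                 # completes a right-handed frame
    return np.column_stack([x, y, z])

def cam_to_world(R: np.ndarray, pc_cam: np.ndarray, p_cam: np.ndarray) -> np.ndarray:
    """P_world = R^T (P_cam - P_c): camera-set frame -> object-plane frame,
    with the central positioning point P_c as the origin."""
    return R.T @ (p_cam - pc_cam)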
The self-calibration method for machine-vision displacement measurement of the above embodiments is described below through a specific example; the target pattern, the number of positioning points and the parameters are, however, not limited to this example.
Referring to fig. 2, the target pattern in this example is a square, with three center positioning circles and four edge positioning circles. In different embodiments, the edge positioning circles need not number four, nor sit at the four positions shown in the figure, as long as the plane where the target pattern lies can be located.
S121 includes:
S1211: performing edge detection on the target pattern by the boundary-extraction method and extracting an edge topology table with edge level information. Taking fig. 2 as an example, edge detection yields the edge shapes of the three concentric circles and the four corner points; the edge level information may be: the edge of the innermost concentric circle is the first level, the edge of the next concentric circle is the second level, and the edge of the outermost concentric circle and the edges of the four corner points are the third level;
s1212: locking the boundaries of the three center positioning circles by searching the edge grade information of the edge topology table;
S1213: processing the boundaries of the three central positioning circles by least-squares fitting to obtain the center coordinates of the three ellipses, and taking their mean as the image-plane coordinate $P_c$ of the central positioning point;
S122 includes:
S1221: performing ellipse detection on the target pattern by the boundary-extraction method and screening out all elliptical boundaries. In a real measurement, the target is detected within the whole captured image, so ellipse detection will return every elliptical feature boundary in the photographed scene; the four corner points are then screened out of these. The specific screening method is: find the four elliptical boundaries closest to the central positioning point, which are the elliptical boundaries of the four edge positioning points;
S1222: processing the elliptical boundaries of the four edge positioning points by least-squares fitting to obtain their circle-center coordinates, denoted $P_1$, $P_2$, $P_3$ and $P_4$, respectively.
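The screening rule of S1221, choosing the four elliptical boundaries closest to the central positioning point, is a simple nearest-k selection. A sketch with illustrative names, abstracting each detected ellipse to its fitted center:

```python
def nearest_edge_points(center, candidates, k=4):
    """Pick the k candidate ellipse centers nearest the central positioning
    point, i.e. the centers of the edge positioning circles."""
    dist2 = lambda p: (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2
    return sorted(candidates, key=dist2)[:k]
```

Spurious elliptical features elsewhere in the scene are rejected automatically, since they lie farther from the target center than the four corner circles.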
In the two-dimensional displacement measurement scenario, S13 specifically includes:
S1311: solving the perspective transformation matrix W mapping the image plane to the object plane, using the five positioning points $P_c$, $P_1$, $P_2$, $P_3$ and $P_4$;
S1312: for the pixel-coordinate result of the image to be measured obtained in each test, transforming the image-plane coordinates into object-plane coordinates with the perspective transformation matrix W, according to:

$$\hat{P} = \begin{pmatrix} x \\ y \\ s \end{pmatrix} = W P$$

wherein x/s is the abscissa and y/s the ordinate of the two-dimensional object-plane coordinate system, s is a proportionality coefficient, P is the homogeneous coordinate vector of the image-plane point, and $\hat{P}$ is the coordinate vector of the object-plane point.
Fig. 3 shows an object plane coordinate system obtained by self-calibration in a two-dimensional displacement measurement scenario by using the self-calibration method of the above example, where x and y axes are two axes of the coordinate system.
In the three-dimensional displacement measurement scenario, S13 specifically includes:
S1321: identifying, in the left view and the right view obtained by a binocular camera set, the coordinates of the five positioning points Pc and P1, P2, P3, P4, and reconstructing the three-dimensional coordinate result Pcam of the five positioning points in the coordinate system of the binocular camera set according to the internal reference calibration result of the binocular camera set;
S1322: solving the object plane equation of the plane where the target pattern is located by using the three-dimensional coordinate results Pcam of the five positioning points in the coordinate system of the binocular camera set;
S1323: building the x axis and the y axis in the object plane, building the z axis in the normal direction of the object plane, and taking the three-dimensional coordinate of the central positioning point Pc in the positioning point group as the origin of the coordinate system, thereby establishing the three-dimensional coordinate system of the object plane;
S1324: setting, as R, the matrix formed by the normalized three-axis coordinates of the three-dimensional coordinate system of the object plane in the coordinate system of the binocular camera set, taking Pc as the translation vector, and converting, through a coordinate transformation, the three-dimensional coordinate result Pcam of the image to be measured in the coordinate system of the binocular camera set into the coordinate result Pworld in the three-dimensional coordinate system of the object plane, the conversion formula being:
Pworld = R^-1 (Pcam - T)

wherein T is the three-dimensional coordinate of Pc in the coordinate system of the binocular camera set (R being orthonormal, R^-1 = R^T).
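Steps S1322 to S1324 can be sketched as follows: a least-squares plane fit via SVD, construction of the object-plane axes, and the rigid conversion Pworld = R^T (Pcam - T). The sign convention chosen for the plane normal and all sample data are assumptions for illustration:

```python
import numpy as np

def object_plane_frame(pts_cam, Pc_cam, Px_cam):
    """Steps S1322-S1323: least-squares plane through the positioning points
    (SVD of the centred cloud), z along the normal, x towards Px_cam.
    Returns R (columns = axis directions in camera coordinates) and T = Pc_cam."""
    P = np.asarray(pts_cam, dtype=float)
    _, _, Vt = np.linalg.svd(P - P.mean(axis=0))
    z = Vt[-1]              # normal = singular vector of smallest singular value
    if z[2] < 0:            # sign convention chosen here for determinism
        z = -z              # (the patent fixes no particular orientation)
    x = np.asarray(Px_cam, dtype=float) - np.asarray(Pc_cam, dtype=float)
    x = x - np.dot(x, z) * z        # project the x direction into the plane
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack([x, y, z]), np.asarray(Pc_cam, dtype=float)

def to_object_plane(P_cam, R, T):
    """Step S1324: Pworld = R^-1 (Pcam - T); R is orthonormal, so R^-1 = R^T."""
    return R.T @ (np.asarray(P_cam, dtype=float) - T)

# Synthetic positioning points on a tilted plane (rotation R0, offset T0).
a = np.deg2rad(25.0)
R0 = np.array([[np.cos(a), 0.0, np.sin(a)],
               [0.0, 1.0, 0.0],
               [-np.sin(a), 0.0, np.cos(a)]])
T0 = np.array([5.0, -2.0, 400.0])
world = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (-30.0, 0.0), (0.0, -30.0)]
pts_cam = [T0 + R0 @ np.array([u, v, 0.0]) for u, v in world]

R, T = object_plane_frame(pts_cam, pts_cam[0], pts_cam[1])
print(to_object_plane(pts_cam[2], R, T))  # close to [0, 30, 0]
```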
Fig. 4 shows an object plane coordinate system obtained by self-calibration in a three-dimensional displacement measurement scenario by using the self-calibration method of the above example, where x, y, and z axes are three axes of the coordinate system.
In one embodiment, there is also provided a machine vision-based automatic displacement measurement system, comprising: a positioning point group calibration module, an image plane coordinate acquisition module and an object plane coordinate system acquisition module; wherein,
the positioning point group calibration module is used for calibrating a positioning point group of the target pattern;
the image plane coordinate acquisition module is used for identifying and extracting the positioning point group of the target pattern to obtain the image plane coordinates of each positioning point in the positioning point group;
the object plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further converting the image plane coordinates of the image to be detected into object plane coordinates.
In one embodiment, the positioning point group calibration module includes: a central positioning point calibration unit and an edge positioning point calibration unit; wherein,
the central positioning point calibration unit is used for calibrating the center of the target pattern to obtain a central positioning point;
the edge positioning point calibration unit is used for calibrating the edge of the target pattern to obtain a plurality of edge positioning points.
In one embodiment, the image plane coordinate acquiring module includes: the device comprises an edge detection unit, a boundary locking unit of a center positioning circle, an image plane coordinate obtaining unit of the center positioning point, a boundary locking unit of the edge positioning circle and an image plane coordinate obtaining unit of the edge positioning point; wherein,
the positioning point group in the positioning point group calibration module comprises: the positioning device comprises a central positioning point and an edge positioning point, wherein the central positioning point is the circle center of a central positioning circle, and the edge positioning point is the circle center of an edge positioning circle;
the edge detection unit is used for carrying out edge detection on the target pattern by adopting a boundary extraction method and extracting an edge topology table with edge grade information;
the boundary locking unit of the central positioning circle is used for locking the boundary of the central positioning circle by searching the edge grade information of the edge topology table;
the image plane coordinate obtaining unit of the central positioning point is used for processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, namely the image plane coordinates of the central positioning point;
the boundary locking unit of the edge positioning circle is used for carrying out ellipse detection on the target pattern by adopting a boundary extraction method and locking the boundaries of the edge positioning points;
the image plane coordinate obtaining unit of the edge positioning point is used for processing the boundaries of the edge positioning points by adopting least square fitting to obtain circle center coordinates of the edge positioning points, namely the image plane coordinates of the edge positioning points.
In one embodiment, the image plane coordinate acquisition module further includes: a binarization processing unit, which is used for performing binarization processing on the target pattern, so that the boundary of the target pattern is more prominent and the subsequent edge detection result is more accurate.
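The text does not tie the binarization unit to a particular threshold rule; as one plausible choice, Otsu's method picks the threshold that maximizes between-class variance. A numpy sketch on a synthetic target patch (the patch and grey levels are assumptions):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an 8-bit image: choose the threshold maximising
    between-class variance, then binarise with it."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    t = int(np.argmax(sigma_b2))
    return t, (gray > t).astype(np.uint8) * 255

# Synthetic target patch: dark circle (value 20) on a bright background (220).
yy, xx = np.mgrid[0:64, 0:64]
gray = np.where((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2, 20, 220).astype(np.uint8)
t, binary = otsu_threshold(gray)
print(t)  # a threshold between the two grey levels, so the circle separates cleanly
```

A clean bimodal separation like this is what makes the subsequent boundary extraction and ellipse fitting stable.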
The technology adopted by each module can refer to the description of the target self-calibration method based on displacement measurement of machine vision, and is not repeated herein.
The method and the system in the embodiment of the invention have a self-calibration function, can automatically construct a quantitative conversion relation from an image plane coordinate to a true object plane coordinate at any camera shooting angle, and can track the target by adopting various machine vision methods, thereby realizing automatic measurement of machine vision displacement without human intervention.
It should be noted that, the steps in the method provided by the present invention may be implemented by using corresponding modules, devices, units, and the like in the system, and those skilled in the art may refer to the technical solution of the system to implement the step flow of the method, that is, the embodiment in the system may be understood as a preferred example for implementing the method, and details are not described herein.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices provided by the present invention purely as computer-readable program code, the method steps can be logically programmed so that the system and its various devices realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its various devices provided by the present invention can be regarded as a hardware component, and the devices included therein for realizing various functions can also be regarded as structures within the hardware component; the devices for realizing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
In the description herein, reference to the terms "an implementation," "an embodiment," "a specific implementation," "an example" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and not to limit the invention. Any modifications and variations within the scope of the description, which may occur to those skilled in the art, are intended to be within the scope of the invention.

Claims (10)

1. An automatic displacement measurement method based on machine vision, characterized by comprising the following steps:
S11: calibrating a positioning point group of a target pattern;
S12: identifying and extracting the positioning point group of the target pattern to obtain image plane coordinates of each positioning point in the positioning point group;
S13: establishing a two-dimensional coordinate system or a three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further transforming the image plane coordinates of the image to be measured into object plane coordinates.
2. The machine-vision-based automatic displacement measurement method of claim 1, wherein the S11 includes:
S111: calibrating the center of the target pattern to obtain a central positioning point;
S112: calibrating the edge of the target pattern to obtain a plurality of edge positioning points.
3. The machine-vision-based automatic displacement measurement method of claim 2, wherein the S12 includes:
S121: identifying and extracting the central positioning point of the target pattern to obtain the image plane coordinates of the central positioning point;
S122: identifying and extracting the plurality of edge positioning points of the target pattern to obtain the image plane coordinates of the plurality of edge positioning points.
4. The machine-vision-based automatic displacement measurement method of claim 3, wherein the step S121 is preceded by the step of: performing binarization processing on the target pattern.
5. The machine-vision-based automatic displacement measurement method of claim 3, wherein the central positioning point in S11 is the circle center of a central positioning circle, and the edge positioning points are the circle centers of edge positioning circles; correspondingly,
the S121 includes:
S1211: performing edge detection on the target pattern by adopting a boundary extraction method, and extracting an edge topology table with edge grade information;
S1212: locking the boundary of the central positioning circle by searching the edge grade information of the edge topology table;
S1213: processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, namely the image plane coordinates of the central positioning point;
the S122 includes:
S1221: performing ellipse detection on the target pattern by adopting a boundary extraction method, and locking the boundaries of the plurality of edge positioning points;
S1222: processing the boundaries of the plurality of edge positioning points by adopting least square fitting to obtain the circle center coordinates of the plurality of edge positioning points, namely the image plane coordinates of the plurality of edge positioning points.
6. The machine-vision-based automatic displacement measurement method according to claim 5, wherein the central positioning circles in S111 include at least two, the at least two central positioning circles are concentric circles, and the at least two central positioning circles have different radii; correspondingly,
the S1213 is: and processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinates of the central positioning circle, and averaging the center coordinates of at least two central positioning circles to obtain the image plane coordinates of the central positioning point.
7. The machine-vision-based automatic displacement measurement method according to any one of claims 1 to 6, wherein, in S13, establishing the two-dimensional coordinate system of the object plane and transforming the image plane coordinates of the image to be measured into object plane coordinates comprises:
S1311: solving, by using each positioning point in the positioning point group, the perspective transformation matrix mapping the image plane to the two-dimensional coordinate system of the object plane;
S1312: transforming the image plane coordinates of the image to be measured into two-dimensional coordinates of the object plane by using the perspective transformation matrix, the transformation formula being:
P̂ = (x, y, s)^T = W · P

wherein x/s is the abscissa of the two-dimensional coordinate system of the object plane, y/s is the ordinate of the two-dimensional coordinate system of the object plane, s is a proportionality coefficient, W is the perspective transformation matrix, P is the homogeneous coordinate vector of the image plane point, and P̂ is the coordinate vector of the object plane point.
8. The machine-vision-based automatic displacement measurement method of any one of claims 1 to 6, wherein the target pattern in S11 comprises: a left view target pattern obtained by one camera of the binocular camera set and a right view target pattern obtained by the other camera;
the image plane coordinates of each positioning point in the positioning point group obtained in S12 include: the image plane coordinates of the left view and the image plane coordinates of the right view;
in S13, establishing the three-dimensional coordinate system of the object plane and further transforming the image plane coordinates of the image to be measured into object plane coordinates comprises:
S1321: reconstructing, according to the internal reference calibration result of the binocular camera set, the image plane coordinates of the left view and the image plane coordinates of the right view of each positioning point to obtain the three-dimensional coordinate result Pcam of each positioning point in the coordinate system of the binocular camera set;
S1322: obtaining the object plane equation of the plane where the target pattern is located through least square fitting, using the three-dimensional coordinate results Pcam of the positioning points in the coordinate system of the binocular camera set;
S1323: building the x axis and the y axis in the object plane, building the z axis in the normal direction of the object plane, and taking the three-dimensional coordinate of the central positioning point Pc in the positioning point group as the origin of the coordinate system, thereby establishing the three-dimensional coordinate system of the object plane;
S1324: setting, as R, the matrix formed by the normalized three-axis coordinates of the three-dimensional coordinate system of the object plane in the coordinate system of the binocular camera set, taking Pc as the translation vector, and converting, through a coordinate transformation, the three-dimensional coordinate result Pcam of the image to be measured in the coordinate system of the binocular camera set into the coordinate result Pworld in the three-dimensional coordinate system of the object plane, the conversion formula being:
Pworld = R^-1 (Pcam - T)

wherein T is the three-dimensional coordinate of Pc in the coordinate system of the binocular camera set.
9. An automatic displacement measurement system based on machine vision, characterized by comprising: a positioning point group calibration module, an image plane coordinate acquisition module and an object plane coordinate system acquisition module; wherein,
the positioning point group calibration module is used for calibrating a positioning point group of the target pattern;
the image plane coordinate acquisition module is used for identifying and extracting a positioning point group of the target pattern to obtain image plane coordinates of each positioning point in the positioning point group;
the object plane coordinate system acquisition module is used for establishing a two-dimensional or three-dimensional coordinate system of the object plane according to the image plane coordinates of the positioning points, and further converting the image plane coordinates of the image to be detected into object plane coordinates.
10. The automatic displacement measuring system based on machine vision according to claim 9, wherein the image plane coordinate acquiring module comprises: the device comprises an edge detection unit, a boundary locking unit of a center positioning circle, an image plane coordinate obtaining unit of the center positioning point, a boundary locking unit of the edge positioning circle and an image plane coordinate obtaining unit of the edge positioning point; wherein,
the positioning point group in the positioning point group calibration module comprises: the positioning device comprises a central positioning point and an edge positioning point, wherein the central positioning point is the circle center of a central positioning circle, and the edge positioning point is the circle center of an edge positioning circle;
the edge detection unit is used for carrying out edge detection on the target pattern by adopting a boundary extraction method and extracting an edge topology table with edge grade information;
the boundary locking unit of the central positioning circle is used for locking the boundary of the central positioning circle by searching the edge grade information of the edge topology table;
the image plane coordinate obtaining unit of the central positioning point is used for processing the boundary of the central positioning circle by adopting least square fitting to obtain the center coordinate of the central positioning circle, namely the image plane coordinate of the central positioning point;
the boundary locking unit of the edge positioning circle is used for carrying out ellipse detection on the target pattern by adopting a boundary extraction method and locking the boundaries of the edge positioning points;
the image plane coordinate obtaining unit of the edge positioning points is used for processing the boundaries of the edge positioning points by adopting least square fitting to obtain circle center coordinates of the edge positioning points, namely the image plane coordinates of the edge positioning points.
CN202210104210.XA 2022-01-28 2022-01-28 Automatic displacement measurement method and system based on machine vision Active CN114440776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210104210.XA CN114440776B (en) 2022-01-28 2022-01-28 Automatic displacement measurement method and system based on machine vision

Publications (2)

Publication Number Publication Date
CN114440776A true CN114440776A (en) 2022-05-06
CN114440776B CN114440776B (en) 2024-07-19

Family

ID=81368838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210104210.XA Active CN114440776B (en) 2022-01-28 2022-01-28 Automatic displacement measurement method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN114440776B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06249615A (en) * 1993-02-25 1994-09-09 Sony Corp Position detecting method
JPH06258028A (en) * 1993-03-10 1994-09-16 Nippondenso Co Ltd Method and system for visually recognizing three dimensional position and attitude
JP2004077377A (en) * 2002-08-21 2004-03-11 Kurabo Ind Ltd Displacement measuring method and displacement measuring device by photogrammetry
CN101866496A (en) * 2010-06-04 2010-10-20 西安电子科技大学 Augmented reality method based on concentric ring pattern group
CN102944191A (en) * 2012-11-28 2013-02-27 北京航空航天大学 Method and device for three-dimensional vision measurement data registration based on planar circle target
CN105091772A (en) * 2015-05-26 2015-11-25 广东工业大学 Plane object two-dimension deflection measuring method
CN107741224A (en) * 2017-08-28 2018-02-27 浙江大学 A kind of AGV automatic-posture-adjustment localization methods of view-based access control model measurement and demarcation
KR20180125095A (en) * 2017-05-12 2018-11-22 경북대학교 산학협력단 Development for Displacement Measurement System Based on a PTZ Camera and Method thereof
CN109816733A (en) * 2019-01-14 2019-05-28 京东方科技集团股份有限公司 Camera parameter initial method and device, camera parameter scaling method and equipment, image capturing system
CN110207605A (en) * 2019-06-13 2019-09-06 广东省特种设备检测研究院东莞检测院 A kind of measuring device and method of the metal structure deformation based on machine vision
CN110415300A (en) * 2019-08-02 2019-11-05 哈尔滨工业大学 A kind of stereoscopic vision structure dynamic displacement measurement method for building face based on three targets
CN111089569A (en) * 2019-12-26 2020-05-01 中国科学院沈阳自动化研究所 Large box body measuring method based on monocular vision
CN111402343A (en) * 2020-04-09 2020-07-10 深圳了然视觉科技有限公司 High-precision calibration plate and calibration method
CN112362034A (en) * 2020-11-11 2021-02-12 上海电器科学研究所(集团)有限公司 Solid engine multi-cylinder section butt joint guiding measurement algorithm based on binocular vision
CN113310426A (en) * 2021-05-14 2021-08-27 昆山市益企智能装备有限公司 Thread parameter measuring method and system based on three-dimensional profile
CN113610917A (en) * 2021-08-09 2021-11-05 河南工业大学 Circular array target center image point positioning method based on blanking points

Also Published As

Publication number Publication date
CN114440776B (en) 2024-07-19

Similar Documents

Publication Publication Date Title
CN109146980B (en) Monocular vision based optimized depth extraction and passive distance measurement method
CN106651942B (en) Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
US10621791B2 (en) Three-dimensional modeling method and system thereof
CN109211198B (en) Intelligent target detection and measurement system and method based on trinocular vision
CN109671174A (en) A kind of pylon method for inspecting and device
CN113450292B (en) High-precision visual positioning method for PCBA parts
US20130058526A1 (en) Device for automated detection of feature for calibration and method thereof
US8949060B2 (en) Inspection method
CN110415300A (en) A kind of stereoscopic vision structure dynamic displacement measurement method for building face based on three targets
CN110095089B (en) Method and system for measuring rotation angle of aircraft
CN112254656A (en) Stereoscopic vision three-dimensional displacement measurement method based on structural surface point characteristics
Wang et al. Error analysis and improved calibration algorithm for LED chip localization system based on visual feedback
CN107092905B (en) Method for positioning instrument to be identified of power inspection robot
CN113119129A (en) Monocular distance measurement positioning method based on standard ball
Su et al. A novel camera calibration method based on multilevel-edge-fitting ellipse-shaped analytical model
Zhao et al. Vision-based adaptive stereo measurement of pins on multi-type electrical connectors
CN116399314B (en) Calibrating device for photogrammetry and measuring method thereof
Wang et al. Target recognition and localization of mobile robot with monocular PTZ camera
CN107976146B (en) Self-calibration method and measurement method of linear array CCD camera
CN101894369B (en) Real-time method for computing focal length of camera from image sequence
CN118131140A (en) Neural network training method and global calibration method for FOD radar and camera
CN114440776A (en) Automatic displacement measuring method and system based on machine vision
CN111415384A (en) Industrial image component accurate positioning system based on deep learning
CN106622990B (en) Part fixation and recognition processing system
CN115661446A (en) Pointer instrument indication automatic reading system and method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant