CN110415304B - Vision calibration method and system - Google Patents


Info

Publication number
CN110415304B
Authority
CN
China
Prior art keywords
coordinate
target
calibration
coordinate system
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910701453.XA
Other languages
Chinese (zh)
Other versions
CN110415304A (en)
Inventor
胡坤
施越
Current Assignee
Beijing Boshi Intelligent Motion Technology Co ltd
Original Assignee
Beijing Boshi Intelligent Motion Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Boshi Intelligent Motion Technology Co ltd
Priority to CN201910701453.XA
Publication of CN110415304A
Application granted
Publication of CN110415304B
Legal status: Active

Classifications

    • G06K 19/06037 — Record carriers with optically detectable marking, multi-dimensional coding
    • G06K 19/06046 — Constructional details of the optically detectable marking
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The embodiments of the application disclose a vision calibration method and system. Multiple cameras each acquire a target image. For each target image, a first coordinate and a second coordinate of each calibration point are determined, i.e., the coordinate of the calibration point in the image coordinate system and its coordinate in a local target coordinate system, and a third coordinate is obtained, namely the coordinate of the two-dimensional code in the target image in the image coordinate system. The two-dimensional code information in the target image is extracted; it contains a fourth coordinate, the coordinate of the current two-dimensional code in the global target coordinate system. The second coordinate is corrected to a fifth coordinate, the coordinate of the calibration point in the global target coordinate system, and a mapping relation between the global target coordinate system and each image coordinate system is established from the first coordinate and the fifth coordinate. By using a target carrying two-dimensional codes, the local target coordinates of the calibration points are automatically corrected to global target coordinates through the two-dimensional code information, so that fully automatic calibration of the multi-camera vision system is achieved.

Description

Vision calibration method and system
Technical Field
The present disclosure relates to the field of vision calibration technologies, and in particular, to a vision calibration method and system.
Background
With the continuously increasing degree of factory automation, a large number of production and inspection devices need to be equipped with an image vision system to raise the automation level of the equipment. In production equipment, the image vision system is often used to guide the motion control system of the equipment to operate automatically, adapting to variations in the feeding position, size, and similar properties of the products to be processed, for example in COG (chip-on-glass) and FOG (film-on-glass) bonding equipment in the LCD industry.
In inspection equipment, the image vision system is often used for dimensional measurement, assembly-accuracy measurement, appearance inspection, and the like of processed products, to ensure the yield of the final product. High-precision industrial production and inspection equipment demands both high precision and a large field of view from the image vision system. A single-camera vision system can rarely satisfy both requirements at once, whereas a multi-camera vision system can.
Multi-camera vision systems generally need to achieve the unification of multi-camera coordinate systems by means of high-precision calibration targets to simultaneously meet the requirements of high precision and large field of view. And each camera respectively establishes a coordinate conversion relation with the target coordinate system, so that the coordinate systems of the cameras can be unified to the unique target coordinate system.
However, in the conventional multi-camera calibration method, the coordinates of the corner points of the checkerboard need to be extracted first, and then target coordinates corresponding to the markers in the calibration image need to be manually input, so as to establish a mapping relationship between an image coordinate system and a target coordinate system. The calibration method has low efficiency and complex operation, needs manual participation and cannot realize the full-automatic calibration of the multi-camera vision system.
Disclosure of Invention
The application discloses a visual calibration method and a system, which aim to solve the problems that the calibration method in the prior art is low in efficiency, complex in operation, needs manual participation and cannot realize full-automatic calibration of a multi-camera visual system.
In a first aspect of the present application, a method of visual calibration is disclosed, comprising:
acquiring target images by multiple cameras respectively, wherein at least one two-dimensional code is arranged on the target, and the field of view of each camera contains at least one complete two-dimensional code;
determining a first coordinate and a second coordinate of a calibration point in each target image, wherein the first coordinate is a coordinate of the calibration point in an image coordinate system, and the second coordinate is a coordinate of the calibration point in a local target coordinate system, the local target coordinate system is a coordinate system where a local target in the field of view of the camera is located at present, and the image coordinate system is a coordinate system where the camera is located at present;
obtaining a third coordinate, wherein the third coordinate is a coordinate of the two-dimensional code in the target image in the image coordinate system;
extracting two-dimension code information in the target image, wherein the two-dimension code information comprises a fourth coordinate, and the fourth coordinate refers to a coordinate of the current two-dimension code in a global target coordinate system, wherein the global target coordinate system is a coordinate system where the complete target is located;
correcting the second coordinate into a fifth coordinate according to the first coordinate, the second coordinate, the third coordinate and the fourth coordinate, wherein the fifth coordinate is a coordinate of the calibration point in the global target coordinate system;
and establishing a mapping relation between the global target coordinate system and each image coordinate system according to the first coordinate and the fifth coordinate.
Further, the determining the first and second coordinates of the calibration points in each of the target images includes:
extracting first coordinates of all the calibration points in each target image;
determining the topological relation between the calibration points in each target image according to the extracted first coordinates;
and determining a second coordinate of each calibration point in each target image according to the topological relation.
Further, when the target is a checkerboard target, the extracting first coordinates of all the calibration points in each of the target images includes:
calculating the pixel coordinates of the calibration point through a Hessian matrix, wherein the calibration point is the angular point;
calculating sub-pixel coordinates of the calibration point by using Taylor expansion, wherein the sub-pixel coordinates are the first coordinates.
Further, when the target is a checkerboard target, determining the topological relation between the calibration points in each target image according to the extracted first coordinates includes:
generating Delaunay triangles from the extracted first coordinates, where the calibration points are the corner points;
merging each Delaunay triangle with the adjacent triangle whose gray mean value is closest, forming quadrangles;
screening the quadrangles, where each quadrangle retained after screening simultaneously satisfies: the intersection point of its diagonals lies inside the quadrangle, the length ratio of its opposite sides lies within the interval (0.75, 1.25), and the length ratio of its adjacent sides lies within the interval (0.75, 1.25);
organizing the screened quadrangles into a topological network;
and determining the second coordinate according to the position of each quadrangle in the topological network.
Further, correcting the second coordinate to a fifth coordinate includes:
correcting the directions of the X axis and the Y axis of the coordinate of the calibration point;
correcting the positive direction of the X axis and the positive direction of the Y axis of the coordinate of the calibration point;
and correcting the initial value of the coordinate of the calibration point.
Further, the target may be a checkerboard target, a grid target, or a circular array target.
In a second aspect of the present application, a vision calibration system is disclosed, comprising:
the image acquisition module is used for acquiring target images by multiple cameras respectively, wherein at least one two-dimensional code is arranged on the target, and the field of view of each camera contains at least one complete two-dimensional code;
the first determining module is configured to determine a first coordinate and a second coordinate of a calibration point in each target image, where the first coordinate is a coordinate of the calibration point in an image coordinate system, and the second coordinate is a coordinate of the calibration point in a local target coordinate system, where the local target coordinate system is a coordinate system where a local target in the field of view of the camera is located currently, and the image coordinate system is a coordinate system where the camera is located currently;
the obtaining module is used for obtaining a third coordinate, wherein the third coordinate refers to a coordinate of the two-dimensional code in the target image in the image coordinate system;
the first extraction module is used for extracting two-dimension code information in the target image, wherein the two-dimension code information comprises a fourth coordinate, the fourth coordinate refers to a coordinate of the current two-dimension code in a global target coordinate system, and the global target coordinate system is a coordinate system where the complete target is located;
a correction module, configured to correct the second coordinate into a fifth coordinate according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate, where the fifth coordinate is a coordinate of the calibration point in the global target coordinate system;
and the establishing module is used for establishing a mapping relation between the global target coordinate system and each image coordinate system according to the first coordinate and the fifth coordinate.
Further, the first determining module specifically includes:
the second extraction module is used for extracting first coordinates of all the calibration points in each target image;
the second determining module is used for determining the topological relation between the calibration points in each target image according to the extracted first coordinates;
and the third determining module is used for determining a second coordinate of each calibration point in each target image according to the topological relation.
Further, when the target is a checkerboard target, the second extraction module specifically includes:
the first calculation module is used for calculating the pixel coordinates of the calibration point through a Hessian matrix, wherein the calibration point is the angular point;
and the second calculation module is used for calculating the sub-pixel coordinates of the calibration point by using a Taylor expansion, wherein the sub-pixel coordinates are the first coordinates.
Further, when the target is a checkerboard target, the second determining module specifically includes:
a generating module, configured to generate Delaunay triangles from the extracted first coordinates, where the calibration points are the corner points;
the merging module is used for merging each Delaunay triangle with the adjacent triangle whose gray mean value is closest, forming quadrangles;
the screening module is used for screening the quadrangles, where each quadrangle retained after screening simultaneously satisfies: the intersection point of its diagonals lies inside the quadrangle, the length ratio of its opposite sides lies within the interval (0.75, 1.25), and the length ratio of its adjacent sides lies within the interval (0.75, 1.25);
the organizing module is used for organizing the screened quadrangles into a topological network;
and the fourth determining module is used for determining the second coordinate according to the position of each quadrangle in the topological network.
According to the vision calibration method and system, a target carrying two-dimensional codes is used, and the local target coordinates of the calibration points are automatically corrected to global target coordinates through the two-dimensional code information, achieving fully automatic calibration of the multi-camera vision system. Moreover, the field of view of each camera only needs to contain one two-dimensional code, so no camera's field of view has to be made excessively large, which preserves the pixel precision of each individual camera.
Drawings
In order to describe the technical solution of the present application more clearly, the drawings required in the embodiments are briefly described below; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a schematic workflow diagram of a visual calibration method disclosed in an embodiment of the present application;
FIG. 2 is a schematic illustration of a target disclosed in embodiments herein;
fig. 3 is a schematic diagram of acquiring a target image by a plurality of cameras disclosed in the embodiments of the present application;
FIG. 4 is a schematic workflow diagram of another vision calibration method disclosed in the embodiments of the present application;
FIG. 5 is a schematic workflow diagram of another vision calibration method disclosed in the embodiments of the present application;
FIG. 6 is a schematic flowchart illustrating a further method for visual calibration according to an embodiment of the present disclosure;
FIG. 7 is a graph illustrating the results of generating Delaunay triangles as disclosed in an embodiment of the present application;
FIG. 8 is a diagram illustrating the result of merging adjacent triangles into a quadrilateral according to an embodiment of the present disclosure;
FIG. 9 is a diagram illustrating results of quadrilateral screening disclosed in the embodiments of the present application;
FIG. 10 is a diagram illustrating the result of determining the second coordinate disclosed in the embodiment of the present application;
Fig. 11 is a schematic diagram illustrating a result of extracting two-dimensional code information disclosed in an embodiment of the present application;
FIG. 12 is a diagram illustrating a result of modifying the second coordinate into a fifth coordinate according to an embodiment of the disclosure;
FIG. 13 is a schematic diagram illustrating the establishment of a mapping relationship disclosed in an embodiment of the present application;
fig. 14 is a block diagram of a visual calibration system according to an embodiment of the present disclosure;
FIG. 15 is a block diagram illustrating a further exemplary vision calibration system disclosed in an embodiment of the present application;
FIG. 16 is a block diagram illustrating a further exemplary vision calibration system disclosed in an embodiment of the present application;
fig. 17 is a block diagram of a further vision calibration system according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The existing multi-camera calibration method needs to firstly extract the coordinates of the corner points of the checkerboard, then manually input the target coordinates corresponding to the markers in the calibration image, and establish the mapping relation between the image coordinate system and the target coordinate system. The calibration method has low efficiency and complex operation, needs manual participation and cannot realize the full-automatic calibration of the multi-camera vision system. In order to solve the technical problem, the embodiment of the application discloses a visual calibration method and a system.
The embodiment of the application discloses a visual calibration method, which refers to fig. 1, and comprises the following steps:
s10, the multiple cameras respectively acquire target images, wherein at least one two-dimensional code is arranged on the target, and each camera at least comprises one complete two-dimensional code in the visual field range.
The target in this application is a target with a two-dimensional code, as shown in fig. 2, the target may be a checkerboard target, a grid target, or a circular array target, and the two-dimensional code on the target may be arranged in an array manner as shown in fig. 2, or may be arranged in other manners, which is not limited in this application.
When the target is designed, taking a checkerboard target as an example, the two-dimensional code is rectangular, and its center precisely coincides with a corner point of the checkerboard. Preferably, the two-dimensional code occupies four cells of the checkerboard target.
The two-dimensional code may be a DM (Data Matrix) code or a QR code, where each two-dimensional code contains the position information (coordinates) of that code within the complete checkerboard target. For example, the first two-dimensional code at the upper left corner in fig. 2 contains the position information (5,5), meaning that the center of the current two-dimensional code corresponds to the 5th row and 5th column of the complete checkerboard target.
As shown in fig. 3, four cameras respectively acquire images of a target, each camera captures a portion of the target, and each of the images acquired by the cameras includes at least one complete two-dimensional code.
Step S11, determining a first coordinate and a second coordinate of a calibration point in each target image, wherein the first coordinate refers to a coordinate of the calibration point in an image coordinate system, and the second coordinate refers to a coordinate of the calibration point in a local target coordinate system, the local target coordinate system is a coordinate system where a local target in the current camera view field is located, and the image coordinate system is a coordinate system where the camera is located.
The target can be a checkerboard target, a reticle grid target or a circular array target, if the target is the checkerboard target, the corresponding calibration points are checkerboard angular points, if the target is the reticle grid target, the corresponding calibration points are intersections of the reticle, and if the target is the circular array target, the corresponding calibration points are dot centers.
In order to ensure high shooting precision, each camera shoots a part of the target, and the local target coordinate system is the coordinate system of the local target shot by the current camera.
As shown in fig. 4, a specific implementation manner of determining the first coordinate and the second coordinate of the calibration point in each target image may be as follows:
and step S110, extracting first coordinates of all the calibration points in each target image.
Taking the example that the four cameras respectively acquire the target images, the coordinates of all the calibration points in each target image in the corresponding image coordinate system are respectively extracted. If the current target image is shot by the first camera, extracting the coordinates of all the calibration points in the current target image in the image coordinate system corresponding to the first camera.
When the target is a checkerboard target, as shown in fig. 5, a specific implementation manner of extracting the first coordinates of all the calibration points in each target image may be according to the following steps S1100 to S1101:
and S1100, calculating the pixel coordinates of the calibration point through a Hessian matrix, wherein the calibration point is the angular point.
The Hessian matrix of the image pixels is computed via Gaussian filtering:

    H = | f_xx  f_xy |
        | f_xy  f_yy |

where f_xx, f_xy, and f_yy are the second partial derivatives of the current image gray level with respect to x and y. The two eigenvalues λ1 and λ2 of the Hessian matrix correspond, respectively, to the gradient along the direction of steepest gray-level change and along its normal. The corner points of the checkerboard are saddle points of the gray level, so the two eigenvalues have opposite signs, and the detection criterion is:

    λ1 ≥ t,  λ2 ≤ −t

where t is the gradient threshold.
Step S1101, calculating a sub-pixel coordinate of the calibration point by using a taylor expansion, wherein the sub-pixel coordinate is the first coordinate.
The sub-pixel positions of the checkerboard corners are further computed via a second-order Taylor expansion of the gray level, which can be expressed as follows:

    f(x0 + s, y0 + t) ≈ f0 + f_x·s + f_y·t + (1/2)·(f_xx·s² + 2·f_xy·s·t + f_yy·t²)

where (x0, y0) is the corner pixel coordinate computed from the Hessian matrix, (s, t) is the sub-pixel offset of the corner relative to (x0, y0), f0 is the gray value of the current pixel, and f_x, f_y, f_xx, f_xy, f_yy are its first and second partial derivatives. Since the sub-pixel corner coordinate corresponds to an extreme point of the gray-level distribution of the neighborhood image, the gradient of the expansion must vanish:

    f_x + f_xx·s + f_xy·t = 0
    f_y + f_xy·s + f_yy·t = 0

Solving this system of equations yields

    s = (f_xy·f_y − f_yy·f_x) / (f_xx·f_yy − f_xy²)
    t = (f_xy·f_x − f_xx·f_y) / (f_xx·f_yy − f_xy²)

and the sub-pixel coordinate of the checkerboard corner is (x0 + s, y0 + t).
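The two-stage corner extraction above (Hessian eigenvalue test, then Taylor sub-pixel refinement) can be sketched as follows. This is an illustrative reimplementation, not the patent's code; the function names, the Gaussian sigma, and the threshold t = 0.05 are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_corners(img, sigma=2.0, t=0.05):
    """Return (row, col) pixel coordinates of saddle-like corners."""
    img = img.astype(float)
    # Second partial derivatives of the Gaussian-smoothed image
    # (axis 0 = rows = y, axis 1 = cols = x).
    f_xx = gaussian_filter(img, sigma, order=(0, 2))
    f_yy = gaussian_filter(img, sigma, order=(2, 0))
    f_xy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the 2x2 Hessian at every pixel.
    tr = f_xx + f_yy
    det = f_xx * f_yy - f_xy ** 2
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    # Checkerboard corners are saddle points: eigenvalues of opposite
    # sign, both magnitudes above the gradient threshold t.
    return np.argwhere((lam1 > t) & (lam2 < -t))

def subpixel_refine(img, y0, x0, sigma=2.0):
    """Shift pixel corner (x0, y0) to the extremum of the local Taylor expansion."""
    img = img.astype(float)
    f_x = gaussian_filter(img, sigma, order=(0, 1))[y0, x0]
    f_y = gaussian_filter(img, sigma, order=(1, 0))[y0, x0]
    f_xx = gaussian_filter(img, sigma, order=(0, 2))[y0, x0]
    f_yy = gaussian_filter(img, sigma, order=(2, 0))[y0, x0]
    f_xy = gaussian_filter(img, sigma, order=(1, 1))[y0, x0]
    # Solve  H [s, t]^T = -[f_x, f_y]^T  for the sub-pixel offset.
    H = np.array([[f_xx, f_xy], [f_xy, f_yy]])
    s, t_off = np.linalg.solve(H, -np.array([f_x, f_y]))
    return x0 + s, y0 + t_off
```

On a synthetic checkerboard the detected pixels cluster on the cell corners, and the refinement pulls an integer pixel toward the true corner lying between pixels.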
In steps S1100-S1101 above, the method for extracting the first coordinates of all the calibration points in each target image needs no interpolation or surface fitting, so the extraction process is simple and fast compared with the conventional Harris corner extractor. On the other hand, compared with methods that fit edges and solve for their intersection, it uses only the image gray level in the neighborhood of the corner, so it retains high extraction accuracy even when the image is distorted or noisy.
And S111, determining a topological relation between the calibration points in each target image according to the extracted first coordinates.
And determining the topological relation between the calibration points in each target image according to the extracted first coordinates of all the calibration points in the target image, namely determining the adjacent relation between each calibration point and other calibration points.
And S112, determining a second coordinate of the calibration point in each target image according to the topological relation.
The adjacent relation of each calibration point and other calibration points is determined, and the coordinates of the calibration point in the local target coordinate system in each target image can be determined.
When the target is a checkerboard target, as shown in fig. 6, the specific implementation manner for determining the topological relation between the calibration points in each target image may be as follows:
Step S1110, generating Delaunay triangles from the extracted first coordinates, as shown in fig. 7, where the calibration points are the corner points.
Delaunay triangle definition: a triangle is called a delaunay triangle if its circumcircle does not contain any other point in the plane.
Step S1111, merge each Delaunay triangle with the adjacent triangle whose gray mean value is closest, forming quadrangles, as shown in FIG. 8.
The merge rule is as follows: select a Delaunay triangle, search all of its adjacent Delaunay triangles, and merge it with the adjacent triangle whose gray mean value is closest.
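A minimal sketch of steps S1110-S1111, under the assumption that a triangle's gray mean can be approximated by sampling the image at its centroid; scipy's `Delaunay` object exposes the adjacency (`simplices`, `neighbors`) that the merge rule needs. All function names here are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_gray_mean(img, pts, simplex):
    """Approximate a triangle's gray mean by sampling its centroid.
    pts holds (row, col) corner coordinates."""
    cy, cx = pts[simplex].mean(axis=0)
    return float(img[int(round(cy)), int(round(cx))])

def closest_gray_neighbor(img, tri, pts, i):
    """Index of the simplex adjacent to simplex i whose gray mean
    is closest to simplex i's own gray mean (the merge partner)."""
    own = triangle_gray_mean(img, pts, tri.simplices[i])
    neighbors = [n for n in tri.neighbors[i] if n != -1]  # -1 = hull edge
    return min(neighbors, key=lambda n:
               abs(triangle_gray_mean(img, pts, tri.simplices[n]) - own))
```

A full merge pass would repeat this for every triangle and pair each one with its returned partner.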
As shown in fig. 8, since the two-dimensional code also contains black-and-white regions, some quadrangles that do not satisfy the requirements may remain after merging according to this rule; the quadrangles must therefore be screened further.
Step S1112, screening the quadrangles, as shown in fig. 9, wherein each quadrangle retained after screening simultaneously satisfies: the intersection point of its diagonals lies inside the quadrangle, the length ratio of its opposite sides lies within the interval (0.75, 1.25), and the length ratio of its adjacent sides lies within the interval (0.75, 1.25).
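The three screening conditions of step S1112 can be expressed as a single predicate. A sketch, assuming the quadrangle's vertices are given in order around its boundary; the proper-crossing test for the two diagonals is equivalent to their intersection point lying inside the quadrangle:

```python
import numpy as np

def _cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _properly_cross(p1, p2, p3, p4):
    """True if open segments p1p2 and p3p4 cross at an interior point."""
    return (_cross(p1, p2, p3) * _cross(p1, p2, p4) < 0 and
            _cross(p3, p4, p1) * _cross(p3, p4, p2) < 0)

def quad_ok(q, lo=0.75, hi=1.25):
    """q: four vertices in order (A, B, C, D) around the quadrangle."""
    a, b, c, d = np.asarray(q, float)
    sides = [np.linalg.norm(b - a), np.linalg.norm(c - b),
             np.linalg.norm(d - c), np.linalg.norm(a - d)]
    # Diagonals AC and BD must cross each other, i.e. their
    # intersection point lies inside the quadrangle.
    if not _properly_cross(a, c, b, d):
        return False
    ratios = [sides[0] / sides[2], sides[1] / sides[3],   # opposite sides
              sides[0] / sides[1], sides[2] / sides[3]]   # adjacent sides
    return all(lo < r < hi for r in ratios)
```

A near-square quadrangle passes, while elongated or non-convex ones (such as those produced by merging inside a code region) are rejected.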
And S1113, organizing and forming a topological network according to the screened quadrangle.
The screened quadrilaterals are linked together to form a topological network.
Step S1114, determining the second coordinate according to the position of the quadrilateral in the topological network, as shown in fig. 10.
And generating the coordinates of the corner points of the checkerboard corresponding to the vertexes of each quadrangle according to the positions of the quadrangles in the topological network.
And S12, obtaining a third coordinate, wherein the third coordinate refers to a coordinate of the two-dimensional code in the target image in the image coordinate system.
And determining a two-dimensional code area, positioning the position of the center of the two-dimensional code area in the image, and representing the two-dimensional code area by a quadrangle to obtain a third coordinate.
And S13, extracting two-dimension code information in the target image, wherein the two-dimension code information comprises a fourth coordinate, the fourth coordinate refers to the coordinate of the current two-dimension code in a global target coordinate system, and the global target coordinate system is the coordinate system where the complete target is located.
And decoding the two-dimension code in the target image, and extracting two-dimension code information, wherein the two-dimension code information comprises the coordinate of the current two-dimension code in a global target coordinate system. As shown in fig. 11, the coordinates of the two-dimensional code in the global target coordinate system are (0.00,10.00), and the coordinates of the two-dimensional code in the global target coordinate system are the coordinates of the center of the two-dimensional code in the global target coordinate system.
Because the fourth coordinate is recorded inside the two-dimensional code itself, the corresponding code can be decoded on demand, and no manual annotation is required.
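Decoding itself is handled by a standard DM/QR reader; what remains is mapping the decoded payload to the fourth coordinate. A sketch, where the "row,col" payload format, the (row, col)-to-(x, y) axis convention, and the cell pitch are assumptions consistent with the (5,5) example above:

```python
def code_global_coord(payload: str, pitch_mm: float = 1.0) -> tuple:
    """Map a decoded payload 'row,col' (grid indices in the complete
    target) to a metric (x, y) coordinate in the global target frame.
    Payload format, axis convention, and pitch are assumed here."""
    row, col = (int(v) for v in payload.strip().split(","))
    return col * pitch_mm, row * pitch_mm
```

With a 2 mm cell pitch, the upper-left code of fig. 2 carrying "5,5" would map to (10.0, 10.0) in this convention.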
And S14, correcting the second coordinate into a fifth coordinate according to the first coordinate, the second coordinate, the third coordinate and the fourth coordinate, wherein the fifth coordinate is the coordinate of the calibration point in the global target coordinate system.
According to the third coordinate and the fourth coordinate, the corresponding relation between the coordinate of the two-dimensional code in the image coordinate system and the coordinate in the global target coordinate system can be obtained, and then according to the first coordinate and the second coordinate, the second coordinate is corrected to be the fifth coordinate, namely the coordinate of the calibration point in the global target coordinate system is obtained.
Correcting the second coordinate into the fifth coordinate comprises: correcting the directions of the X axis and the Y axis of the calibration point coordinate; correcting the positive directions of the X axis and the Y axis of the calibration point coordinate; and correcting the initial value of the calibration point coordinate.
As shown in fig. 11, the coordinate of the two-dimensional code in the global target coordinate system is (0.00, 10.00); accordingly, the coordinates of all the corner points shown in fig. 10 in the local target coordinate system are corrected into the coordinates shown in fig. 12 in the global target coordinate system. For example, in the target image, the coordinate of the top left corner point in the local target coordinate system is (0.00, 0.00); after correction, its coordinate in the global target coordinate system is (-4.00, 14.00).
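The translation part of this correction can be sketched as a constant offset between the code's local and global coordinates. This is a simplified sketch assuming the local axes already agree with the global axes (the patent also corrects axis directions, signs, and initial values); the local coordinate of the code center, (4.00, -4.00), is a hypothetical value chosen so the figures' numbers work out.

```python
# Sketch of the translation step of S14, ASSUMING axes are already aligned.
# The code-center local coordinate (4.00, -4.00) is hypothetical.

def to_global(local_pt, qr_local, qr_global):
    """Shift a calibration point from local to global target coordinates
    by the offset between the code's global and local coordinates."""
    dx = qr_global[0] - qr_local[0]
    dy = qr_global[1] - qr_local[1]
    return (local_pt[0] + dx, local_pt[1] + dy)

# Top-left corner: local (0.00, 0.00) -> global (-4.00, 14.00), as in fig. 12.
print(to_global((0.0, 0.0), (4.0, -4.0), (0.0, 10.0)))  # (-4.0, 14.0)
```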
S15, establishing a mapping relation between the global target coordinate system and each image coordinate system according to the first coordinate and the fifth coordinate.
Each camera corresponds to one image coordinate system; by establishing a mapping relation between each camera's image coordinate system and the single global target coordinate system, the image coordinate systems are unified and multi-camera calibration is realized.
Establishing the mapping relation between the global target coordinate system and an image coordinate system amounts to solving the mapping matrix.
The mapping relation between two spatial planes can be represented by a homography, a concept from projective geometry also called a projective transformation. A homography maps points (represented as three-dimensional homogeneous vectors) on one projective plane onto another projective plane and maps straight lines to straight lines, i.e., it preserves collinearity.
Let the coordinates of the same calibration point be (u, v) in the image coordinate system and (x, y) in the global target coordinate system, and let H be the homography matrix between the two coordinate systems. Then

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
$$

where $h_{11}, h_{12}, h_{21}, h_{22}$ represent the linear transformation, $h_{13}, h_{23}$ represent the translation vector, $h_{31}, h_{32}$ represent the perspective components, and $s$ is a nonzero scale factor.
As can be seen from the above formula, the homography matrix H has 8 degrees of freedom; therefore, in theory, H can be solved from at least 4 pairs of corresponding calibration-point coordinates, provided no three of the points are collinear.
As shown in fig. 13, the first coordinate and the fifth coordinate of the four calibration points A, B, C and D are obtained, and the homography matrix H is then computed using the above formula.
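The 8-unknown solve from 4 point pairs can be sketched numerically with a direct linear transform. This is a minimal sketch, not the patent's implementation; the use of NumPy least squares (which also handles more than 4 points) is an assumption.

```python
import numpy as np

def solve_homography(src, dst):
    """Solve H with h33 = 1 (8 unknowns) from >= 4 point pairs.
    Each pair contributes two linear equations in the entries of H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map a point through H and divide out the homogeneous scale s."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[0] / q[2], q[1] / q[2]

# Four calibration points (global -> image); here the mapping is 2x scale + shift.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(3, 5), (5, 5), (5, 7), (3, 7)]
H = solve_homography(src, dst)
print(apply_h(H, (0.5, 0.5)))  # approximately (4.0, 6.0)
```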
The vision calibration method disclosed by the embodiment of the application proceeds as follows. First, multiple cameras respectively acquire target images, where at least one two-dimensional code is arranged on the target and the field of view of each camera contains at least one complete two-dimensional code. Next, a first coordinate and a second coordinate of each calibration point in each target image are determined, where the first coordinate is the coordinate of the calibration point in an image coordinate system and the second coordinate is its coordinate in a local target coordinate system; the local target coordinate system is the coordinate system of the local target currently in the camera's field of view, and the image coordinate system is the coordinate system of the current camera. A third coordinate is then obtained, namely the coordinate of the two-dimensional code in the target image in the image coordinate system. The two-dimensional code information in the target image is extracted; it comprises a fourth coordinate, the coordinate of the current two-dimensional code in the global target coordinate system, which is the coordinate system of the complete target. The second coordinate is then corrected into a fifth coordinate according to the first, second, third and fourth coordinates, the fifth coordinate being the coordinate of the calibration point in the global target coordinate system. Finally, a mapping relation between the global target coordinate system and each image coordinate system is established according to the first coordinate and the fifth coordinate.
The vision calibration method disclosed by the application uses a target carrying two-dimensional codes; the local target coordinates of the calibration points are automatically corrected into global target coordinates by means of the two-dimensional code information, thereby realizing fully automatic calibration of a multi-camera vision system.
Accordingly, referring to fig. 14, in another embodiment of the present invention, a vision calibration system is further disclosed, comprising:
the image acquisition module 110 is configured to acquire target images by multiple cameras, where the target is provided with at least one two-dimensional code, and each camera view range includes at least one complete two-dimensional code;
a first determining module 120, configured to determine a first coordinate and a second coordinate of a calibration point in each target image, where the first coordinate is a coordinate of the calibration point in an image coordinate system, and the second coordinate is a coordinate of the calibration point in a local target coordinate system, where the local target coordinate system is a coordinate system where a local target in the field of view of the camera is located currently, and the image coordinate system is a coordinate system where the camera is located currently;
an obtaining module 130, configured to obtain a third coordinate, where the third coordinate is a coordinate of a two-dimensional code in the target image in the image coordinate system;
the first extraction module 140 is configured to extract two-dimensional code information in the target image, where the two-dimensional code information includes a fourth coordinate, and the fourth coordinate is a coordinate of the current two-dimensional code in a global target coordinate system, where the global target coordinate system is a coordinate system in which the complete target is located;
a correcting module 150, configured to correct the second coordinate into a fifth coordinate according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate, where the fifth coordinate is the coordinate of the calibration point in the global target coordinate system;
an establishing module 160, configured to establish a mapping relationship between the global target coordinate system and each of the image coordinate systems according to the first coordinate and the fifth coordinate.
Further, referring to fig. 15, the first determining module 120 specifically includes:
a second extracting module 210, configured to extract first coordinates of all the calibration points in each of the target images;
a second determining module 220, configured to determine, according to the extracted first coordinates, a topological relation between the calibration points in each of the target images;
a third determining module 230, configured to determine a second coordinate of the calibration point in each of the target images according to the topological relation.
Further, referring to fig. 16, when the target is a checkerboard target, the second extraction module 210 specifically includes:
a first calculating module 310, configured to calculate pixel coordinates of the calibration point through a Hessian matrix, where the calibration point is the corner point;
a second calculating module 320, configured to calculate a sub-pixel coordinate of the calibration point by using a taylor expansion, where the sub-pixel coordinate is the first coordinate.
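The Hessian-based pixel detection with Taylor-expansion sub-pixel refinement can be read as a Newton step on the local image intensity surface. The sketch below is an illustrative assumption, not the patent's implementation: it uses central finite differences for the gradient and Hessian and demonstrates the step on a synthetic quadratic patch, where it is exact (a real checkerboard corner is a saddle point of intensity, but the same step applies).

```python
import numpy as np

def refine_subpixel(img, x, y):
    """One Newton step (x, y) -= H^-1 g derived from a second-order Taylor
    expansion of the intensity surface, with finite-difference derivatives."""
    gx = (img[y, x + 1] - img[y, x - 1]) / 2.0
    gy = (img[y + 1, x] - img[y - 1, x]) / 2.0
    hxx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
    hyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
    hxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
    H = np.array([[hxx, hxy], [hxy, hyy]])  # Hessian matrix
    g = np.array([gx, gy])                  # gradient
    dx, dy = -np.linalg.solve(H, g)
    return x + dx, y + dy

# Synthetic patch with a stationary point at (5.3, 4.7); because the surface
# is quadratic, one Newton step from the pixel (5, 5) recovers it exactly.
ys, xs = np.mgrid[0:10, 0:10]
img = (xs - 5.3) ** 2 + (ys - 4.7) ** 2
print(refine_subpixel(img, 5, 5))  # approximately (5.3, 4.7)
```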
Further, referring to fig. 17, when the target is a checkerboard target, the second determining module 220 specifically includes:
a generating module 410, configured to generate a Delaunay triangulation according to the extracted first coordinates, where the calibration points are the corner points;
a merging module 420, configured to merge adjacent triangles in the Delaunay triangulation whose gray mean values are closest into quadrangles;
a screening module 430, configured to screen the quadrangles, where each quadrangle obtained after screening simultaneously satisfies: the intersection point of its diagonals lies inside the quadrangle, the length ratio of its opposite sides is within the interval (0.75, 1.25), and the length ratio of its adjacent sides is within the interval (0.75, 1.25);
the organizing module 440 is used for organizing and forming a topological network according to the screened quadrangle;
a fourth determining module 450, configured to determine the second coordinate according to a position of the quadrilateral in the topological network.
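The quadrangle screening conditions used by modules 410 through 450 can be sketched directly. This is a sketch under assumptions: vertices are given in traversal order, and strict inequalities are used at the interval boundaries.

```python
import math

def screen_quadrangle(quad):
    """Screen a quadrangle given as four (x, y) vertices in order: diagonal
    intersection strictly inside, opposite-side length ratios in (0.75, 1.25),
    and adjacent-side length ratios in (0.75, 1.25)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def ratio_ok(u, v):
        return 0.75 < u / v < 1.25

    a, b, c, d = quad
    sides = [dist(a, b), dist(b, c), dist(c, d), dist(d, a)]

    # Opposite sides: (ab, cd) and (bc, da); adjacent: each consecutive pair.
    if not (ratio_ok(sides[0], sides[2]) and ratio_ok(sides[1], sides[3])):
        return False
    if not all(ratio_ok(sides[i], sides[(i + 1) % 4]) for i in range(4)):
        return False

    # Diagonals a->c and b->d intersect strictly inside the quadrangle iff
    # both segment parameters t, u lie in the open interval (0, 1).
    r = (c[0] - a[0], c[1] - a[1])
    s = (d[0] - b[0], d[1] - b[1])
    denom = r[0] * s[1] - r[1] * s[0]
    if denom == 0:
        return False
    qp = (b[0] - a[0], b[1] - a[1])
    t = (qp[0] * s[1] - qp[1] * s[0]) / denom
    u = (qp[0] * r[1] - qp[1] * r[0]) / denom
    return 0 < t < 1 and 0 < u < 1

print(screen_quadrangle([(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(screen_quadrangle([(0, 0), (3, 0), (3, 1), (0, 1)]))  # False (side ratios)
```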
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, for the embodiments of the system, since they are substantially similar to the method embodiments, the description is simple, and for the relevant points, reference may be made to the description of the method embodiments.
The present application has been described in detail with reference to specific embodiments and illustrative examples, but the description is not intended to limit the application. Those skilled in the art will appreciate that various equivalent substitutions, modifications or improvements may be made to the presently disclosed embodiments and implementations thereof without departing from the spirit and scope of the present disclosure, and these fall within the scope of the present disclosure. The protection scope of this application is subject to the appended claims.
In a specific implementation manner, the present application further provides a computer-readable storage medium, where the computer-readable storage medium may store a program, and when the program is executed, the program may include some or all of the steps in each embodiment of the vision calibration method provided by the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by way of software plus a required general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (10)

1. A method of visual calibration, comprising:
acquiring target images by multiple cameras respectively, wherein at least one two-dimensional code is arranged on a target, and the field of view of each camera contains at least one complete two-dimensional code;
determining a first coordinate and a second coordinate of a calibration point in each target image, wherein the first coordinate is a coordinate of the calibration point in an image coordinate system, and the second coordinate is a coordinate of the calibration point in a local target coordinate system, the local target coordinate system is a coordinate system where a local target in the field of view of the camera is located at present, and the image coordinate system is a coordinate system where the camera is located at present;
obtaining a third coordinate, wherein the third coordinate is a coordinate of the two-dimensional code in the target image in the image coordinate system;
extracting two-dimensional code information in the target image, wherein the two-dimensional code information comprises a fourth coordinate, and the fourth coordinate refers to a coordinate of the current two-dimensional code in a global target coordinate system, wherein the global target coordinate system is a coordinate system where a complete target is located;
correcting the second coordinate into a fifth coordinate according to the first coordinate, the second coordinate, the third coordinate and the fourth coordinate, wherein the fifth coordinate is a coordinate of the calibration point in the global target coordinate system;
and establishing a mapping relation between the global target coordinate system and each image coordinate system according to the first coordinate and the fifth coordinate.
2. The visual calibration method of claim 1, wherein the determining the first and second coordinates of the calibration point in each of the target images comprises:
extracting first coordinates of all calibration points in each target image;
determining a topological relation between the calibration points in each target image according to the extracted first coordinates;
and determining a second coordinate of the calibration point in each target image according to the topological relation.
3. The vision calibration method of claim 2, wherein, when the target is a checkerboard target, the extracting the first coordinates of all the calibration points in each of the target images comprises:
calculating the pixel coordinates of the calibration points through a Hessian matrix, wherein the calibration points are corner points;
calculating sub-pixel coordinates of the calibration point by using Taylor expansion, wherein the sub-pixel coordinates are the first coordinates.
4. The visual calibration method of claim 2, wherein determining the topological relationship between the calibration points in each of the target images according to the extracted first coordinates when the target is a checkerboard target comprises:
generating a Delaunay triangulation according to the extracted first coordinates, wherein the calibration points are corner points;
merging adjacent triangles in the Delaunay triangulation whose gray mean values are closest into quadrangles;
screening the quadrangles, wherein each quadrangle obtained after screening simultaneously satisfies: the intersection point of its diagonals lies inside the quadrangle, the length ratio of its opposite sides is within the interval (0.75, 1.25), and the length ratio of its adjacent sides is within the interval (0.75, 1.25);
organizing and forming a topological network according to the screened quadrangles;
and determining the second coordinate according to the position of the quadrangle in the topological network.
5. The vision calibration method of claim 1, wherein the correcting the second coordinate to a fifth coordinate comprises:
correcting the directions of the X axis and the Y axis of the coordinate of the calibration point;
correcting the positive direction of the X axis and the positive direction of the Y axis of the coordinate of the calibration point;
and correcting the initial value of the coordinate of the calibration point.
6. The method of visual calibration according to claim 1, wherein the target is a checkerboard target, a grid target, or a circular array target.
7. A vision calibration system, comprising:
the image acquisition module is used for respectively acquiring target images by multiple cameras, wherein at least one two-dimensional code is arranged on the target, and the field of view of each camera contains at least one complete two-dimensional code;
the first determining module is used for determining a first coordinate and a second coordinate of a calibration point in each target image, wherein the first coordinate refers to a coordinate of the calibration point in an image coordinate system, the second coordinate refers to a coordinate of the calibration point in a local target coordinate system, the local target coordinate system is a coordinate system where a local target in the field of view of the current camera is located, and the image coordinate system is a coordinate system where the current camera is located;
the obtaining module is used for obtaining a third coordinate, wherein the third coordinate refers to a coordinate of the two-dimensional code in the target image in the image coordinate system;
the first extraction module is used for extracting two-dimensional code information in the target image, wherein the two-dimensional code information comprises a fourth coordinate, the fourth coordinate refers to a coordinate of the current two-dimensional code in a global target coordinate system, and the global target coordinate system is a coordinate system where the complete target is located;
a correction module, configured to correct the second coordinate into a fifth coordinate according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate, where the fifth coordinate is a coordinate of the calibration point in the global target coordinate system;
and the establishing module is used for establishing a mapping relation between the global target coordinate system and each image coordinate system according to the first coordinate and the fifth coordinate.
8. The vision calibration system of claim 7, wherein the first determination module specifically comprises:
the second extraction module is used for extracting first coordinates of all the calibration points in each target image;
the second determining module is used for determining the topological relation between the calibration points in each target image according to the extracted first coordinates;
and the third determining module is used for determining a second coordinate of the calibration point in each target image according to the topological relation.
9. The vision calibration system of claim 8, wherein when the target is a checkerboard target, the second extraction module specifically comprises:
the first calculation module is used for calculating the pixel coordinates of the calibration point through a Hessian matrix, wherein the calibration point is an angular point;
and the second calculation module is used for calculating the sub-pixel coordinates of the calibration point by using a Taylor expansion, wherein the sub-pixel coordinates are the first coordinates.
10. The vision calibration system of claim 8, wherein when the target is a checkerboard target, the second determining module specifically comprises:
a generating module, configured to generate a Delaunay triangulation according to the extracted first coordinates, where the calibration points are corner points;
the merging module is used for merging adjacent triangles in the Delaunay triangulation whose gray mean values are closest into quadrangles;
the screening module is used for screening the quadrangles, wherein each quadrangle obtained after screening simultaneously satisfies: the intersection point of its diagonals lies inside the quadrangle, the length ratio of its opposite sides is within the interval (0.75, 1.25), and the length ratio of its adjacent sides is within the interval (0.75, 1.25);
the organizing module is used for organizing and forming a topological network according to the screened quadrangle;
and the fourth determining module is used for determining the second coordinate according to the position of the quadrangle in the topological network.
CN201910701453.XA 2019-07-31 2019-07-31 Vision calibration method and system Active CN110415304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910701453.XA CN110415304B (en) 2019-07-31 2019-07-31 Vision calibration method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910701453.XA CN110415304B (en) 2019-07-31 2019-07-31 Vision calibration method and system

Publications (2)

Publication Number Publication Date
CN110415304A CN110415304A (en) 2019-11-05
CN110415304B true CN110415304B (en) 2023-03-03

Family

ID=68364685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910701453.XA Active CN110415304B (en) 2019-07-31 2019-07-31 Vision calibration method and system

Country Status (1)

Country Link
CN (1) CN110415304B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179360B (en) * 2020-04-13 2020-07-10 杭州利珀科技有限公司 High-precision automatic calibration plate and calibration method
CN111986267B (en) * 2020-08-20 2024-02-20 佛山隆深机器人有限公司 Coordinate system calibration method of multi-camera vision system
CN112651261B (en) * 2020-12-30 2023-11-10 凌云光技术股份有限公司 Calculation method for conversion relation between high-precision 2D camera coordinate system and mechanical coordinate system
CN112381893B (en) * 2021-01-13 2021-04-20 中国人民解放军国防科技大学 Three-dimensional calibration plate calibration method for annular multi-camera system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101419709A (en) * 2008-12-08 2009-04-29 北京航空航天大学 Plane target drone characteristic point automatic matching method for demarcating video camera
US9230326B1 (en) * 2012-12-31 2016-01-05 Cognex Corporation System, method and calibration plate employing embedded 2D data codes as self-positioning fiducials
CN107270810A (en) * 2017-04-28 2017-10-20 深圳大学 The projector calibrating method and device of multi-faceted projection
CN108571971A (en) * 2018-05-17 2018-09-25 北京航空航天大学 A kind of AGV vision positioning systems and method
CN109099883A (en) * 2018-06-15 2018-12-28 哈尔滨工业大学 The big visual field machine vision metrology of high-precision and caliberating device and method
CN109920009A (en) * 2019-03-13 2019-06-21 武汉汉宁轨道交通技术有限公司 Control point detection and management method and device based on two dimensional code mark

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7165484B2 (en) * 2017-04-17 2022-11-04 コグネックス・コーポレイション High precision calibration system and method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ARTag, a fiducial marker system using digital techniques; Mark Fiala; 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2005-06-20; full text *
Using ARUCO coded targets for camera calibration automation; Sergio Leandro Alves da Silva et al.; http://ds.doi.org/10.1590/S1982-21702014000300036; 2014-09-30; full text *
An automatic correspondence method for feature points in camera calibration; Tan Haishu et al.; Journal of Optoelectronics·Laser; 2011-05-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN110415304A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110415304B (en) Vision calibration method and system
CN110390640B (en) Template-based Poisson fusion image splicing method, system, equipment and medium
JP6348093B2 (en) Image processing apparatus and method for detecting image of detection object from input data
US6671399B1 (en) Fast epipolar line adjustment of stereo pairs
US8452081B2 (en) Forming 3D models using multiple images
Wöhler 3D computer vision: efficient methods and applications
JP6426968B2 (en) INFORMATION PROCESSING APPARATUS AND METHOD THEREOF
US8447099B2 (en) Forming 3D models using two images
US8144238B2 (en) Image processing apparatus and method
CN109272574B (en) Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
CN107977996B (en) Space target positioning method based on target calibration positioning model
CN113841384B (en) Calibration device, chart for calibration and calibration method
CN112381847B (en) Pipeline end space pose measurement method and system
CN106952262B (en) Ship plate machining precision analysis method based on stereoscopic vision
JP2011198330A (en) Method and program for collation in three-dimensional registration
CN112862683B (en) Adjacent image splicing method based on elastic registration and grid optimization
KR102098687B1 (en) Edge-based Visual Odometry method and device
CN106296587B (en) Splicing method of tire mold images
JP2007064836A (en) Algorithm for automating camera calibration
CN112215925A (en) Self-adaptive follow-up tracking multi-camera video splicing method for coal mining machine
JP2012185712A (en) Image collation device and image collation method
Yoon et al. Targetless multiple camera-LiDAR extrinsic calibration using object pose estimation
CN104992431A (en) Method and device for multispectral image registration
CN110223356A (en) A kind of monocular camera full automatic calibration method based on energy growth
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant