CN113516721B - Multi-camera-based measurement method and device and storage medium


Info

Publication number
CN113516721B
CN113516721B (application CN202111065772.XA)
Authority
CN
China
Prior art keywords
checkerboard
straight line
image
transverse
longitudinal
Prior art date
Legal status
Active
Application number
CN202111065772.XA
Other languages
Chinese (zh)
Other versions
CN113516721A (en)
Inventor
黄一格
王南南
Current Assignee
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Original Assignee
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Casi Vision Technology Luoyang Co Ltd and Casi Vision Technology Beijing Co Ltd
Priority to CN202111065772.XA
Publication of CN113516721A
Application granted
Publication of CN113516721B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The application provides a multi-camera-based measurement method, device and storage medium. The method comprises the following steps: obtaining the transverse angle and longitudinal angle of the checkerboard in each camera, and the transverse pixel equivalent and longitudinal pixel equivalent of each camera, from the image each camera acquires of the checkerboard calibration board; using a characteristic polygon pattern to obtain, in each acquired image, the maximum region of interest containing all corner points of the checkerboard calibration board; calibrating each single camera using the image coordinates and physical coordinates of all corner points; and converting between the multiple cameras based on the overlapping areas between them, thereby achieving automatic multi-camera calibration. The method enables fully automatic calibration of inspection equipment with large numbers of industrial cameras, improves the efficiency and stability of multi-camera calibration, and saves a large amount of labor cost when the image acquisition equipment of the machine platform must be adjusted, and therefore recalibrated, many times.

Description

Multi-camera-based measurement method and device and storage medium
Technical Field
The present disclosure relates to the field of image analysis, and in particular, to a multi-camera based measurement method, apparatus, and storage medium.
Background
In the field of industrial visual inspection, a large object to be inspected is too big for a single camera, so multiple cameras are required for image acquisition and inspection. Before inspection, the cameras are calibrated to determine the error between each camera's image acquisition result and the ground truth, so that the measurement results can be compensated; at the same time, the positional relationship among the multiple cameras is calculated so that image coordinates can be converted into world (ground-truth) coordinates.
In the prior art, single cameras are commonly calibrated one by one and then jointly calibrated. When a single camera is calibrated, an operator manually draws a suitable feature region for calibration and, depending on the camera's position, inputs a large number of related parameters, including the number of checkerboard rows and columns visible to the current camera and the width the calibration board occupies in the current image. Each camera requires different parameters, so this single-camera calibration approach is cumbersome and wastes a great deal of manpower, and the burden on field personnel grows especially heavy when the equipment under test must be calibrated many times.
Disclosure of Invention
In view of this, an object of the present application is to provide a multi-camera-based measurement method, apparatus and storage medium, so as to improve the efficiency and stability of multi-camera calibration and to replace the complicated multi-camera calibration procedures of the prior art with a more stable and efficient one.
In a first aspect, the present application provides a multi-camera based measurement method, applied to a multi-camera detection apparatus, where the multi-camera detection apparatus includes at least two cameras, and the acquisition regions of adjacent cameras in the at least two cameras partially overlap, and the method includes:
placing the chessboard grid calibration plate on an objective table to obtain the collected images of all cameras;
acquiring a central point for determining a checkerboard transverse straight line and a checkerboard longitudinal straight line in each acquired image, and performing straight line fitting by taking the central point as a center to obtain the transverse straight line and the longitudinal straight line of each acquired image;
determining the transverse angle and the longitudinal angle of the checkerboard in each acquired image according to the transverse straight line and the longitudinal straight line of each acquired image;
calculating the horizontal pixel equivalent and the vertical pixel equivalent of the camera corresponding to each acquired image according to the horizontal straight line, the vertical straight line, the horizontal angle and the vertical angle of each acquired image and the physical width of a checkerboard which is input in advance;
acquiring image coordinates and physical coordinates of the characteristic polygonal pattern in each acquired image, and analyzing according to the image coordinates, the physical coordinates, the transverse pixel equivalent, the longitudinal pixel equivalent, the transverse angle and the longitudinal angle of the characteristic polygonal pattern to obtain a maximum interesting area containing all checkerboard corner points in each acquired image;
acquiring all checkerboard angular points in the maximum region of interest aiming at each acquired image, generating corresponding standard grids according to all the checkerboard angular points, and calculating the offset of each checkerboard angular point;
aiming at each collected image, calculating the image-physical coordinate conversion relation of a camera corresponding to the collected image through a weighted least square method and bilinear interpolation according to the image coordinate and the offset of each checkerboard corner point;
selecting any camera as a reference camera, and calculating according to the physical coordinates of the checkerboard corner points of the overlapped part of the collected images to obtain the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras;
and replacing the checkerboard calibration board with a measurement target, and measuring the measurement target according to the image-physical coordinate conversion relation of each camera and the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras.
In some embodiments, the obtaining a central point of each acquired image for determining the checkerboard horizontal straight line and the longitudinal straight line, and performing straight line fitting with the central point as a center to obtain the horizontal straight line and the longitudinal straight line of each acquired image includes:
determining the central point of the transverse straight line and the longitudinal straight line of the checkerboard in the image aiming at each collected image;
drawing a transverse straight line interested area and a longitudinal straight line interested area in each acquired image by taking the central point for determining the transverse straight line and the longitudinal straight line of the checkerboard in each acquired image as a center according to the central point for determining the transverse straight line and the longitudinal straight line of the checkerboard, the pixel width of one checkerboard, the pixel specification of the transverse straight line interested area and the pixel specification of the longitudinal straight line interested area which are input in advance in each acquired image;
and aiming at each acquired image, respectively acquiring checkerboard edge points in the transverse straight line interested region and checkerboard edge points in the longitudinal straight line interested region, fitting transverse straight lines according to the checkerboard edge points in the transverse straight line interested region, and fitting longitudinal straight lines according to the checkerboard edge points in the longitudinal straight line interested region to obtain the transverse straight lines and the longitudinal straight lines of each acquired image.
In some embodiments, the obtaining, for each acquired image, a checkerboard edge point in the transverse straight line interest region and a checkerboard edge point in the longitudinal straight line interest region, respectively, and fitting a transverse straight line according to the checkerboard edge point in the transverse straight line interest region, and fitting a longitudinal straight line according to the checkerboard edge point in the longitudinal straight line interest region to obtain a transverse straight line and a longitudinal straight line of each acquired image includes:
aiming at each collected image, respectively detecting the edge points of the checkerboards at fixed intervals from the starting point to the ending point along the scanning direction according to the starting point, the ending point, the pixel specification and the scanning direction of the transverse straight line interesting area and the starting point, the ending point, the pixel specification and the scanning direction of the longitudinal straight line interesting area to obtain the edge points of the checkerboards in the transverse straight line interesting area and the edge points of the checkerboards in the longitudinal straight line interesting area;
aiming at each collected image, respectively carrying out linear fitting on a transverse linear interesting area and a longitudinal linear interesting area through random consistency sampling to obtain a plurality of lines to be selected;
refitting the first target straight line in the transverse straight line interest region by a weighted least squares method to obtain the transverse straight line, and refitting the second target straight line in the longitudinal straight line interest region to obtain the longitudinal straight line; the first target straight line is the straight line to be selected, among those in the transverse straight line interest region, that has the most checkerboard edge points whose distance to it is within a preset pixel error range; and the second target straight line is the straight line to be selected, among those in the longitudinal straight line interest region, that has the most checkerboard edge points whose distance to it is within the preset pixel error range.
In some embodiments, the calculating the horizontal pixel equivalent and the vertical pixel equivalent of the camera corresponding to each acquired image according to the horizontal line, the vertical line, the horizontal angle and the vertical angle of each acquired image includes:
determining a transverse reference straight line of each acquired image at a preset distance from the transverse straight line and a longitudinal reference straight line of each acquired image at a preset distance from the longitudinal straight line according to the transverse straight line, the longitudinal straight line, the transverse angle and the longitudinal angle of each acquired image; the preset distance is half of the diagonal line of a single grid in the checkerboard;
and for each collected image, calculating the horizontal pixel equivalent of the camera according to the pixel widths between the checkerboard edge points on the horizontal reference line and the pre-input physical width of one checkerboard square, and calculating the vertical pixel equivalent of the camera according to the pixel widths between the checkerboard edge points on the vertical reference line and the pre-input physical width of one checkerboard square.
In some embodiments, the obtaining image coordinates and physical coordinates of the feature polygonal pattern in each of the collected images, and analyzing according to the image coordinates, the physical coordinates, the horizontal pixel equivalent, the vertical pixel equivalent, the horizontal angle, and the vertical angle of the feature polygonal pattern to obtain a maximum interest area including all checkerboard corner points in each of the collected images includes:
matching each acquired image according to the characteristic polygon pattern template to obtain image coordinates and physical coordinates of the characteristic polygon pattern in the acquired image;
determining a first line of a checkerboard in each acquired image according to the image coordinates and the physical coordinates of the characteristic polygonal patterns in the acquired image, and determining the transverse full state of the checkerboard in the acquired image by combining the physical coordinates, the transverse pixel equivalent and the transverse angle of the characteristic polygonal patterns and the preset physical width of a checkerboard calibration board; the horizontal full state comprises horizontal full, horizontal left side not full and horizontal right side not full;
determining an undetermined upper edge line of a maximum interest area in a first row of the checkerboards in the acquired image according to the transverse full state of the acquired image, and confirming all checkerboard edge points on the undetermined upper edge line;
performing adaptive movement on end points on two sides of the undetermined upper edge line, and determining an undetermined upper left corner point and an undetermined upper right corner point of the maximum interest area of the collected image;
determining an undetermined left lower angular point and an undetermined right lower angular point of a maximum interest area of the acquired image according to the undetermined left upper angular point, the undetermined right upper angular point, the longitudinal angle, the longitudinal pixel equivalent and the physical height of the chessboard grid calibration board of the acquired image;
judging whether an undetermined lower left corner point and an undetermined lower right corner point of the maximum interest area of the collected image exceed the range of the collected image;
if either point exceeds the range, moving the undetermined left lower corner point and the undetermined right lower corner point inwards in units of one checkerboard width according to the transverse angle and the longitudinal angle until both points lie within the range of the collected image and on the same checkerboard row, determining the moved undetermined left lower corner point as the target left lower corner point of the maximum interest area of the collected image, and determining the moved undetermined right lower corner point as the target right lower corner point of the maximum interest area of the collected image;
and adjusting the undetermined upper left corner point and the undetermined upper right corner point according to the target lower left corner point and the target lower right corner point of the maximum interest region of the collected image to obtain the maximum interest region in the collected image.
In some embodiments, the matching, for each captured image, according to the characteristic polygon pattern template to obtain the image coordinates and the physical coordinates of the characteristic polygon pattern in the captured image includes:
generating a characteristic polygon pattern template according to a preset characteristic polygon pattern;
for each collected image, carrying out template matching on the collected image according to the characteristic polygonal pattern template to obtain image coordinates and a serial number of the characteristic polygonal pattern in the collected image;
and obtaining the physical coordinates of the characteristic polygon patterns according to the serial numbers of the characteristic polygon patterns in the collected images and a pre-input physical coordinate comparison table of the characteristic polygon patterns on the chessboard pattern calibration board.
In some embodiments, the acquiring, for each acquired image, all checkerboard corner points in the maximum region of interest, generating a corresponding standard grid according to all the checkerboard corner points, and calculating an offset of each checkerboard corner point includes:
for each acquired image, dividing a maximum region of interest in the acquired image into a plurality of interested subareas from left to right and from top to bottom according to the transverse pixel equivalent, the longitudinal pixel equivalent and the physical width of the checkerboard of the acquired image; the size of the interesting subarea is the pixel size of a checkerboard in the acquired image;
for each interested partition, performing straight line fitting of a transverse straight line and a longitudinal straight line according to the checkerboard edge points in the interested partition to obtain the transverse straight line and the longitudinal straight line of each interested partition, and taking the intersection point of the transverse straight line and the longitudinal straight line of each interested partition as the checkerboard corner point of the maximum interested area in the acquired image;
and generating corresponding standard grids according to all the checkerboard corner points of the largest interesting area in the collected image, and calculating the offset of each checkerboard corner point.
In some embodiments, the method further comprises:
calculating the average gray value of the ten pixels before each checkerboard edge point and of the ten pixels after it along a preset direction; the preset direction is the scan line direction used when the checkerboard edge point was obtained;
judging whether the average gray value of the ten pixels before the checkerboard edge point and the average gray value of the ten pixels after it differ by at least a fixed multiple and by more than a preset difference;
if they do, determining the checkerboard edge point as a real edge point;
and if they do not, determining the checkerboard edge point as a false edge point and deleting it. A sketch of this check is given below.
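A minimal sketch of this re-judgement for a single candidate edge point, assuming the gray values along its scan line are available as a 1-D array; the multiple and gray-difference thresholds are illustrative, not taken from the patent:

import numpy as np

def is_real_edge_point(profile: np.ndarray, idx: int,
                       ratio: float = 2.0, min_diff: float = 30.0) -> bool:
    """profile: 1-D gray values sampled along the scan line through the edge
    point; idx: position of the candidate edge point on that profile.
    The point is kept only when the mean gray of the ten pixels before it and
    the ten pixels after it differ by at least the given ratio and by more
    than the preset gray difference."""
    if idx < 10 or idx + 10 >= len(profile):
        return False
    before = float(np.mean(profile[idx - 10:idx]))
    after = float(np.mean(profile[idx + 1:idx + 11]))
    lo, hi = sorted((before, after))
    return hi >= ratio * max(lo, 1e-6) and (hi - lo) > min_diff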
In a second aspect, the present application provides a multi-camera based measurement apparatus applied to a multi-camera detection device, where the multi-camera detection device includes at least two cameras, and the acquisition regions of adjacent cameras in the at least two cameras partially overlap, the apparatus includes:
the acquisition module is used for placing the chessboard grid calibration plate on the objective table and acquiring the acquired images of all the cameras;
the fitting module is used for acquiring a central point used for determining the transverse straight line and the longitudinal straight line of the checkerboard in each acquired image, and performing straight line fitting by taking the central point as a center to obtain the transverse straight line and the longitudinal straight line of each acquired image;
the angle module is used for determining the transverse angle and the longitudinal angle of the checkerboard in each acquired image according to the transverse straight line and the longitudinal straight line of each acquired image;
the equivalent module is used for calculating the transverse pixel equivalent and the longitudinal pixel equivalent of the camera corresponding to each acquired image according to the transverse straight line, the longitudinal straight line, the transverse angle and the longitudinal angle of each acquired image and the physical width of a checkerboard which is input in advance;
the region module is used for acquiring image coordinates and physical coordinates of the characteristic polygonal pattern in each acquired image, and analyzing according to the image coordinates, the physical coordinates, the horizontal pixel equivalent, the vertical pixel equivalent, the horizontal angle and the vertical angle of the characteristic polygonal pattern to obtain the maximum interesting region containing all the checkerboard corner points in each acquired image;
the offset module is used for acquiring all the checkerboard angular points in the maximum region of interest aiming at each acquired image, generating corresponding standard grids according to all the checkerboard angular points and calculating the offset of each checkerboard angular point;
the calibration module is used for calculating the conversion relation between the image coordinate and the physical coordinate of the camera corresponding to each acquired image through a weighted least square method and bilinear interpolation according to the image coordinate and the offset of each checkerboard corner point;
the conversion module is used for selecting any camera as a reference camera and calculating according to the physical coordinates of the checkerboard corner points of the overlapped part of the collected images to obtain the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras;
and the measuring module is used for replacing the chessboard grid calibration plate with a measuring target and measuring the measuring target according to the image-physical coordinate conversion relation of each camera and the conversion relation between the physical coordinate of the reference camera and the physical coordinate of each other camera.
In a third aspect, the present application provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any one of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any of the first aspects described above.
According to the multi-camera-based measurement method, the checkerboard calibration plate provided with the characteristic polygon patterns is placed on the object stage so that all cameras can acquire images simultaneously in a single pass; for each acquired image, straight line fitting is performed around the center point used to determine the transverse and longitudinal straight lines of the checkerboard, yielding the transverse straight line and the longitudinal straight line; the transverse angle and longitudinal angle of the checkerboard in the acquired image are determined from these lines, from which the transverse pixel equivalent and longitudinal pixel equivalent of each camera are calculated; the maximum region of interest is drawn in the acquired image from the image coordinates and physical coordinates of the characteristic polygon pattern together with the transverse pixel equivalent, longitudinal pixel equivalent, transverse angle and longitudinal angle; all corner points in the maximum region of interest are found, their offsets relative to a standard grid are analyzed, and the image-physical coordinate conversion relation of the camera is then calculated by a weighted least squares method and bilinear interpolation; the physical coordinate conversion relation between cameras is determined from the overlapping parts of the acquired images of adjacent cameras, automatic multi-camera calibration is completed by combining it with the image-physical coordinate conversion relation of each camera, and the calibrated cameras are used to measure a target. The multi-camera-based measurement method provided by the embodiments of the application can perform fully automatic calibration of inspection equipment with large numbers of industrial cameras, improves the efficiency and stability of multi-camera calibration, and can save a large amount of labor cost when the image acquisition equipment of the machine platform must be adjusted, and therefore recalibrated, many times.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flowchart of a multi-camera based measurement method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a linear array camera acquisition area according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a characteristic polygon pattern provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a center point of a checkerboard provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a transverse straight line region and a longitudinal straight line region provided by an embodiment of the present application;
FIG. 6 is a schematic view of a transverse line and a longitudinal line provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an iterative line fitting method provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a transverse reference line and a longitudinal reference line provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a captured image with a checkerboard filled up laterally according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a captured image with the left side of the checkerboard not fully occupied according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a captured image with the right side of the checkerboard left unoccupied according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram of acquiring a checkerboard corner provided in an embodiment of the present application;
fig. 13 is a schematic flow chart of a method for re-judging edge points of a checkerboard according to an embodiment of the present application;
fig. 14 is a schematic diagram of a multi-camera based measurement apparatus provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a measurement method based on multiple cameras, which is applied to multiple-camera detection equipment, wherein the multiple-camera detection equipment comprises at least two cameras, and the acquisition regions of adjacent cameras in the at least two cameras are partially overlapped, as shown in fig. 1, the measurement method comprises the following steps:
s101, placing a chessboard grid calibration plate on an objective table to obtain collected images of all cameras;
s102, obtaining a central point for determining a checkerboard transverse straight line and a checkerboard longitudinal straight line in each collected image, and performing straight line fitting by taking the central point as a center to obtain the transverse straight line and the longitudinal straight line of each collected image;
s103, determining the transverse angle and the longitudinal angle of the checkerboard in each acquired image according to the transverse straight line and the longitudinal straight line of each acquired image;
step S104, calculating the horizontal pixel equivalent and the vertical pixel equivalent of the camera corresponding to each acquired image according to the horizontal straight line, the vertical straight line, the horizontal angle and the vertical angle of each acquired image and the physical width of a checkerboard which is input in advance;
step S105, obtaining image coordinates and physical coordinates of the characteristic polygonal pattern in each collected image, and analyzing according to the image coordinates, the physical coordinates, the horizontal pixel equivalent, the vertical pixel equivalent, the horizontal angle and the vertical angle of the characteristic polygonal pattern to obtain the maximum interesting area containing all checkerboard corner points in each collected image;
step S106, aiming at each collected image, acquiring all checkerboard angular points in the maximum region of interest, generating corresponding standard grids according to all the checkerboard angular points, and calculating the offset of each checkerboard angular point;
step S107, aiming at each collected image, calculating the image-physical coordinate conversion relation of a camera corresponding to the collected image through a weighted least square method and bilinear interpolation according to the image coordinate and the offset of each checkerboard angular point;
s108, selecting any camera as a reference camera, and calculating according to the physical coordinates of the checkerboard corner points of the overlapped part of the collected images to obtain the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras;
and step S109, replacing the checkerboard calibration board with a measurement target, and measuring the measurement target according to the image-physical coordinate conversion relation of each camera and the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras.
Specifically, line-scan cameras are adopted in this embodiment of the multi-camera-based measurement method. The following description takes a line-scan setup with four cameras, as shown in fig. 2, as an example.
The checkerboard calibration plate is placed on the object stage or the conveying belt so that it is simultaneously present in the acquisition range of all cameras. Characteristic polygon patterns are arranged in advance along the upper edge of the checkerboard on the calibration plate, according to the acquisition range of each camera, so that every camera's acquisition range contains a different, unique characteristic polygon pattern. The characteristic polygon pattern may be a polygon as shown in fig. 3; the present application does not limit its specific shape.
After each of the four cameras acquires an image, the center point used to determine the transverse and longitudinal straight lines of the checkerboard is first found in that acquired image, and the transverse straight line and the longitudinal straight line passing through this center point are fitted.
The angles between the transverse and longitudinal straight lines and the horizontal and vertical axes of the acquired image's coordinate system give the transverse angle and the longitudinal angle of the checkerboard in the acquired image. From the transverse straight line, the longitudinal straight line, the transverse angle and the longitudinal angle, the transverse pixel equivalent and the longitudinal pixel equivalent of the camera can then be calculated.
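A minimal sketch of how the two angles could be obtained from the fitted lines, assuming each line is available as two endpoints in image coordinates (the endpoint values below are illustrative):

import math

def line_angle_deg(p0, p1):
    """Angle of the line through p0 and p1, measured from the image x-axis."""
    return math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0]))

# Illustrative endpoints of the fitted lines (image coordinates).
transverse_angle = line_angle_deg((100.0, 502.3), (900.0, 498.1))            # tilt from the horizontal axis
longitudinal_angle = line_angle_deg((512.0, 100.0), (508.0, 900.0)) - 90.0   # tilt from the vertical axis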
For each acquired image, the characteristic polygon pattern is matched to determine its image coordinates, and its pre-input physical coordinates are retrieved. The four corner points of the maximum ROI (region of interest) of the checkerboard in the acquired image are then determined by combining the image coordinates and physical coordinates of the characteristic polygon pattern with the transverse pixel equivalent, longitudinal pixel equivalent, transverse angle and longitudinal angle, which fixes the maximum ROI of the checkerboard in the acquired image.
The image coordinates of all checkerboard corner points in the maximum ROI are obtained by coordinate analysis based on straight line fitting for each corner point; a corresponding standard grid is generated from these image coordinates, and the offset (Δx, Δy) of each checkerboard corner point relative to the standard grid is calculated.
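A minimal sketch of generating a standard grid and the per-corner offsets, assuming the detected corner points are arranged row by row, the grid is anchored at the top-left corner, and the transverse angle is measured from the horizontal axis while the longitudinal angle is the tilt from the vertical axis; these conventions are assumptions for illustration:

import numpy as np

def corner_offsets(corners: np.ndarray, square_px_x: float, square_px_y: float,
                   transverse_angle: float, longitudinal_angle: float):
    """corners: array of shape (rows, cols, 2) with detected corner image
    coordinates. Builds an ideal (standard) grid anchored at the top-left
    corner and returns the per-corner offsets (dx, dy)."""
    rows, cols, _ = corners.shape
    ca, sa = np.cos(np.radians(transverse_angle)), np.sin(np.radians(transverse_angle))
    cb, sb = np.cos(np.radians(longitudinal_angle)), np.sin(np.radians(longitudinal_angle))
    origin = corners[0, 0]
    j = np.arange(cols)[None, :, None]   # column index
    i = np.arange(rows)[:, None, None]   # row index
    # Ideal grid: step along the transverse direction for columns,
    # along the longitudinal direction for rows.
    standard = origin + j * square_px_x * np.array([ca, sa]) \
                      + i * square_px_y * np.array([sb, cb])
    return corners - standard            # offsets (dx, dy) per corner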
And then analyzing the offsets of all the checkerboard corner points by a weighted least square method to obtain an offset rule of the image coordinate and the physical coordinate under the camera, then realizing the mutual conversion of the image coordinate and the physical coordinate by bilinear interpolation, and finding the image-physical coordinate conversion relation of the camera to finish the single-camera calibration of the camera. The bilinear interpolation formula is as follows:
f(x,y) = f(0,0)(1-x)(1-y) + f(1,0)x(1-y) + f(0,1)(1-x)y + f(1,1)xy (1)
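A minimal sketch of formula (1), interpolating a value inside one grid cell from its four corner values; the function and variable names are illustrative:

def bilinear(f00, f10, f01, f11, x, y):
    """Bilinear interpolation on the unit square, formula (1):
    x, y in [0, 1] are the fractional positions inside one grid cell."""
    return (f00 * (1 - x) * (1 - y)
            + f10 * x * (1 - y)
            + f01 * (1 - x) * y
            + f11 * x * y)

# Example: interpolate an x-offset inside a cell whose four corners have
# offsets 0.2, 0.4, 0.1 and 0.3 pixels, at the cell-relative position (0.25, 0.5).
dx = bilinear(0.2, 0.4, 0.1, 0.3, 0.25, 0.5)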
Because the acquisition ranges of adjacent cameras partially overlap, the acquired images of adjacent cameras share part of the checkerboard. Using this property, any camera (generally the leftmost one) is selected as the reference camera, the checkerboard corner points in the overlapping part of the reference camera's acquired image and each adjacent camera's acquired image are selected, and the two sets of physical coordinates of these corner points, one set per camera, are substituted into the physical coordinate conversion relation formula to obtain the conversion relation of physical coordinates between the two camera coordinate systems. The physical coordinate conversion relation formula is as follows:
(x_ref, y_ref) = R(θ)·(x_adj, y_adj) + T (2)
wherein (x_ref, y_ref) are the physical coordinates of a point under the reference camera, (x_adj, y_adj) are the physical coordinates of the same point under the adjacent camera, T is the translation matrix, and θ is the rotation angle, R(θ) denoting the corresponding rotation matrix.
The translation matrix and the rotation angle are obtained by iterating a gradient descent update derived from the loss function of the physical coordinate conversion relation formula.
From the physical coordinate transformation relational equation, the loss function can be derived as follows:
f(T, θ) = Σ ‖(x_ref, y_ref) - R(θ)·(x_adj, y_adj) - T‖² (3)
with the sum taken over the checkerboard corner points of the overlapping part.
the gradient descent formula is as follows:
(T, θ)_(k+1) = (T, θ)_k - α·∇f((T, θ)_k) (4)
where f () is the loss function described above.
The iteration of the gradient descent formula stops when either of the following conditions is met (a sketch of the iteration is given after this list):
(a) the difference between two successive loss function values is within 1e-5;
(b) 3000 iterations have been completed.
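A minimal sketch of estimating the rotation angle and translation between the reference camera and one adjacent camera from the physical coordinates of the shared checkerboard corner points, using gradient descent with the two stopping conditions above; the learning rate and the numerical gradient are implementation assumptions:

import numpy as np

def fit_rigid_transform(p_ref: np.ndarray, p_other: np.ndarray,
                        lr: float = 1e-3, max_iter: int = 3000, tol: float = 1e-5):
    """Estimate theta and (tx, ty) such that p_ref ~= R(theta) @ p_other + t,
    by gradient descent on the squared loss of formula (3).
    p_ref, p_other: (N, 2) arrays of matching physical coordinates."""
    def loss(params):
        theta, tx, ty = params
        r = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        pred = p_other @ r.T + np.array([tx, ty])
        return np.sum((p_ref - pred) ** 2)

    params = np.zeros(3)
    prev = loss(params)
    for _ in range(max_iter):                      # stop condition (b): 3000 iterations
        grad = np.zeros(3)
        eps = 1e-6
        for k in range(3):                         # numerical gradient of the loss
            step = np.zeros(3)
            step[k] = eps
            grad[k] = (loss(params + step) - loss(params - step)) / (2 * eps)
        params -= lr * grad
        cur = loss(params)
        if abs(prev - cur) < tol:                  # stop condition (a): loss change within 1e-5
            break
        prev = cur
    return params                                  # (theta, tx, ty)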
The multi-camera-based measurement method can be used for carrying out full-automatic calibration on large-scale detection equipment with a plurality of industrial cameras, the calibration efficiency and stability of the multi-camera are improved, and a large amount of labor cost can be saved under the condition that multiple times of calibration are needed for multiple times of adjustment of the image acquisition equipment of the machine platform.
In some embodiments, the step S102 includes:
step a1, aiming at each collected image, determining the central point of the checkerboard horizontal straight line and the central point of the checkerboard vertical straight line in the image;
a2, drawing a transverse straight line interested area and a longitudinal straight line interested area in each acquired image by taking the central point for determining the transverse straight line and the longitudinal straight line of the checkerboard in each acquired image as the center according to the central point for determining the transverse straight line and the longitudinal straight line of the checkerboard, the pixel width of one checkerboard, the pixel specification of the transverse straight line interested area and the pixel specification of the longitudinal straight line interested area which are input in advance in each acquired image;
step a3, aiming at each collected image, respectively obtaining the checkerboard edge points in the transverse straight line interested region and the checkerboard edge points in the longitudinal straight line interested region, fitting transverse straight lines according to the checkerboard edge points in the transverse straight line interested region, and fitting longitudinal straight lines according to the checkerboard edge points in the longitudinal straight line interested region to obtain the transverse straight lines and the longitudinal straight lines of each collected image.
Specifically, for each acquired image, as shown in fig. 4, the image is traversed from the upper point ((width-1)/2, 0) and from the lower point ((width-1)/2, height-1) towards the point ((width-1)/2, (height-1)/2) until the first pixel with a lighter gray value is found from each side (i.e. the checkerboard is reached); the midpoint of these two points is taken as the center point of the acquired image used to determine the transverse and longitudinal straight lines of the checkerboard.
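A minimal sketch of this top/bottom scan, assuming an 8-bit grayscale acquired image and an illustrative brightness threshold that separates the white calibration board from the darker background:

import numpy as np

def find_center_point(img: np.ndarray, bright_thresh: int = 128):
    """Scan the middle column from the top and from the bottom towards the
    image center until the first 'lighter' pixel (the calibration board) is
    met, and return the midpoint of the two hits as the center point."""
    height, width = img.shape[:2]
    col = (width - 1) // 2
    top_hit, bottom_hit = None, None
    for row in range(0, (height - 1) // 2 + 1):               # downward scan
        if img[row, col] >= bright_thresh:
            top_hit = row
            break
    for row in range(height - 1, (height - 1) // 2 - 1, -1):  # upward scan
        if img[row, col] >= bright_thresh:
            bottom_hit = row
            break
    if top_hit is None or bottom_hit is None:
        raise ValueError("calibration board not found in the middle column")
    return col, (top_hit + bottom_hit) // 2                   # (x, y) center point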
The pixel width of one checkerboard is input in advance, and based on the pixel width, as shown in fig. 5, a horizontal straight line ROI which is 5 checkerboard pixel widths wide and 1 checkerboard pixel width high and a vertical straight line ROI which is 1 checkerboard pixel width wide and 5 checkerboard pixel widths high are drawn with the center point as a center.
Then, a transverse straight line is fitted in the transverse straight line ROI and a longitudinal straight line is fitted in the longitudinal straight line ROI, giving the transverse straight line and the longitudinal straight line shown in fig. 6.
In some embodiments, the step a3 includes:
a31, aiming at each collected image, respectively detecting the edge points of the checkerboards at intervals of fixed length from the starting point to the ending point along the scanning direction according to the starting point, the ending point, the pixel specification and the scanning direction of the transverse straight line interesting area and the starting point, the ending point, the pixel specification and the scanning direction of the longitudinal straight line interesting area to obtain the edge points of the checkerboards in the transverse straight line interesting area and the edge points of the checkerboards in the longitudinal straight line interesting area;
step a32, aiming at each collected image, respectively carrying out linear fitting on a transverse linear interested region and a longitudinal linear interested region through random consistency sampling to obtain a plurality of lines to be selected;
step a33, refitting the first target straight line in the transverse straight line interest region by a weighted least squares method to obtain the transverse straight line, and refitting the second target straight line in the longitudinal straight line interest region to obtain the longitudinal straight line; the first target straight line is the line to be selected, among those in the transverse straight line interest region, that has the most checkerboard edge points whose distance to it is within a preset pixel error range; the second target straight line is the line to be selected, among those in the longitudinal straight line interest region, that has the most checkerboard edge points whose distance to it is within the preset pixel error range.
Specifically, according to the starting point and the ending point of the transverse straight line ROI and of the longitudinal straight line ROI, the checkerboard edge points inside each ROI are scanned along the corresponding scanning direction at a fixed interval, so that all checkerboard edge points in the transverse straight line ROI and all checkerboard edge points in the longitudinal straight line ROI are obtained. The fixed interval controls the run time and is generally set to 3 pixels; if run time is not a concern, it can be set to 1 pixel for a much finer scan.
And respectively carrying out random consistency sampling on the edge points of the checkerboard aiming at the transverse straight line ROI and the longitudinal straight line ROI, and randomly selecting two points to carry out straight line fitting to obtain a plurality of straight lines to be selected.
For each line to be selected, the distance from every other checkerboard edge point in the same ROI to that line is calculated, and the edge points whose distance to the line is within a preset pixel error range are retained; this error range can generally be set to 1 pixel.
In the transverse straight line ROI and in the longitudinal straight line ROI, the line to be selected that retains the most checkerboard edge points within the preset pixel error range is chosen (the first target straight line and the second target straight line, respectively), and straight line fitting is performed on the retained edge points by a weighted least squares method, whose formula is as follows:
W*A*X=W*B (5)
wherein A, B is checkerboard edge point information, X is a straight line parameter, and W is a weight parameter. The calculation formula of the weight parameter is as follows:
w = w(error, σ) (6)
where error is the distance from the current point to the fitted straight line, and σ is the allowed pixel error from a point to the straight line. Iterative calculation is performed with the weighted least squares formula and the weight parameter formula following the flow shown in fig. 7, until the difference between two successive fitting results is within the allowed error value (1e-7 for the first fitting and 1e-5 for the second fitting), finally yielding the transverse straight line and the longitudinal straight line of the acquired image.
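A minimal sketch of the candidate-line selection and weighted least squares refit for one (near-horizontal) line ROI; for the longitudinal ROI the roles of x and y would be swapped. The Gaussian weight used here is an assumed form, not the patent's exact formula (6):

import numpy as np

def fit_line_ransac_wls(points: np.ndarray, err_range: float = 1.0,
                        n_candidates: int = 100, sigma: float = 1.0,
                        n_refits: int = 5):
    """points: (N, 2) checkerboard edge points inside one line ROI.
    Returns (k, b) of y = k*x + b for the best candidate line, refit by
    weighted least squares."""
    rng = np.random.default_rng(0)
    best_inliers, best = None, -1
    for _ in range(n_candidates):                      # random consistency sampling
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:
            continue
        k = (y2 - y1) / (x2 - x1)
        b = y1 - k * x1
        dist = np.abs(k * points[:, 0] - points[:, 1] + b) / np.hypot(k, 1.0)
        inliers = points[dist <= err_range]
        if len(inliers) > best:
            best, best_inliers = len(inliers), inliers
    if best_inliers is None:
        raise ValueError("no valid candidate line found")

    x, y = best_inliers[:, 0], best_inliers[:, 1]
    w = np.ones_like(x)
    for _ in range(n_refits):                          # W*A*X = W*B, formula (5)
        a = np.column_stack([x, np.ones_like(x)])
        k, b = np.linalg.lstsq(a * w[:, None], y * w, rcond=None)[0]
        error = np.abs(k * x + b - y)
        w = np.exp(-(error / sigma) ** 2)              # assumed Gaussian weight form
    return k, b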
In some embodiments, the step S104 includes:
b1, determining a transverse reference straight line of each acquired image at a preset distance from the transverse straight line and a longitudinal reference straight line of each acquired image at a preset distance from the longitudinal straight line according to the transverse straight line, the longitudinal straight line, the transverse angle and the longitudinal angle of each acquired image; the preset distance is half of the diagonal line of a single grid in the checkerboard;
step b2, aiming at each collected image, calculating the horizontal pixel equivalent of the camera according to the pixel width among all the checkerboard edge points on the horizontal reference line and the physical width of one checkerboard input in advance, and calculating the vertical pixel equivalent of the camera according to the pixel width among all the checkerboard edge points on the vertical reference line and the physical width of one checkerboard input in advance.
Specifically, taking the transverse straight line of the current acquired image as a reference, a parallel line inclined at the same transverse angle is drawn in the checkerboard as the transverse reference straight line; the distance between the transverse reference straight line and the transverse straight line is half the diagonal of a single square of the checkerboard in the current acquired image. The transverse pixel equivalent of the camera corresponding to the current acquired image is then obtained from the pixel widths between the checkerboard edge points crossed by the transverse reference line and the pre-input physical width of one checkerboard square.
Likewise, taking the longitudinal straight line of the current acquired image as a reference, a parallel line inclined at the same longitudinal angle is drawn in the checkerboard as the longitudinal reference straight line; the distance between the longitudinal reference straight line and the longitudinal straight line is half the diagonal of a single square of the checkerboard in the current acquired image. The longitudinal pixel equivalent of the camera corresponding to the current acquired image is obtained from the pixel widths between the checkerboard edge points crossed by the longitudinal reference line and the pre-input physical width of one checkerboard square.
The transverse reference line and the longitudinal reference line are drawn as shown in fig. 8, corresponding to the transverse line and the longitudinal line in fig. 6.
The pixel equivalents enable size conversion between physical space and the image, and are used to complete the subsequent camera calibration.
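A minimal sketch of the pixel equivalent calculation, assuming the edge points crossed by one reference line have already been extracted and ordered along the line, and that the pre-input square width is in millimetres:

import numpy as np

def pixel_equivalent(edge_points: np.ndarray, square_width_mm: float) -> float:
    """edge_points: (N, 2) image coordinates of the checkerboard edge points
    crossed by one reference line, ordered along the line. The average pixel
    spacing between consecutive edge points corresponds to one square, so the
    pixel equivalent is millimetres per pixel."""
    steps = np.linalg.norm(np.diff(edge_points, axis=0), axis=1)
    return square_width_mm / float(np.mean(steps))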
In some embodiments, the step S105 includes:
step c1, aiming at each collected image, matching according to the characteristic polygon pattern template to obtain the image coordinates and physical coordinates of the characteristic polygon pattern in the collected image;
step c2, determining the first line of the checkerboard in each collected image according to the image coordinate and the physical coordinate of the characteristic polygon pattern in the collected image, and determining the horizontal full state of the checkerboard in the collected image by combining the physical coordinate, the horizontal pixel equivalent and the horizontal angle of the characteristic polygon pattern and the preset physical width of the checkerboard calibration board; the horizontal full state comprises horizontal full, horizontal left side not full and horizontal right side not full;
step c3, according to the horizontal full state of the collected image, determining the undetermined upper edge line of the maximum interest area in the first line of the checkerboards in the collected image, and confirming all checkerboard edge points on the undetermined upper edge line;
step c4, performing adaptive movement on end points on two sides of the undetermined upper edge line, and determining an undetermined upper left corner point and an undetermined upper right corner point of the maximum interest area of the collected image;
step c5, determining an undetermined left lower corner point and an undetermined right lower corner point of the maximum interest area of the collected image according to the undetermined left upper corner point, the undetermined right upper corner point, the longitudinal angle, the longitudinal pixel equivalent and the physical height of the chessboard grid calibration plate of the collected image;
step c6, judging whether an undetermined lower left corner point and an undetermined lower right corner point of the maximum interest area of the collected image exceed the range of the collected image;
step c7, if either point exceeds the range, moving the undetermined left lower corner point and the undetermined right lower corner point inwards in units of one checkerboard width according to the transverse angle and the longitudinal angle until both points lie within the range of the collected image and on the same checkerboard row, determining the moved undetermined left lower corner point as the target lower left corner point of the maximum interest area of the collected image, and determining the moved undetermined right lower corner point as the target lower right corner point of the maximum interest area of the collected image;
and c8, adjusting the undetermined upper left corner point and the undetermined upper right corner point according to the target lower left corner point and the target lower right corner point of the maximum interest region of the collected image, and obtaining the maximum interest region in the collected image.
Specifically, the characteristic polygon patterns on the checkerboard calibration board are arranged in advance according to the acquisition area of each camera of the line-scan setup, so that exactly one characteristic polygon pattern appears in each camera's acquired image; the image coordinates and physical coordinates of the characteristic polygon pattern in an acquired image can therefore be determined by template matching.
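A minimal sketch of the template matching step using OpenCV, assuming one template image per characteristic polygon pattern and an illustrative lookup table from pattern serial number to pre-input physical coordinates:

import cv2
import numpy as np

def locate_feature_polygon(image: np.ndarray, templates: dict, lookup: dict):
    """templates: {serial_number: template image} for every characteristic
    polygon pattern; lookup: {serial_number: (X_mm, Y_mm)} pre-input physical
    coordinates on the calibration board. Returns the best-matching pattern's
    serial number, image coordinates and physical coordinates."""
    best = None
    for serial, tpl in templates.items():
        result = cv2.matchTemplate(image, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)
        center = (top_left[0] + tpl.shape[1] / 2.0, top_left[1] + tpl.shape[0] / 2.0)
        if best is None or score > best[0]:
            best = (score, serial, center)
    _, serial, center = best
    return serial, center, lookup[serial]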
The first row of the checkerboard in the acquired image is determined from the characteristic polygon pattern. The physical lengths from the characteristic polygon pattern to the left end and the right end of the checkerboard calibration plate are determined from the physical coordinates of the pattern and the pre-input physical width of the calibration plate, and physical-to-image size conversion through the transverse pixel equivalent gives the pixel lengths from the pattern to the left end and the right end of the calibration plate. Whether the left end and the right end of the checkerboard calibration plate exceed the image boundary in the current acquired image is then confirmed from the transverse angle and these pixel lengths, so as to determine the transverse full state of the acquired image, which falls into the following three cases:
i) lateral fullness (fig. 9): the pixel lengths of the characteristic polygon pattern from the left end and the right end of the checkerboard calibration board respectively exceed the distance from the characteristic polygon pattern to the left boundary and the right boundary of the acquired image, namely the acquired image is acquired by a camera positioned in the middle;
ii) lateral left side not full (fig. 10): the pixel length of the characteristic polygon pattern from the left end of the checkerboard calibration plate does not exceed the distance from the characteristic polygon pattern to the left boundary of the collected image, and the pixel length of the characteristic polygon pattern from the right end of the checkerboard calibration plate exceeds the distance from the characteristic polygon pattern to the right boundary of the collected image, namely the pixel length is collected by a camera positioned at the leftmost side when the image is collected;
iii) lateral right side not full (FIG. 11): the pixel length of the characteristic polygon pattern from the left end of the checkerboard calibration plate exceeds the distance from the characteristic polygon pattern to the left boundary of the collected image, and the pixel length of the characteristic polygon pattern from the right end of the checkerboard calibration plate does not exceed the distance from the characteristic polygon pattern to the right boundary of the collected image, namely the pixel length is collected by the camera positioned at the rightmost side during image collection.
A straight line crossing the first row of checkerboards is drawn in the acquired image, the image coordinates of all checkerboard edge points on the straight line are solved, and the end points on both sides of the undetermined upper edge line are then moved adaptively. The adaptive movement is performed according to the distance from each end point to the edge of the acquired image along the extension of the straight line on that side: if the distance is greater than one tenth of the checkerboard width, the end point is moved to the midpoint between its current position and the image edge on that extension; if the distance is less than one tenth of the checkerboard width, the end point is moved to the midpoint between the checkerboard edge point adjacent to it and the image edge on that extension. The undetermined upper left corner point and the undetermined upper right corner point of the maximum ROI of the acquired image are thereby determined.
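A minimal one-dimensional sketch of this adaptive movement for a single end point is given below; the coordinates are treated as positions along the scan line, and the function and parameter names are illustrative assumptions rather than the patent's implementation.

```python
def adapt_endpoint(endpoint_x, image_edge_x, neighbor_edge_point_x, checker_width_px):
    """Adaptively move one end point of the undetermined upper edge line.

    endpoint_x:            current end point position on the scan line
    image_edge_x:          intersection of the line's extension with the image border on that side
    neighbor_edge_point_x: checkerboard edge point adjacent to the end point
    checker_width_px:      pixel width of one checkerboard square
    """
    dist_to_border = abs(image_edge_x - endpoint_x)
    if dist_to_border > checker_width_px / 10.0:
        # far from the border: move to the midpoint of the current position and the image border
        return (endpoint_x + image_edge_x) / 2.0
    # close to the border: move to the midpoint of the adjacent edge point and the image border
    return (neighbor_edge_point_x + image_edge_x) / 2.0
```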
The pixel height of the checkerboard calibration board is calculated from the longitudinal pixel equivalent of the acquired image and the physical height of the checkerboard calibration board; the undetermined lower left corner point and the undetermined lower right corner point of the maximum ROI of the acquired image are then determined at positions located this pixel height below the undetermined upper left corner point and the undetermined upper right corner point along the longitudinal angle.
However, due to the size of the acquisition range or the longitudinal angle of the camera, the undetermined lower left corner point and the undetermined lower right corner point determined above may fall outside the frame of the acquired image. In this case they need to be moved. The movement is performed in units of one checkerboard, and because the acquired image contains the upper and lower portions of the checkerboard calibration board, the undetermined lower left corner point and the undetermined lower right corner point only need to be moved transversely into the checkerboards within the acquired image. After they are moved, the undetermined upper left corner point and the undetermined upper right corner point also need to be moved correspondingly, so that the maximum ROI formed by the four corner points remains a parallelogram, with all four corner points located inside checkerboard squares.
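The following sketch illustrates this inward shift under simplifying assumptions: the movement follows only the transverse angle, the check that both corners land on the same checkerboard row is omitted, and all names are hypothetical.

```python
import numpy as np

def shift_lower_corners_inward(lower_left, lower_right, upper_left, upper_right,
                               image_w, image_h, checker_width_px,
                               transverse_angle_deg, max_steps=100):
    """Shift the pending lower corners inward in whole checkerboard widths until
    both lie inside the acquired image, shifting the matching upper corners by
    the same vector so the four corners still form a parallelogram.
    Corners are (x, y) pixel coordinates; the step follows the transverse angle.
    """
    step = checker_width_px * np.array([np.cos(np.radians(transverse_angle_deg)),
                                        np.sin(np.radians(transverse_angle_deg))])

    def inside(p):
        return 0.0 <= p[0] < image_w and 0.0 <= p[1] < image_h

    ll, lr = np.asarray(lower_left, float), np.asarray(lower_right, float)
    ul, ur = np.asarray(upper_left, float), np.asarray(upper_right, float)
    for _ in range(max_steps):
        if inside(ll):
            break
        ll, ul = ll + step, ul + step      # move the left corners inward (to the right)
    for _ in range(max_steps):
        if inside(lr):
            break
        lr, ur = lr - step, ur - step      # move the right corners inward (to the left)
    return ll, lr, ul, ur
```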
In some embodiments, the step c1 includes:
step c11, generating a characteristic polygon pattern template according to a preset characteristic polygon pattern;
step c12, aiming at each collected image, carrying out template matching on the collected image according to the characteristic polygonal pattern template to obtain the image coordinates and the serial number of the characteristic polygonal pattern in the collected image;
and c13, obtaining the physical coordinates of the characteristic polygon pattern according to the serial number of the characteristic polygon pattern in the collected image and the pre-input physical coordinate comparison table of the characteristic polygon pattern on the chessboard pattern calibration board.
Specifically, a characteristic polygon pattern template is generated based on a characteristic polygon pattern that is set in advance and is shaped as shown in fig. 3, and is used for matching the characteristic polygon pattern in the captured image.
The acquired image is matched against the characteristic polygon pattern template to obtain the image coordinates of the characteristic polygon pattern in the acquired image. When the characteristic polygon patterns are arranged on the checkerboard calibration board in advance, each pattern is given a sequence number, and a comparison table between each sequence number and the corresponding physical coordinates is established; the physical coordinates corresponding to the sequence number of the matched characteristic polygon pattern can therefore be obtained by table lookup.
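One possible form of this matching and lookup is sketched below with OpenCV template matching; the lookup table contents, the template dictionary, and the function name are purely illustrative assumptions, not values from the patent.

```python
import cv2

# Hypothetical pre-entered table: pattern sequence number -> physical (x, y) coordinates in mm
PATTERN_PHYS_COORDS = {0: (50.0, 10.0), 1: (250.0, 10.0), 2: (450.0, 10.0)}

def locate_pattern(image_gray, templates):
    """Match every numbered characteristic-polygon template against one acquired image
    and return (sequence number, image coordinates, physical coordinates) of the best match.
    `templates` maps sequence number -> grayscale template image."""
    best = None
    for seq, tmpl in templates.items():
        res = cv2.matchTemplate(image_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if best is None or max_val > best[0]:
            h, w = tmpl.shape[:2]
            center = (max_loc[0] + w / 2.0, max_loc[1] + h / 2.0)  # pattern centre in image coords
            best = (max_val, seq, center)
    _, seq, center = best
    return seq, center, PATTERN_PHYS_COORDS[seq]   # physical coordinates found by table lookup
```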
In some embodiments, the step S106 includes:
d1, for each acquired image, dividing the maximum interested area in the acquired image into a plurality of interested subareas from left to right and from top to bottom according to the transverse pixel equivalent, the longitudinal pixel equivalent and the physical width of the checkerboard; the size of each interested subarea is the pixel size of one checkerboard in the acquired image;
d2, for each interested partition, performing straight line fitting of a transverse straight line and a longitudinal straight line according to the checkerboard edge points in the interested partition to obtain the transverse straight line and the longitudinal straight line of each interested partition, and taking the intersection point of the transverse straight line and the longitudinal straight line of each interested partition as the checkerboard corner point of the maximum interested area in the acquired image;
and d3, generating a corresponding standard grid according to all the checkerboard corner points of the region of maximum interest in the collected image, and calculating the offset of each checkerboard corner point.
Specifically, as shown in fig. 12, for each acquired image, the determined maximum ROI is the ROI that includes the most checkerboard corner points in the acquired image, and the pixel size of each checkerboard can be calculated from the previously input physical width of the checkerboards and the transverse pixel equivalent and the longitudinal pixel equivalent of the acquired image. ROI partitions whose size equals the checkerboard pixel size are divided from left to right and from top to bottom starting from the upper left corner of the maximum ROI, and each ROI partition contains exactly one checkerboard corner point.
And acquiring checkerboard edge points in each ROI partition, fitting transverse straight lines and longitudinal straight lines in the ROI partition through the checkerboard edge points, calculating image coordinates of intersection points of the transverse straight lines and the longitudinal straight lines in the ROI partition, and taking the image coordinates of the intersection points as image coordinates of checkerboard corner points contained in the ROI partition.
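One way such a partition-wise fit and intersection might be written is sketched below, assuming the edge points of each partition have already been collected. The fitting method inside each partition is not specified in this passage; a total-least-squares fit is used here as one plausible choice (it also handles near-vertical lines), and all names are illustrative.

```python
import numpy as np

def fit_line(points):
    """Fit a 2-D line a*x + b*y + c = 0 (with a^2 + b^2 = 1) to Nx2 edge points
    by total least squares."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)       # principal axis of the centred points
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    c = -normal.dot(mean)
    return normal[0], normal[1], c

def intersect(line1, line2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0."""
    a = np.array([[line1[0], line1[1]], [line2[0], line2[1]]])
    b = -np.array([line1[2], line2[2]])
    return np.linalg.solve(a, b)               # image coordinates of the corner point

# Hypothetical usage for one ROI partition:
# corner = intersect(fit_line(horizontal_edge_points), fit_line(vertical_edge_points))
```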
After the image coordinates of all the checkerboard corner points are calculated, a corresponding standard grid is generated from all the checkerboard corner points. Using the checkerboard pixel size obtained by converting the previously input physical width of the checkerboard with the transverse pixel equivalent and the longitudinal pixel equivalent, and taking the center of the maximum ROI as the origin, the standard coordinates of each checkerboard corner point are generated according to the transverse angle and the longitudinal angle, and the standard grid is generated from these standard coordinates.
And comparing the standard coordinates of each checkerboard corner point on the standard grid with the image coordinates of the corresponding checkerboard corner point in the maximum ROI, and calculating the offset of each checkerboard corner point.
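A sketch of how the standard grid and per-corner offsets might be computed is given below, assuming the detected corner points are already ordered row by row; the orientation of the row and column steps with respect to the transverse and longitudinal angles, and all names, are illustrative assumptions.

```python
import numpy as np

def standard_grid_offsets(measured_corners, rows, cols, checker_w_px, checker_h_px,
                          roi_center, transverse_angle_deg, longitudinal_angle_deg):
    """Build the standard grid of corner coordinates around the ROI centre and
    return the per-corner offset (measured - standard).

    measured_corners: array of shape (rows, cols, 2) with image coordinates,
    ordered row by row as produced by the partition-based detection.
    """
    ta = np.radians(transverse_angle_deg)
    la = np.radians(longitudinal_angle_deg)
    col_step = checker_w_px * np.array([np.cos(ta), np.sin(ta)])    # along checkerboard rows
    row_step = checker_h_px * np.array([-np.sin(la), np.cos(la)])   # along checkerboard columns

    i = np.arange(rows) - (rows - 1) / 2.0      # row indices centred on the ROI centre
    j = np.arange(cols) - (cols - 1) / 2.0      # column indices centred on the ROI centre
    standard = (np.asarray(roi_center)
                + i[:, None, None] * row_step
                + j[None, :, None] * col_step)  # shape (rows, cols, 2)
    return np.asarray(measured_corners) - standard
```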
Compared with the corner-search functions in the HALCON operator library, this method yields corner points that are already ordered, so no subsequent sorting is needed, making it more convenient and faster.
In some embodiments, as shown in fig. 13, the method further comprises:
step S201, calculating the average gray value of the first ten pixels of the checkerboard edge points in the preset direction and the average gray value of the last ten pixels of the checkerboard edge points in the preset direction aiming at each checkerboard edge point; the preset direction is the scanning line direction when the edge point of the checkerboard is obtained;
step S202, judging whether the average gray value of the front ten pixels in the preset direction of the edge point of the checkerboard and the average gray value of the back ten pixels in the preset direction meet a fixed multiple difference and the difference is larger than a preset difference;
step S203, if the average gray value of the first ten pixels in the preset direction of the checkerboard edge point and the average gray value of the last ten pixels in the preset direction meet a fixed multiple difference and the difference is greater than a preset difference, determining the checkerboard edge point as a real edge point;
step S204, if the average gray value of the previous ten pixels in the preset direction of the checkerboard edge point and the average gray value of the next ten pixels in the preset direction do not meet a fixed multiple difference and the difference is greater than a preset difference, determining the checkerboard edge point as a false edge point, and deleting the checkerboard edge point.
Specifically, when the checkerboard edge points are obtained by scanning, a point is taken as a checkerboard edge point whenever the difference between the gray values of the points before and after it is greater than a fixed threshold; however, a point whose gray-value difference exceeds this threshold is not necessarily a real checkerboard edge point, so the detected edge points need to be verified.
For each checkerboard edge point, the gray values of the ten pixels before the edge point on the straight line used to scan that edge point are acquired and their average gray value is calculated; likewise, the gray values of the ten pixels after the edge point on the same straight line are acquired and their average gray value is calculated.
The ratio of the average gray value of the ten pixels before the edge point to the average gray value of the ten pixels after it is then calculated. If this ratio satisfies the fixed multiple difference and the difference between the two average gray values is greater than the preset difference, the checkerboard edge point is determined to be a real edge point; otherwise, the checkerboard edge point is determined to be a false edge point and is deleted. The fixed multiple difference and the preset difference are set from experience and image sampling.
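A sketch of this verification for one candidate point on a scan line follows; the fixed multiple and the preset difference are illustrative placeholder values (the patent sets them from experience and image sampling), and the function name is an assumption.

```python
import numpy as np

def is_real_edge_point(scan_profile, idx, multiple=2.0, min_diff=40):
    """Check one candidate edge point on a scan line.

    scan_profile: 1-D array of gray values along the scan line
    idx:          index of the candidate edge point on that line
    multiple:     illustrative fixed multiple difference
    min_diff:     illustrative preset gray-value difference
    """
    if idx < 10 or idx + 10 >= len(scan_profile):
        return False                           # not enough context on either side
    before = float(np.mean(scan_profile[idx - 10:idx]))       # average of the ten pixels before
    after = float(np.mean(scan_profile[idx + 1:idx + 11]))    # average of the ten pixels after
    lo, hi = sorted((before, after))
    ratio_ok = lo > 0 and hi / lo >= multiple  # fixed multiple difference
    diff_ok = (hi - lo) > min_diff             # preset gray-value difference
    return ratio_ok and diff_ok                # otherwise the point is deleted as a false edge
```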
Corresponding to the above method embodiment, the present application further provides a measuring apparatus based on multiple cameras, applied to a multiple-camera detection device, where the multiple-camera detection device includes at least two cameras, and the acquisition regions of adjacent cameras in the at least two cameras are partially overlapped, as shown in fig. 14, the apparatus includes:
the acquisition module 30 is used for placing the chessboard grid calibration plate on the objective table and acquiring the acquired images of all the cameras;
the fitting module 31 is configured to obtain a central point in each acquired image, where the central point is used to determine a checkerboard horizontal straight line and a checkerboard longitudinal straight line, and perform straight line fitting with the central point as a center to obtain a horizontal straight line and a longitudinal straight line of each acquired image;
the angle module 32 is configured to determine a horizontal angle and a vertical angle of the checkerboard in each acquired image according to a horizontal straight line and a vertical straight line of each acquired image;
the equivalent module 33 is configured to calculate the horizontal pixel equivalent and the vertical pixel equivalent of each acquired image corresponding to the camera according to the horizontal line, the vertical line, the horizontal angle, and the vertical angle of each acquired image and a physical width of a checkerboard input in advance;
the region module 34 is configured to obtain image coordinates and physical coordinates of the characteristic polygonal pattern in each of the acquired images, and analyze the image coordinates, the physical coordinates, the horizontal pixel equivalents, the vertical pixel equivalents, the horizontal angles, and the vertical angles of the characteristic polygonal pattern to obtain a maximum region of interest including all the checkerboard corner points in each of the acquired images;
the offset module 35 is configured to acquire all checkerboard corner points in the maximum region of interest for each acquired image, generate a corresponding standard grid according to all the checkerboard corner points, and calculate an offset of each checkerboard corner point;
a calibration module 36, configured to calculate, for each acquired image, a conversion relationship between an image coordinate and a physical coordinate of a camera corresponding to the acquired image by a weighted least square method and bilinear interpolation according to the image coordinate and the offset of each checkerboard corner point;
the conversion module 37 is configured to select any one of the cameras as a reference camera, and perform calculation according to physical coordinates of checkerboard corner points of an overlapping portion between the acquired images to obtain a conversion relationship between physical coordinates of the reference camera and physical coordinates of each of the other cameras;
and the measuring module 38 is used for replacing the checkerboard calibration board with a measuring target and measuring the measuring target according to the image-physical coordinate conversion relation of each camera and the conversion relation between the physical coordinates of the reference camera and the physical coordinates of each other camera.
For the specific description of the embodiment of the apparatus, reference may be made to the above-mentioned method embodiment and its extended embodiments, which are not described herein again.
Corresponding to the multi-camera based measurement method in fig. 1, the present application further provides a computer device 400, as shown in fig. 15, the device includes a memory 401, a processor 402, and a computer program stored in the memory 401 and executable on the processor 402, wherein the processor 402 implements the multi-camera based measurement method when executing the computer program.
Specifically, the memory 401 and the processor 402 can be a general-purpose memory and processor, which are not limited here; when the processor 402 runs the computer program stored in the memory 401, the multi-camera-based measurement method can be executed, thereby addressing the problem in the prior art of how to improve the efficiency and stability of multi-camera calibration.
Corresponding to the multi-camera based measurement method in fig. 1, the present application further provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the multi-camera based measurement method.
Specifically, the storage medium can be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the multi-camera-based measurement method can be executed, thereby addressing the problem in the prior art of how to improve the efficiency and stability of multi-camera calibration. In the method, the central point used for determining the transverse straight line and the longitudinal straight line of the checkerboard in each acquired image is taken as a center, and straight line fitting is performed to obtain the transverse straight line and the longitudinal straight line; the transverse angle and the longitudinal angle of the checkerboard in the acquired image are determined from the transverse straight line and the longitudinal straight line, and the transverse pixel equivalent and the longitudinal pixel equivalent of each camera are then calculated; the maximum region of interest in the acquired image is drawn from the image coordinates and the physical coordinates of the characteristic polygon pattern in the image together with the transverse pixel equivalent, the longitudinal pixel equivalent, the transverse angle and the longitudinal angle; all corner points in the maximum region of interest are found, their offsets with respect to a standard grid are analyzed, and the image-physical coordinate conversion relation of the camera is further calculated by a weighted least square method and bilinear interpolation; the physical coordinate conversion relation between cameras is determined from the overlapping parts of the acquired images of adjacent cameras, automatic calibration of the multiple cameras is completed in combination with the image-physical coordinate conversion relation of each camera, and the calibrated cameras are used to measure a target. The multi-camera-based measurement method provided by the embodiments of the present application can perform fully automatic calibration on large-scale detection equipment with multiple industrial cameras, improves the efficiency and stability of multi-camera calibration, and can save a large amount of labor cost in the case where the image acquisition equipment of the machine platform is adjusted many times and therefore needs to be calibrated many times.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are all intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A measurement method based on multiple cameras is applied to a multi-camera detection device, the multi-camera detection device comprises at least two cameras, and the acquisition areas of the adjacent cameras in the at least two cameras are partially overlapped, the method comprises the following steps:
placing the chessboard grid calibration plate on an objective table to obtain the collected images of all cameras;
acquiring a central point for determining a checkerboard transverse straight line and a checkerboard longitudinal straight line in each acquired image, and performing straight line fitting by taking the central point as a center to obtain the transverse straight line and the longitudinal straight line of each acquired image;
determining the transverse angle and the longitudinal angle of the checkerboard in each acquired image according to the transverse straight line and the longitudinal straight line of each acquired image;
calculating the horizontal pixel equivalent and the vertical pixel equivalent of the camera corresponding to each acquired image according to the horizontal straight line, the vertical straight line, the horizontal angle and the vertical angle of each acquired image and the physical width of a checkerboard which is input in advance;
acquiring image coordinates and physical coordinates of the characteristic polygonal pattern in each acquired image, and analyzing according to the image coordinates, the physical coordinates, the transverse pixel equivalent, the longitudinal pixel equivalent, the transverse angle and the longitudinal angle of the characteristic polygonal pattern to obtain a maximum interesting area containing all checkerboard corner points in each acquired image;
acquiring all checkerboard angular points in the maximum region of interest aiming at each acquired image, generating corresponding standard grids according to all the checkerboard angular points, and calculating the offset of each checkerboard angular point;
aiming at each collected image, calculating the image-physical coordinate conversion relation of a camera corresponding to the collected image according to the image coordinate and the offset of each checkerboard corner point;
selecting any camera as a reference camera, and calculating according to the physical coordinates of the checkerboard corner points of the overlapped part of the collected images to obtain the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras;
and replacing the checkerboard calibration board with a measurement target, and measuring the measurement target according to the image-physical coordinate conversion relation of each camera and the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras.
2. The method of claim 1, wherein the obtaining of the center points of the cross straight lines and the longitudinal straight lines for determining the checkerboard pattern in each of the captured images and fitting the straight lines with the center points as the center to obtain the cross straight lines and the longitudinal straight lines in each of the captured images comprises:
determining the central point of the transverse straight line and the longitudinal straight line of the checkerboard in the image aiming at each collected image;
drawing a transverse straight line interested area and a longitudinal straight line interested area in each acquired image by taking the central point for determining the transverse straight line and the longitudinal straight line of the checkerboard in each acquired image as a center according to the central point for determining the transverse straight line and the longitudinal straight line of the checkerboard, the pixel width of one checkerboard, the pixel specification of the transverse straight line interested area and the pixel specification of the longitudinal straight line interested area which are input in advance in each acquired image;
and aiming at each acquired image, respectively acquiring checkerboard edge points in the transverse straight line interested region and checkerboard edge points in the longitudinal straight line interested region, fitting transverse straight lines according to the checkerboard edge points in the transverse straight line interested region, and fitting longitudinal straight lines according to the checkerboard edge points in the longitudinal straight line interested region to obtain the transverse straight lines and the longitudinal straight lines of each acquired image.
3. The method of claim 2, wherein the obtaining, for each of the acquired images, checkerboard edge points in the transverse straight line region of interest and checkerboard edge points in the longitudinal straight line region of interest, respectively, and fitting transverse straight lines according to the checkerboard edge points in the transverse straight line region of interest and fitting longitudinal straight lines according to the checkerboard edge points in the longitudinal straight line region of interest to obtain the transverse straight lines and the longitudinal straight lines of each of the acquired images comprises:
aiming at each collected image, respectively detecting the edge points of the checkerboards at fixed intervals from the starting point to the ending point along the scanning direction according to the starting point, the ending point, the pixel specification and the scanning direction of the transverse straight line interesting area and the starting point, the ending point, the pixel specification and the scanning direction of the longitudinal straight line interesting area to obtain the edge points of the checkerboards in the transverse straight line interesting area and the edge points of the checkerboards in the longitudinal straight line interesting area;
aiming at each collected image, respectively carrying out linear fitting on a transverse linear interesting area and a longitudinal linear interesting area through random consistency sampling to obtain a plurality of lines to be selected;
refitting a first target straight line in the transverse straight line interesting region by a weighted least square method to obtain the transverse straight line, and refitting a second target straight line in the longitudinal straight line interesting region to obtain the longitudinal straight line; wherein the first target straight line is the straight line to be selected, among the straight lines to be selected in the transverse straight line interesting region, that has the largest number of checkerboard edge points whose distances to the straight line are within a preset pixel error range; and the second target straight line is the straight line to be selected, among the straight lines to be selected in the longitudinal straight line interesting region, that has the largest number of checkerboard edge points whose distances to the straight line are within the preset pixel error range.
4. The method of claim 1, wherein calculating the horizontal pixel equivalent and the vertical pixel equivalent of the corresponding camera for each captured image based on the horizontal line, the vertical line, the horizontal angle, and the vertical angle of each captured image comprises:
determining a transverse reference straight line of each acquired image at a preset distance from the transverse straight line and a longitudinal reference straight line of each acquired image at a preset distance from the longitudinal straight line according to the transverse straight line, the longitudinal straight line, the transverse angle and the longitudinal angle of each acquired image; the preset distance is half of the diagonal line of a single grid in the checkerboard;
and for each collected image, calculating the horizontal pixel equivalent of the camera according to the pixel widths among all the checkerboard edge points on the horizontal reference straight line and the physical width of one checkerboard input in advance, and calculating the vertical pixel equivalent of the camera according to the pixel widths among all the checkerboard edge points on the vertical reference straight line and the physical width of one checkerboard input in advance.
5. The method of claim 1, wherein the obtaining image coordinates and physical coordinates of the characteristic polygon pattern in each of the captured images, and analyzing according to the image coordinates, the physical coordinates, the horizontal pixel equivalents, the vertical pixel equivalents, the horizontal angles, and the vertical angles of the characteristic polygon pattern to obtain a maximum region of interest including all the checkerboard corner points in each of the captured images comprises:
matching each acquired image according to the characteristic polygon pattern template to obtain image coordinates and physical coordinates of the characteristic polygon pattern in the acquired image;
determining a first line of a checkerboard in each acquired image according to the image coordinates and the physical coordinates of the characteristic polygonal patterns in the acquired image, and determining the transverse full state of the checkerboard in the acquired image by combining the physical coordinates, the transverse pixel equivalent and the transverse angle of the characteristic polygonal patterns and the preset physical width of a checkerboard calibration board; the horizontal full state comprises horizontal full, horizontal left side not full and horizontal right side not full;
determining an undetermined upper edge line of a maximum interest area in a first row of the checkerboards in the acquired image according to the transverse full state of the acquired image, and confirming all checkerboard edge points on the undetermined upper edge line;
performing adaptive movement on end points on two sides of the undetermined upper edge line, and determining an undetermined upper left corner point and an undetermined upper right corner point of the maximum interest area of the collected image;
determining an undetermined left lower angular point and an undetermined right lower angular point of a maximum interest area of the acquired image according to the undetermined left upper angular point, the undetermined right upper angular point, the longitudinal angle, the longitudinal pixel equivalent and the physical height of the chessboard grid calibration board of the acquired image;
judging whether an undetermined lower left corner point and an undetermined lower right corner point of the maximum interest area of the collected image exceed the range of the collected image;
if they exceed the range of the collected image, moving the undetermined left lower corner point and the undetermined right lower corner point inwards by taking the width of the checkerboard as a unit according to the transverse angle and the longitudinal angle until the undetermined left lower corner point and the undetermined right lower corner point are within the range of the collected image and on the same checkerboard row, determining the undetermined left lower corner point after the movement is completed as a target left lower corner point of the maximum interest area of the collected image, and determining the undetermined right lower corner point after the movement is completed as a target right lower corner point of the maximum interest area of the collected image;
and adjusting the undetermined upper left corner point and the undetermined upper right corner point according to the target lower left corner point and the target lower right corner point of the maximum interest region of the collected image to obtain the maximum interest region in the collected image.
6. The method of claim 5, wherein said matching, for each captured image, against a characteristic polygon pattern template to obtain image coordinates and physical coordinates of a characteristic polygon pattern in the captured image comprises:
generating a characteristic polygon pattern template according to a preset characteristic polygon pattern;
for each collected image, carrying out template matching on the collected image according to the characteristic polygonal pattern template to obtain image coordinates and a serial number of the characteristic polygonal pattern in the collected image;
and obtaining the physical coordinates of the characteristic polygon patterns according to the serial numbers of the characteristic polygon patterns in the collected images and a pre-input physical coordinate comparison table of the characteristic polygon patterns on the chessboard pattern calibration board.
7. The method of claim 1, wherein the obtaining all tessellated corner points within the largest region of interest for each acquired image, and generating corresponding standard meshes from the all tessellated corner points, calculating an offset for each tessellated corner point, comprises:
for each acquired image, dividing a maximum region of interest in the acquired image into a plurality of interested subareas from left to right and from top to bottom according to the transverse pixel equivalent, the longitudinal pixel equivalent and the physical width of the checkerboard of the acquired image; the size of the interesting subarea is the pixel size of a checkerboard in the acquired image;
for each interested partition, performing straight line fitting of a transverse straight line and a longitudinal straight line according to the checkerboard edge points in the interested partition to obtain the transverse straight line and the longitudinal straight line of each interested partition, and taking the intersection point of the transverse straight line and the longitudinal straight line of each interested partition as the checkerboard corner point of the maximum interested area in the acquired image;
and generating corresponding standard grids according to all the checkerboard corner points of the largest interesting area in the collected image, and calculating the offset of each checkerboard corner point.
8. The method of any of claims 1-7, further comprising:
calculating the average gray value of the front ten pixels in the preset direction of each checkerboard edge point and the average gray value of the back ten pixels in the preset direction of each checkerboard edge point; the preset direction is the scanning line direction when the edge point of the checkerboard is obtained;
judging whether the average gray value of the front ten pixels in the preset direction of the edge point of the checkerboard and the average gray value of the rear ten pixels in the preset direction meet a fixed multiple difference and the difference is larger than a preset difference;
if the average gray value of the front ten pixels in the preset direction of the edge point of the checkerboard and the average gray value of the rear ten pixels in the preset direction meet a fixed multiple difference and the difference is greater than a preset difference, determining the edge point of the checkerboard as a real edge point;
and if the average gray value of the front ten pixels in the preset direction of the checkerboard edge point and the average gray value of the rear ten pixels in the preset direction do not meet the fixed multiple difference and the difference is greater than the preset difference, determining the checkerboard edge point as a false edge point, and deleting the checkerboard edge point.
9. A measuring device based on multiple cameras is characterized by being applied to a multi-camera detection device, wherein the multi-camera detection device comprises at least two cameras, the acquisition areas of the adjacent cameras in the at least two cameras are partially overlapped, and the device comprises:
the acquisition module is used for placing the chessboard grid calibration plate on the objective table and acquiring the acquired images of all the cameras;
the fitting module is used for acquiring a central point used for determining the transverse straight line and the longitudinal straight line of the checkerboard in each acquired image, and performing straight line fitting by taking the central point as a center to obtain the transverse straight line and the longitudinal straight line of each acquired image;
the angle module is used for determining the transverse angle and the longitudinal angle of the checkerboard in each acquired image according to the transverse straight line and the longitudinal straight line of each acquired image;
the equivalent module is used for calculating the transverse pixel equivalent and the longitudinal pixel equivalent of the camera corresponding to each acquired image according to the transverse straight line, the longitudinal straight line, the transverse angle and the longitudinal angle of each acquired image and the physical width of a checkerboard which is input in advance;
the region module is used for acquiring image coordinates and physical coordinates of the characteristic polygonal pattern in each acquired image, and analyzing according to the image coordinates, the physical coordinates, the horizontal pixel equivalent, the vertical pixel equivalent, the horizontal angle and the vertical angle of the characteristic polygonal pattern to obtain the maximum interesting region containing all the checkerboard corner points in each acquired image;
the offset module is used for acquiring all the checkerboard angular points in the maximum region of interest aiming at each acquired image, generating corresponding standard grids according to all the checkerboard angular points and calculating the offset of each checkerboard angular point;
the calibration module is used for calculating the conversion relation between the image coordinate and the physical coordinate of the camera corresponding to each collected image according to the image coordinate and the offset of each checkerboard corner point aiming at each collected image;
the conversion module is used for selecting any camera as a reference camera and calculating according to the physical coordinates of the checkerboard corner points of the overlapped part of the collected images to obtain the conversion relation between the physical coordinates of the reference camera and the physical coordinates of other cameras;
and the measuring module is used for replacing the chessboard grid calibration plate with a measuring target and measuring the measuring target according to the image-physical coordinate conversion relation of each camera and the conversion relation between the physical coordinate of the reference camera and the physical coordinate of each other camera.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1 to 8.
CN202111065772.XA 2021-09-13 2021-09-13 Multi-camera-based measurement method and device and storage medium Active CN113516721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111065772.XA CN113516721B (en) 2021-09-13 2021-09-13 Multi-camera-based measurement method and device and storage medium


Publications (2)

Publication Number Publication Date
CN113516721A CN113516721A (en) 2021-10-19
CN113516721B true CN113516721B (en) 2021-11-12

Family

ID=78063304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111065772.XA Active CN113516721B (en) 2021-09-13 2021-09-13 Multi-camera-based measurement method and device and storage medium

Country Status (1)

Country Link
CN (1) CN113516721B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116634134B (en) * 2023-05-19 2024-01-30 中科慧远视觉技术(洛阳)有限公司 Imaging system calibration method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805935A (en) * 2018-05-02 2018-11-13 南京大学 It is a kind of based on orthogonal pixel equivalent than line-scan digital camera distortion correction method
CN110458898A (en) * 2019-08-15 2019-11-15 北京迈格威科技有限公司 Camera calibration plate, nominal data acquisition method, distortion correction method and device
CN112614146A (en) * 2020-12-21 2021-04-06 广东奥普特科技股份有限公司 Method and device for judging chessboard calibration corner points and computer readable storage medium
CN112797900A (en) * 2021-04-07 2021-05-14 中科慧远视觉技术(北京)有限公司 Multi-camera plate size measuring method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PT2742484T (en) * 2011-07-25 2017-01-02 Univ De Coimbra Method and apparatus for automatic camera calibration using one or more images of a checkerboard pattern


Also Published As

Publication number Publication date
CN113516721A (en) 2021-10-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant