CN116385558A - Calibration method, device and equipment for camera parameters

Calibration method, device and equipment for camera parameters

Info

Publication number
CN116385558A
Authority
CN
China
Prior art keywords
target
camera
focal length
parameter
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310319505.3A
Other languages
Chinese (zh)
Inventor
周杨
陈元吉
邓志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikrobot Co Ltd filed Critical Hangzhou Hikrobot Co Ltd
Priority to CN202310319505.3A priority Critical patent/CN116385558A/en
Publication of CN116385558A publication Critical patent/CN116385558A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a method, a device, and equipment for calibrating camera parameters. The method includes: determining pixel coordinates of calibration points in an image coordinate system based on a calibration two-dimensional image, and determining physical coordinates of the calibration points in a system coordinate system based on the positional relationship between an optical identifier and the calibration points; generating a plurality of coordinate point pairs corresponding to the plurality of calibration points; configuring an initial focal length value for the camera focal length, and determining a target parameter value corresponding to a parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the plurality of coordinate point pairs; determining a target focal length value corresponding to the camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value; and calibrating the target focal length value and the target parameter value for the two-dimensional camera. The technical scheme of the application simplifies the calibration operation, shortens the calibration time, and saves calibration labor cost.

Description

Calibration method, device and equipment for camera parameters
Technical Field
The application relates to the technical field of vision, in particular to a method, a device and equipment for calibrating camera parameters.
Background
In the field of vision-based multidimensional information measurement and identification, multiple cameras can be used to acquire information of different dimensions about an object, and a time-domain correlation prediction technique can then bind the identification information from those cameras for the same object to obtain its complete information. For example, in a parcel volume-measurement and optical-symbol identification application of a logistics system, a three-dimensional camera may be used for volume measurement and parcel sorting, several small-field-of-view two-dimensional cameras may be used for optical symbol identification of parcels, and a large-field-of-view two-dimensional camera may be used to obtain a panoramic view of the parcels. The cameras are then associated in three-dimensional position and, combined with the time-domain correlation prediction technique, the volume value, category information, optical symbol information, panorama, and other information of each parcel can be acquired, supporting subsequent operations such as piece counting, loss prevention, parcel rechecking, and classified loading.
To associate the three-dimensional positions of multiple cameras, a camera internal parameter and a camera external parameter must be calibrated for each camera, and the three-dimensional positions of the cameras are then associated based on these parameters. However, the related art offers no reasonable calibration mode for each camera's internal and external parameters: the calibration operation is complex, calibration takes a long time, and the labor cost of calibration is high.
Disclosure of Invention
The application provides a calibration method for camera parameters. A calibration object is placed on a measurement object, and the calibration object includes an optical identifier and a plurality of calibration points. The method includes the following steps:
acquiring a calibrated two-dimensional image aiming at the measuring object through a two-dimensional camera, determining pixel coordinates of a calibration point under an image coordinate system based on the calibrated two-dimensional image, and determining physical coordinates of the calibration point under a system coordinate system based on the position relation between the optical identifier and the calibration point;
generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, wherein the coordinate point pairs corresponding to each calibration point comprise pixel coordinates and physical coordinates corresponding to the calibration point;
Configuring an initial focal length value for a focal length of a camera, and determining a target parameter value corresponding to a parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs;
determining a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value;
calibrating the target focal length value and the target parameter value for the two-dimensional camera.
The application provides a calibration device for camera parameters. A calibration object is placed on a measurement object, and the calibration object includes an optical identifier and a plurality of calibration points. The device includes:
the acquisition module is used for acquiring a calibrated two-dimensional image aiming at the measuring object through a two-dimensional camera, determining pixel coordinates of a calibration point under an image coordinate system based on the calibrated two-dimensional image, and determining physical coordinates of the calibration point under a system coordinate system based on the position relation between the optical identifier and the calibration point;
the generation module is used for generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, and the coordinate point pairs corresponding to each calibration point comprise pixel coordinates and physical coordinates corresponding to the calibration point;
The determining module is used for configuring an initial focal length value for the focal length of the camera, and determining a target parameter value corresponding to a parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs; determining a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value;
and the calibration module is used for calibrating the target focal length value and the target parameter value for the two-dimensional camera.
The application provides an electronic device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is used for executing machine executable instructions to realize the calibration method of the camera parameters.
According to the above technical scheme, in the embodiments of the application, camera calibration can be performed simply by laying a calibration object (such as a calibration cloth or calibration board) carrying an optical identifier, which effectively reduces the manufacturing and transport costs of the calibration object. A calibration two-dimensional image of the measurement object is acquired by the two-dimensional camera and, combined with the vertical height between the two-dimensional camera and the measurement object (i.e., the mounting height of the two-dimensional camera), the camera internal parameter and camera external parameter of the two-dimensional camera can be computed and calibrated for it, greatly improving implementation efficiency and saving labor cost. Associating each camera with the system coordinate system decouples the cameras from one another and improves the scalability of camera calibration. Configuring an initial focal length value for the camera focal length and determining, on that basis, the target parameter value of the parameter to be calibrated from the plurality of coordinate point pairs reduces the amount of computation, saves computing resources, and increases calibration speed. This camera calibration mode simplifies the calibration operation, shortens the calibration time, and saves calibration labor cost.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them.
FIG. 1 is a flow chart of a method for calibrating camera parameters in one embodiment of the present application;
FIG. 2 is a schematic view of an application scenario in one embodiment of the present application;
FIG. 3 is a schematic diagram of a calibration process of a three-dimensional camera in one embodiment of the present application;
FIG. 4 is a schematic diagram of a calibration process of a two-dimensional camera in one embodiment of the present application;
FIG. 5 is a flow chart of a method for calibrating camera parameters in one embodiment of the present application;
FIG. 6 is a schematic structural diagram of a calibration device for camera parameters in one embodiment of the present application;
FIG. 7 is a hardware configuration diagram of an electronic device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In this embodiment of the present application, a calibration method for camera parameters is provided, where a calibration object (the calibration object may be a calibration cloth or a calibration board) is placed on a measurement object, and the calibration object may include an optical identifier and a plurality of calibration points, as shown in fig. 1, which is a schematic flow chart of the method, and the method may include:
step 101, acquiring a calibrated two-dimensional image aiming at a measurement object through a two-dimensional camera, determining pixel coordinates of a calibration point under an image coordinate system based on the calibrated two-dimensional image, and determining physical coordinates of the calibration point under the system coordinate system based on the position relation between the optical identifier and the calibration point.
In one possible implementation, determining physical coordinates of the calibration point in the system coordinate system based on the positional relationship between the optical identifier and the calibration point may include, but is not limited to: resolving physical coordinates of the target position of the optical identifier in a system coordinate system from the optical identifier; wherein the target location of the optical identifier may include, but is not limited to, an upper left, upper right, lower left, lower right, or center location of the optical identifier. Then, based on the physical coordinates of the target position of the optical identifier in the system coordinate system, the positional relationship between the target position of the optical identifier and the calibration point, the physical coordinates of the calibration point in the system coordinate system can be determined.
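The positional relationship above amounts to a fixed printed offset: once the identifier's target position is decoded, the calibration point's physical coordinates follow by vector addition. A minimal sketch (function and parameter names are illustrative, not taken from the patent):

```python
def calibration_point_physical(identifier_xy, offset_xy):
    """Return a calibration point's physical coordinates in the system
    coordinate system.

    identifier_xy: (x, y) of the identifier's target position (e.g. its
        center), decoded from the optical identifier itself.
    offset_xy: known (dx, dy) from that target position to the calibration
        point, fixed by the printed layout of the calibration cloth.
    """
    return (identifier_xy[0] + offset_xy[0], identifier_xy[1] + offset_xy[1])
```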
Step 102, generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, wherein for each coordinate point pair corresponding to each calibration point, the coordinate point pair can include a pixel coordinate and a physical coordinate corresponding to the calibration point.
Step 103, configuring an initial focal length value for the focal length of the camera, and determining a target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs.
Illustratively, the parameters to be calibrated may include, but are not limited to, a camera center, a distortion coefficient, a rotation parameter, and a translation parameter, and determining, based on the initial focal length value and the plurality of coordinate point pairs, a target parameter value corresponding to the parameters to be calibrated of the two-dimensional camera may include, but is not limited to: configuring an initial center value for the center of the camera, and configuring an initial coefficient value for the distortion coefficient; and determining a rotation parameter initial value corresponding to the rotation parameter and a translation parameter initial value corresponding to the translation parameter based on the initial focal length value, the initial center value, the initial coefficient value and the coordinate point pairs. And optimizing (such as nonlinear optimization) the initial center value, the initial coefficient value, the rotation parameter initial value and the translation parameter initial value to obtain a target center value corresponding to the camera center, a target coefficient value corresponding to the distortion coefficient, a rotation parameter target value corresponding to the rotation parameter and a translation parameter target value corresponding to the translation parameter.
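The optimization in this step minimizes reprojection error over the coordinate point pairs. The sketch below shows only the residual that such an optimization would minimize, under a deliberately simplified model (planar calibration points at Z=0, rotation about the Z axis only, distortion omitted); all names are illustrative, and a real implementation would feed these residuals into a nonlinear least-squares solver:

```python
import math

def reprojection_residuals(point_pairs, f, cx, cy, theta_z, tx, ty, tz):
    """Residuals between observed pixel coordinates and the projection of
    the physical (planar, Z=0) coordinates under a simplified pinhole model:
    rotation about Z by theta_z, translation (tx, ty, tz), focal length f,
    camera center (cx, cy). Distortion is omitted for the sketch."""
    res = []
    for (u_obs, v_obs), (x, y) in point_pairs:
        # rigid transform of the physical point into the camera frame
        xc = math.cos(theta_z) * x - math.sin(theta_z) * y + tx
        yc = math.sin(theta_z) * x + math.cos(theta_z) * y + ty
        zc = tz
        # pinhole projection to pixel coordinates
        u = f * xc / zc + cx
        v = f * yc / zc + cy
        res.append(u - u_obs)
        res.append(v - v_obs)
    return res
```

With consistent synthetic data the residuals are zero, which is what the solver drives them toward on real data.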
Step 104, determining a target focal length value corresponding to the focal length of the two-dimensional camera based on the target parameter value corresponding to the parameter to be calibrated, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value.
Illustratively, determining the target focal length value corresponding to the camera focal length of the two-dimensional camera based on the target parameter value corresponding to the parameter to be calibrated, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value may include, but is not limited to: and determining a candidate focal length value corresponding to the focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measured object and the initial focal length value. If the difference between the candidate focal length value and the initial focal length value is smaller than a preset threshold value, the candidate focal length value can be determined as a target focal length value corresponding to the focal length of the camera; otherwise, if the difference between the candidate focal length value and the initial focal length value is not smaller than the preset threshold, determining the candidate focal length value as the initial focal length value, and returning to execute the operation of determining the target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs.
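Steps 103 and 104 together form a fixed-point iteration: fit the remaining parameters at a fixed focal length, re-derive the focal length from them, and stop when the change falls below the threshold. A sketch of that control flow with the two inner computations abstracted as callables (a hypothetical interface, not the patent's):

```python
def calibrate_focal_length(f_init, fit_params, refine_focal,
                           tol=1e-3, max_iter=50):
    """Alternate between fitting the parameters to be calibrated at a fixed
    focal length (step 103) and deriving a candidate focal length from them
    (step 104) until the focal length change is below tol.

    fit_params(f)           -> target parameter values given focal length f
    refine_focal(params, f) -> candidate focal length value
    """
    f = f_init
    for _ in range(max_iter):
        params = fit_params(f)
        f_new = refine_focal(params, f)
        if abs(f_new - f) < tol:
            return f_new, params
        f = f_new
    return f, params
```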
Illustratively, the parameters to be calibrated may include, but are not limited to: the camera center, the distortion coefficient, the rotation parameter and the translation parameter, wherein the translation parameter comprises an X-axis translation parameter, a Y-axis translation parameter and a Z-axis translation parameter; on the basis, based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value, determining a candidate focal length value corresponding to the camera focal length of the two-dimensional camera may include, but is not limited to: and determining an adjustment factor based on the target parameter value corresponding to the vertical height and the Z-axis translation parameter, and adjusting the initial focal length value based on the adjustment factor to obtain a candidate focal length value corresponding to the focal length of the two-dimensional camera.
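The patent states only that the adjustment factor comes from the vertical height and the Z-axis translation target value, not its exact form. One hypothetical reading, for a downward-looking camera: scale the focal length by the ratio of the two, so that the estimated Z translation agrees with the measured mounting height. This formula is an assumption for illustration, not the patent's stated computation:

```python
def candidate_focal_value(f_init, vertical_height, t_z):
    """Hypothetical adjustment: rescale the initial focal length by the
    ratio of the measured vertical height to the Z-axis translation target
    value, so the estimated camera height matches the measured one."""
    adjustment = vertical_height / t_z  # assumed form of the adjustment factor
    return f_init * adjustment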
Step 105, calibrating the target focal length value and the target parameter value for the two-dimensional camera.
By way of example, the target parameter values may include, but are not limited to, a target center value corresponding to the camera center, a target coefficient value corresponding to the distortion coefficient, a rotation parameter target value corresponding to the rotation parameter, and a translation parameter target value corresponding to the translation parameter. Calibrating the target focal length value, the target center value, and the target coefficient value for the two-dimensional camera amounts to calibrating the camera internal parameter for the two-dimensional camera; calibrating the rotation parameter target value and the translation parameter target value amounts to calibrating the camera external parameter for the two-dimensional camera.
In one possible implementation manner, a three-dimensional camera is used for collecting a calibrated three-dimensional point cloud for a measurement object, and a three-dimensional space coordinate corresponding to the calibrated point is determined based on the calibrated three-dimensional point cloud; physical coordinates of the calibration point in the system coordinate system are determined based on the positional relationship between the optical identifier and the calibration point. A plurality of coordinate point pairs corresponding to the plurality of calibration points are generated, and for each coordinate point pair corresponding to the calibration point, the coordinate point pairs can comprise three-dimensional space coordinates and physical coordinates corresponding to the calibration point. Based on the calibrated camera internal parameter of the three-dimensional camera and the plurality of coordinate point pairs, a target parameter value corresponding to the camera external parameter can be determined, and the target parameter value of the camera external parameter is calibrated for the three-dimensional camera, that is, the camera external parameter is calibrated for the three-dimensional camera.
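Determining the camera external parameter from (three-dimensional space coordinate, physical coordinate) point pairs is a rigid-transform fit. Restricted to the planar case (rotation about Z plus in-plane translation) so it stays in closed form, a sketch of such a fit via 2D Procrustes alignment (not the patent's actual solver):

```python
import math

def fit_planar_rigid(pairs):
    """Fit theta, (tx, ty) such that rotating each point a by theta and
    translating by (tx, ty) best matches its paired point b, using the
    closed-form 2D Procrustes solution: centroid-align both point sets,
    then recover theta from the cross/dot product sums."""
    n = len(pairs)
    ax = sum(a[0] for a, _ in pairs) / n
    ay = sum(a[1] for a, _ in pairs) / n
    bx = sum(b[0] for _, b in pairs) / n
    by = sum(b[1] for _, b in pairs) / n
    s_dot = s_cross = 0.0
    for (x, y), (u, v) in pairs:
        xc, yc, uc, vc = x - ax, y - ay, u - bx, v - by
        s_dot += xc * uc + yc * vc
        s_cross += xc * vc - yc * uc
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = bx - (c * ax - s * ay)
    ty = by - (s * ax + c * ay)
    return theta, tx, ty
```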
In one possible implementation manner, when a target object is placed on a measurement object and the measurement object moves linearly at a uniform speed, a target two-dimensional image of the measurement object can be acquired through a two-dimensional camera, and a target three-dimensional point cloud of the measurement object can be acquired through a three-dimensional camera. The three-dimensional space coordinate corresponding to the target object can be determined based on the target three-dimensional point cloud, and the three-dimensional space coordinate is converted into the initial physical coordinate of the target object under a system coordinate system based on the camera internal parameter and the camera external parameter of the three-dimensional camera; and determining the target physical coordinates of the target object under a system coordinate system based on the movement speed of the measured object, the target duration and the initial physical coordinates. Then, the target physical coordinates can be converted into target pixel coordinates of the target object under an image coordinate system based on the camera internal parameters and the camera external parameters of the two-dimensional camera, and the pixel positions corresponding to the target pixel coordinates are positioned from the target two-dimensional image corresponding to the target duration.
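The prediction step above can be sketched end to end: advance the physical position by uniform linear motion, then project into the 2D camera. The sketch assumes, purely for illustration, that the measurement object moves along the system X axis and the 2D camera looks straight down with no rotation relative to the system axes; real use would apply the calibrated rotation and distortion as well:

```python
def predict_pixel(initial_xy, speed, dt, f, cx, cy, cam_xy, cam_height):
    """Predict the target object's pixel position after dt seconds.

    Simplifications for the sketch: motion at constant speed along the
    system X axis; the 2D camera looks straight down from cam_height
    above the plane, its axes aligned with the system axes."""
    # advance the physical position under uniform linear motion
    x = initial_xy[0] + speed * dt
    y = initial_xy[1]
    # express relative to the camera and apply the pinhole projection
    u = f * (x - cam_xy[0]) / cam_height + cx
    v = f * (y - cam_xy[1]) / cam_height + cy
    return (u, v)
```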
The above technical solutions of the embodiments of the present application are described below with reference to specific application scenarios.
In view of the above findings, this embodiment provides a multi-camera calibration method based on a single frame image. A calibration cloth or calibration board carrying an optical identifier is laid on the measurement object so that it covers the measurement areas of all cameras; it is only necessary to ensure that each camera can see the calibration grid and the optical identifier. The mounting height of each two-dimensional camera (i.e., the vertical height between the two-dimensional camera and the measurement object) is measured and stored in the two-dimensional camera. With the calibration cloth (calibration board) stationary, each two-dimensional camera acquires one frame of calibration two-dimensional image, and the internal parameter calibration and external parameter calibration of the two-dimensional camera can be completed based on that image and the mounting height. In this manner, the calibration object is a piece of calibration cloth (calibration board) that can be printed on site, and only its laying on the measurement object needs to be done manually. On this basis, the two-dimensional camera only needs to know its mounting height and acquire one frame of calibration two-dimensional image of the measurement object to complete internal and external parameter calibration, which effectively reduces the manufacturing and transport costs of the calibration object, greatly improves implementation efficiency, simplifies the calibration operation, shortens the calibration time, and saves calibration labor cost.
In one possible implementation manner, referring to fig. 2, a schematic application scenario of the embodiment of the present application is shown, where a calibration cloth or a calibration board is laid on a measurement object, and the description is given by taking the calibration cloth as an example.
Because time-domain correlation and prediction are needed for objects on the measurement object, together with multi-camera information association, the measurement object may be an object in uniform linear motion, such as a logistics conveyor belt, a cross belt, a robot, or a vehicle moving linearly at a constant speed. The type of the measurement object is not limited, and the measurement object may also be referred to as a measurement system.
The calibration cloth can be laid on the measurement object; its length and width can cover the whole measurement range, and it can be printed in sections and spliced together. The material of the calibration cloth is not limited, and it can be laid statically over the whole measurement range. For example, when the measurement object is a logistics conveyor belt or a cross belt, the calibration cloth may be laid on the conveyor belt or cross belt. When the measurement object is a robot or a vehicle, the calibration cloth is laid on the robot or vehicle and along its movement path, so that the calibration cloth covers the movement range of the robot or vehicle.
The calibration cloth may be a calibration cloth with calibration cells, and the calibration cells may be checkerboard calibration cells or dot calibration cells, and the type of the calibration cells is not limited, and in fig. 2, the checkerboard calibration cells are taken as an example. The calibration points on the calibration cloth may be determined based on the calibration grid, for example, for the checkerboard calibration grid, the intersection points of the black checkerboard and the white checkerboard may be used as the calibration points, and for the dot calibration grid, the circle center may be used as the calibration points, which is merely an example, as long as the calibration points can be determined from the calibration cloth.
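Because the calibration points lie on a regular printed grid, each point's physical coordinates follow directly from its grid indices and the printed cell size. A minimal sketch (names illustrative; the grid origin would in practice be derived from a nearby optical identifier):

```python
def grid_point_physical(col, row, cell_size_mm, origin=(0.0, 0.0)):
    """Physical coordinates of the calibration point at grid index
    (col, row), given the printed cell size and the grid origin in the
    system coordinate system."""
    return (origin[0] + col * cell_size_mm,
            origin[1] + row * cell_size_mm)
```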
The calibration cloth may be a calibration cloth with an optical identifier, and the optical identifier may be a two-dimensional code (the unique position is described by the two-dimensional code), a number (the unique position is described by the number), or other information capable of describing the unique position, and the type of the optical identifier is not limited as long as the unique position can be described by the optical identifier. Since the optical identifier is used to describe the unique location, the physical coordinates of the target location of the optical identifier in the system coordinate system can be resolved from the optical identifier.
For example, a system coordinate system may be constructed in advance; it may also be referred to as a world coordinate system or a physical coordinate system, and it is a coordinate system defined for the measurement object. A certain position of the calibration cloth may be used as the coordinate origin of the system coordinate system. Once the coordinate origin of the system coordinate system is known, the positional relationship between the target position of the optical identifier and the coordinate origin is known, and thus the physical coordinates of the target position of the optical identifier in the system coordinate system are known. Therefore, these physical coordinates can be described by the optical identifier, and the physical coordinates of the target position of the optical identifier in the system coordinate system can be resolved from the optical identifier.
The target location of the optical identifier may include, but is not limited to: the upper left, or upper right, or lower left, or lower right, or center position of the optical identifier, although the above are just a few examples of target positions, the target positions are not limiting. Taking the center position of the optical identifier as an example for explanation, the positional relationship between the center position of the optical identifier and the origin of coordinates can be obtained, and the physical coordinates of the center position of the optical identifier in the system coordinate system can be obtained, so that the physical coordinates of the center position of the optical identifier in the system coordinate system can be described by the optical identifier.
The calibration cloth may be a calibration cloth with one optical identifier or a calibration cloth with a plurality of optical identifiers, and each optical identifier is used to describe the physical coordinates of the center position of the optical identifier under the system coordinate system, and in fig. 2, a plurality of optical identifiers are taken as an example for illustration.
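As an illustrative sketch (the patent does not fix an encoding scheme), suppose each optical identifier's two-dimensional code payload encodes the column and row index of its cell on the calibration cloth; the physical coordinates of its center in the system coordinate system then follow directly. The cell pitch `CELL_SIZE_MM` and the payload format are hypothetical choices for illustration:

```python
# Hypothetical sketch: assume each optical identifier's payload is the
# (column, row) index of its cell on the calibration cloth, and that the
# system coordinate origin is the cloth corner at cell (0, 0).
CELL_SIZE_MM = 50.0  # assumed checkerboard cell pitch

def marker_center_physical(payload, cell_size=CELL_SIZE_MM):
    """Physical (x, y) coordinates, in mm, of the marker's center position
    in the system coordinate system, decoded from its payload."""
    col, row = payload
    return (col * cell_size, row * cell_size)
```

A payload of `(3, 2)` would then place the marker center 150 mm along X and 100 mm along Y from the chosen origin.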
Referring to fig. 2, taking 1 three-dimensional camera and 3 two-dimensional cameras as an example for explanation, the 3 two-dimensional cameras may include 2 two-dimensional code reading cameras and 1 two-dimensional panorama camera, the two-dimensional code reading cameras are used for reading two-dimensional code information or other code information on the object, and the two-dimensional panorama camera is used for collecting panorama images of the object.
In the installation process of the two-dimensional camera, the installation angle and the installation height of the two-dimensional camera are not limited, and the two-dimensional camera can be installed arbitrarily; the same applies to the three-dimensional camera. After the two-dimensional camera is mounted, it is also necessary to measure the vertical height between the two-dimensional camera and the measurement object (i.e., the erection height of the two-dimensional camera), which may be the vertical distance from the camera optical center of the two-dimensional camera to the measurement object. In fig. 2, the vertical height between one two-dimensional code reading camera and the measurement object is h1, the vertical height between the other two-dimensional code reading camera and the measurement object is h2, and the vertical height between the two-dimensional panoramic camera and the measurement object is h3.
For two-dimensional cameras and three-dimensional cameras, the camera internal parameters may include the camera focal length (fx, fy), the camera center (cx, cy), and the distortion coefficients (k1, k2, p1, p2, k3). For a two-dimensional camera, the camera internal parameters are used to convert a point Pc(x, y, z) in the camera coordinate system to a point Pi(x, y) in the image coordinate system, as shown in formula (1), which is the conversion relationship between the camera coordinate system and the image coordinate system. If the calibration accuracy requirement of the two-dimensional camera is not strict, the camera internal parameters of the two-dimensional camera may be simplified, such as fy = fx = f, p1 = 0, p2 = 0, and k3 = 0; in this case, the conversion relationship between the camera coordinate system and the image coordinate system is as shown in formula (2).
Pi(x, y) = [fx, fy, cx, cy][k1, k2, p1, p2, k3] Pc(x, y, z)    formula (1)
Pi(x, y) = [f, f, cx, cy][k1, k2] Pc(x, y, z)    formula (2)
As can be seen from formula (1), the point Pc(x, y, z) in the camera coordinate system can be converted into the point Pi(x, y) in the image coordinate system based on the camera focal length (fx, fy), the camera center (cx, cy), the distortion coefficients (k1, k2, p1, p2, k3), and the like, or the point Pi(x, y) in the image coordinate system can be converted into the point Pc(x, y, z) in the camera coordinate system based on the same parameters. Obviously, based on points Pi(x, y) in the image coordinate system and points Pc(x, y, z) in the camera coordinate system, i.e., a plurality of coordinate point pairs, the camera focal length (fx, fy), the camera center (cx, cy), the distortion coefficients (k1, k2, p1, p2, k3), and the like can also be determined.
As can be seen from formula (2), the point Pc(x, y, z) in the camera coordinate system can be converted into the point Pi(x, y) in the image coordinate system based on the camera focal length f, the camera center (cx, cy), the distortion coefficients (k1, k2), and the like, or conversely. Based on points Pi(x, y) in the image coordinate system and points Pc(x, y, z) in the camera coordinate system, i.e., a plurality of coordinate point pairs, the camera focal length f, the camera center (cx, cy), the distortion coefficients (k1, k2), and the like can also be determined.
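Formulas (1) and (2) are written symbolically; a minimal numeric sketch of the conversion they denote, using the standard pinhole model with radial and tangential distortion (the usual interpretation of (fx, fy), (cx, cy), and (k1, k2, p1, p2, k3)), might look as follows:

```python
def project_point(Pc, f, c, dist):
    """Project a camera-frame point Pc = (x, y, z) to a pixel Pi = (u, v)
    using the standard pinhole + radial/tangential distortion model that
    formula (1) denotes symbolically."""
    fx, fy = f
    cx, cy = c
    k1, k2, p1, p2, k3 = dist
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]                 # normalize by depth
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # radial distortion
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cx, fy * yd + cy                   # apply focal and center
```

With zero distortion, a point on the optical axis projects to the camera center (cx, cy), and a point at (0.1, 0, 1) m with fx = 1000 lands 100 pixels to its right.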
For two-dimensional cameras and three-dimensional cameras, the camera external parameters may include a rotation parameter R (Rx, Ry, Rz) and a translation parameter T (Tx, Ty, Tz), where Rx is the X-axis rotation parameter, Ry is the Y-axis rotation parameter, Rz is the Z-axis rotation parameter, Tx is the X-axis translation parameter, Ty is the Y-axis translation parameter, and Tz is the Z-axis translation parameter. The camera external parameters are used to convert a point Ps(x, y, z) in the system coordinate system into a point Pc(x, y, z) in the camera coordinate system, as shown in formula (3), which is the conversion relationship between the system coordinate system and the camera coordinate system.
Pc(x, y, z) = [R|T] Ps(x, y, z)    formula (3)
As can be seen from formula (3), the point Ps(x, y, z) in the system coordinate system can be converted into the point Pc(x, y, z) in the camera coordinate system based on the camera external parameters, i.e., the rotation parameter R and the translation parameter T, or the point Pc(x, y, z) in the camera coordinate system can be converted into the point Ps(x, y, z) in the system coordinate system based on the same parameters. The camera external parameters, i.e., the rotation parameter R and the translation parameter T, can also be determined based on points Ps(x, y, z) in the system coordinate system and points Pc(x, y, z) in the camera coordinate system, i.e., a plurality of coordinate point pairs.
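The two directions of formula (3) can be sketched in a few lines, with R as a 3x3 rotation matrix and T as a translation vector (numpy is used here for the matrix algebra):

```python
import numpy as np

def system_to_camera(Ps, R, T):
    """Formula (3): Pc = [R|T] Ps, i.e., rotate a system-frame point and
    translate it into the camera frame."""
    return R @ np.asarray(Ps, dtype=float) + T

def camera_to_system(Pc, R, T):
    """Inverse mapping: Ps = R^T (Pc - T)."""
    return R.T @ (np.asarray(Pc, dtype=float) - T)
```

Applying the two functions in sequence returns the original point, which is a quick self-check on any (R, T) pair.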
For the calibration process of the three-dimensional camera, referring to fig. 3, the following steps may be included:
Step 301, a calibration three-dimensional point cloud for the measurement object is collected by the three-dimensional camera, and three-dimensional space coordinates corresponding to the calibration points (a plurality of calibration points can be determined) are determined based on the calibration three-dimensional point cloud.
Illustratively, a calibration cloth with an optical identifier and a calibration grid is laid over the measurement object such that the calibration cloth covers the measurement area of the three-dimensional camera and the three-dimensional camera is able to see the calibration grid and the optical identifier. And in a static state of the calibration cloth, acquiring a calibration three-dimensional point cloud aiming at the measured object through a three-dimensional camera, namely, the calibration three-dimensional point cloud comprises three-dimensional points aiming at the measured object. Since the three-dimensional camera can see the calibration grid, a plurality of calibration points, such as the intersection points of the black checkerboard and the white checkerboard, are determined as the calibration points based on the calibration grid.
Because the number of the calibration points is large, K calibration points can be selected from all the calibration points, and the value of K can be configured according to experience, which means that the calibration operation is completed based on the K groups of coordinate pairs.
In summary, after the calibrated three-dimensional point cloud is obtained, the calibrated three-dimensional point cloud includes three-dimensional space coordinates corresponding to all three-dimensional points, and the three-dimensional space coordinates corresponding to the K calibrated points can be determined based on the calibrated three-dimensional point cloud.
Step 302, determining physical coordinates of the calibration point under a system coordinate system based on a positional relationship between the optical identifier and the calibration point. For example, for each of the K calibration points, the physical coordinates of that calibration point in the system coordinate system are determined based on the positional relationship between the optical identifier and that calibration point.
For example, the physical coordinates of the target location of the optical identifier in the system coordinate system may be resolved from the optical identifier; the target position may be an upper left corner position, an upper right corner position, a lower left corner position, a lower right corner position, or a center position. Then, based on the physical coordinates of the target position of the optical identifier in the system coordinate system, the positional relationship between the target position of the optical identifier and the calibration point, the physical coordinates of the calibration point in the system coordinate system can be determined.
For example, for each of the K calibration points, the optical identifier nearest to the calibration point is selected from all the optical identifiers, and of course, other optical identifiers may be selected, which is not limited thereto. The optical identifier may be a two-dimensional code, a number, or other information capable of describing a unique location, and thus the physical coordinates of the target location of the optical identifier under the system coordinate system are resolved from the optical identifier.
The positional relationship between the target position of the optical identifier and the calibration point is known; for example, the calibration point and the target position of the optical identifier are separated by 4 checkerboard cells, and the calibration point is located on the left side of the target position of the optical identifier. Therefore, the physical coordinates of the calibration point in the system coordinate system can be determined based on the positional relationship between the target position of the optical identifier and the calibration point and the physical coordinates of the target position of the optical identifier in the system coordinate system; the determination manner of the physical coordinates is not limited.
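The worked example above can be sketched as a tiny helper; the 50 mm cell pitch, the axis convention, and the reading of "left" as decreasing X are all hypothetical choices for illustration:

```python
def calibration_point_physical(marker_xy, cells_left, cell_size=50.0):
    """Hypothetical helper for the example in the text: the calibration
    point lies `cells_left` checkerboard cells to the left of the marker's
    target position (assumed to mean decreasing X at constant Y)."""
    mx, my = marker_xy
    return (mx - cells_left * cell_size, my)
```

For a marker center at (200, 100) mm and a calibration point 4 cells to its left, this yields (0, 100) mm.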
Step 303, generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, wherein for each coordinate point pair corresponding to each calibration point, the coordinate point pair may include three-dimensional space coordinates and physical coordinates corresponding to the calibration point.
For example, for each of the K calibration points, in step 301, a three-dimensional space coordinate corresponding to the calibration point may be obtained, and in step 302, a physical coordinate of the calibration point under a system coordinate system may be obtained, so as to obtain a coordinate point pair corresponding to the calibration point, where the coordinate point pair may include the three-dimensional space coordinate and the physical coordinate. Thus, K coordinate point pairs corresponding to the K calibration points can be obtained.
Step 304, based on the calibrated camera internal parameter and the plurality of coordinate point pairs of the three-dimensional camera, a target parameter value corresponding to the camera external parameter can be determined, and the target parameter value of the camera external parameter is calibrated for the three-dimensional camera.
For example, the camera internal parameters of the three-dimensional camera, i.e., the camera focal length (fx, fy), the camera center (cx, cy), and the distortion coefficients (k1, k2, p1, p2, k3), are calibrated in advance when the camera leaves the factory and are not calibrated again; therefore, only the camera external parameters need to be calibrated for the three-dimensional camera. That is, the rotation parameter target values corresponding to the rotation parameter R (Rx, Ry, Rz) are calibrated for the three-dimensional camera, such as the target values of the X-axis, Y-axis, and Z-axis rotation parameters, and the translation parameter target values corresponding to the translation parameter T (Tx, Ty, Tz) are calibrated, such as the target values of the X-axis, Y-axis, and Z-axis translation parameters.
For example, by combining the formula (1) and the formula (3), a conversion relationship shown in the formula (4) can be obtained, and the formula (4) is a conversion relationship between the three-dimensional space coordinate and the system coordinate system.
Pi(x, y, z) = [fx, fy, cx, cy][k1, k2, p1, p2, k3][R|T] Ps(x, y, z)    formula (4)
In formula (4), the camera focal length (fx, fy), the camera center (cx, cy), and the distortion coefficients (k1, k2, p1, p2, k3) are camera internal parameters with known values, while the rotation parameter R and the translation parameter T are camera external parameters with unknown values, i.e., the parameter values to be calibrated. Each coordinate point pair includes a three-dimensional space coordinate (corresponding to Pi) and a physical coordinate in the system coordinate system (corresponding to Ps). After substituting the K coordinate point pairs into formula (4), the rotation parameter target value corresponding to the rotation parameter R and the translation parameter target value corresponding to the translation parameter T can be obtained, and these target values are then calibrated for the three-dimensional camera.
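The patent does not prescribe how the K coordinate point pairs are solved for R and T. For 3D-to-3D correspondences, one standard choice is the SVD-based Kabsch alignment, sketched here with numpy as one possible solver, not the patent's prescribed method:

```python
import numpy as np

def fit_rigid_transform(Ps_pts, Pc_pts):
    """Estimate R, T such that Pc ≈ R @ Ps + T from K 3-D point pairs,
    using the SVD-based Kabsch method."""
    Ps_pts = np.asarray(Ps_pts, dtype=float)
    Pc_pts = np.asarray(Pc_pts, dtype=float)
    cs, cc = Ps_pts.mean(axis=0), Pc_pts.mean(axis=0)
    H = (Ps_pts - cs).T @ (Pc_pts - cc)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = cc - R @ cs
    return R, T
```

Feeding in the K pairs of physical coordinates (Ps) and three-dimensional space coordinates (Pc) recovers the rotation and translation target values directly.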
For the calibration process of the two-dimensional camera, as shown in fig. 4, the following steps may be included:
step 401, a calibration two-dimensional image of a measurement object is acquired by a two-dimensional camera, and pixel coordinates of a calibration point (a plurality of calibration points may be determined) in an image coordinate system are determined based on the calibration two-dimensional image. The calibration two-dimensional image may be an RGB image, or the calibration two-dimensional image may be a gray-scale image.
Illustratively, a calibration cloth with an optical identifier and a calibration grid is laid over the measurement object such that the calibration cloth covers the measurement area of the two-dimensional camera and the two-dimensional camera is able to see the calibration grid and the optical identifier. In a static state of the calibration cloth, a calibration two-dimensional image aiming at the measuring object is acquired through the two-dimensional camera, namely the calibration two-dimensional image comprises pixel coordinates aiming at the measuring object. Since the two-dimensional camera can see the calibration grid, a plurality of calibration points, such as the intersection points of the black checkerboard and the white checkerboard, are determined as the calibration points based on the calibration grid.
Because the number of the calibration points is large, M calibration points can be selected from all the calibration points, and the value of M can be configured according to experience, which means that the calibration operation is completed based on M groups of coordinate pairs.
In summary, after the calibration two-dimensional image is obtained, the pixel coordinates of each calibration point under the image coordinate system can be determined based on the calibration two-dimensional image, so as to obtain the pixel coordinates of the M calibration points under the image coordinate system.
Step 402, determining physical coordinates of the calibration point under a system coordinate system based on a positional relationship between the optical identifier and the calibration point. For example, for each of the M calibration points, the physical coordinates of that calibration point in the system coordinate system are determined based on the positional relationship between the optical identifier and that calibration point.
For example, the physical coordinates of the target location of the optical identifier in the system coordinate system may be resolved from the optical identifier; the target position may be an upper left corner position, an upper right corner position, a lower left corner position, a lower right corner position, or a center position. Then, based on the physical coordinates of the target position of the optical identifier in the system coordinate system, the positional relationship between the target position of the optical identifier and the calibration point, the physical coordinates of the calibration point in the system coordinate system can be determined.
For example, for each of the M calibration points, the optical identifier nearest to the calibration point is selected from all the optical identifiers, and of course, other optical identifiers may be selected, which is not limited thereto. The optical identifier may be a two-dimensional code, a number, or other information capable of describing a unique location, and thus the physical coordinates of the target location of the optical identifier under the system coordinate system are resolved from the optical identifier.
The positional relationship between the target position of the optical identifier and the calibration point is known; for example, the calibration point and the target position of the optical identifier are separated by 4 checkerboard cells, and the calibration point is located on the left side of the target position of the optical identifier. Therefore, the physical coordinates of the calibration point in the system coordinate system can be determined based on the positional relationship between the target position of the optical identifier and the calibration point and the physical coordinates of the target position of the optical identifier in the system coordinate system; the determination manner of the physical coordinates is not limited.
Step 403, generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, where for each coordinate point pair corresponding to a calibration point, the coordinate point pair may include a pixel coordinate and a physical coordinate corresponding to the calibration point.
For example, for each of the M calibration points, in step 401, the pixel coordinates of the calibration point in the image coordinate system may be obtained, and in step 402, the physical coordinates of the calibration point in the system coordinate system may be obtained, so as to obtain a coordinate point pair corresponding to the calibration point, where the coordinate point pair may include the pixel coordinates and the physical coordinates, so as to obtain M coordinate point pairs corresponding to the M calibration points.
Step 404, configuring an initial focal length value (i.e. a fixed focal length value) for the focal length of the camera, and determining a target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs.
For example, when the two-dimensional camera leaves the factory, neither the camera internal parameters nor the camera external parameters are calibrated, so all internal and external parameters other than the camera focal length are taken as the parameters to be calibrated, namely the camera center (cx, cy), the distortion coefficients (k1, k2, p1, p2, k3), the rotation parameter R (Rx, Ry, Rz), and the translation parameter T (Tx, Ty, Tz). On this basis, the following are determined: a target center value corresponding to the camera center (i.e., target center values for cx and cy), target coefficient values corresponding to the distortion coefficients (i.e., target coefficient values for k1, k2, p1, p2, and k3), rotation parameter target values corresponding to the rotation parameter R (i.e., target values for the X-axis, Y-axis, and Z-axis rotation parameters), and translation parameter target values corresponding to the translation parameter T (i.e., target values for the X-axis, Y-axis, and Z-axis translation parameters).
In one possible embodiment, the conversion relationship shown in the formula (5) can be obtained by combining the formula (1) and the formula (3), where the formula (5) is a conversion relationship between the image coordinate system and the system coordinate system.
Pi(x, y) = [fx, fy, cx, cy][k1, k2, p1, p2, k3][R|T] Ps(x, y, 0)    formula (5)
In the formula (5), the focal lengths (fx, fy) of the cameras are initial focal length values, which can be configured empirically, for example, fx=fy=1000, and of course, the initial focal length values can be other values, and the initial focal length values are not limited.
For example, the camera center (cx, cy) and the distortion coefficients (k1, k2, p1, p2, k3) are camera internal parameters with unknown values, and the rotation parameter R and the translation parameter T are camera external parameters with unknown values; these are the parameters to be calibrated. Each coordinate point pair includes a pixel coordinate in the image coordinate system (corresponding to Pi) and a physical coordinate in the system coordinate system (corresponding to Ps). Obviously, after substituting the M coordinate point pairs into formula (5), the target center value corresponding to the camera center (cx, cy), the target coefficient values corresponding to the distortion coefficients (k1, k2, p1, p2, k3), the rotation parameter target values corresponding to the rotation parameter R (Rx, Ry, Rz), and the translation parameter target values corresponding to the translation parameter T (Tx, Ty, Tz) can be obtained.
In another possible embodiment, on the basis of configuring the initial focal length value for the camera focal length, an initial center value may also be configured for the camera center, and initial coefficient values may be configured for the distortion coefficients. For example, the center of the calibration two-dimensional image is taken as the initial center value, that is, half of the horizontal-axis resolution of the calibration two-dimensional image is taken as the initial center value of the camera center cx, and half of the vertical-axis resolution of the calibration two-dimensional image is taken as the initial center value of the camera center cy. The initial coefficient values of the distortion coefficients are configured to 0, i.e., k1 = 0, k2 = 0, p1 = 0, p2 = 0, k3 = 0. In this case, the conversion relationship shown in formula (6) can be obtained by combining formula (1) and formula (3); formula (6) is the conversion relationship between the image coordinate system and the system coordinate system. Since the initial coefficient values of the distortion coefficients are 0, formula (6) omits the distortion coefficients compared with formula (5).
Pi(x, y) = [fx, fy, cx, cy][R|T] Ps(x, y, 0)    formula (6)
In formula (6), the camera focal length (fx, fy) is the initial focal length value, which may be configured empirically, such as fx = fy = 1000. The camera center (cx, cy) is the center of the calibration two-dimensional image and is a known value. The distortion coefficients (k1, k2, p1, p2, k3) are 0 and are known values. The rotation parameter R and the translation parameter T are camera external parameters with unknown values. Each coordinate point pair includes a pixel coordinate in the image coordinate system (corresponding to Pi) and a physical coordinate in the system coordinate system (corresponding to Ps). Obviously, after substituting the M coordinate point pairs into formula (6), the rotation parameter initial values corresponding to the rotation parameter R (Rx, Ry, Rz) and the translation parameter initial values corresponding to the translation parameter T (Tx, Ty, Tz) can be obtained; that is, the rotation parameter initial values and the translation parameter initial values are determined based on the initial focal length value, the initial center value, the initial coefficient values, and the plurality of coordinate point pairs.
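One concrete way to obtain such initial rotation and translation values from a single view of the planar calibration cloth, with the focal length fixed and the distortion set to zero as in formula (6), is to estimate the plane-to-image homography and decompose it against the assumed intrinsic matrix. This is a sketch of that standard construction, not the patent's prescribed solver:

```python
import numpy as np

def homography_dlt(obj_xy, img_uv):
    """Direct linear transform: H maps planar points (X, Y, 1) to pixels (u, v, 1)."""
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_uv):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null vector of A, reshaped
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Initial [R|T] from H = K [r1 r2 t] for a planar target at Z = 0."""
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if lam * M[2, 2] < 0:             # keep the target in front of the camera
        lam = -lam
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)       # project onto the nearest rotation matrix
    return U @ Vt, t
```

With exact synthetic data the decomposition recovers the pose used to generate the image points, which is a convenient sanity check before feeding the values into the subsequent optimization.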
Then, the initial center value, the initial coefficient values, the rotation parameter initial values, and the translation parameter initial values may be optimized (e.g., by nonlinear optimization) to obtain the target center value corresponding to the camera center (cx, cy), the target coefficient values corresponding to the distortion coefficients (k1, k2, p1, p2, k3), the rotation parameter target values corresponding to the rotation parameter R (Rx, Ry, Rz), and the translation parameter target values corresponding to the translation parameter T (Tx, Ty, Tz).
For example, based on the least-squares error of formula (5), a nonlinear optimization method (such as the LM method, i.e., the Levenberg-Marquardt algorithm, a least-squares estimation method for regression parameters in nonlinear regression) is used to jointly optimize the camera center (cx, cy), the distortion coefficients (k1, k2, p1, p2, k3), the rotation parameter R (Rx, Ry, Rz), and the translation parameter T (Tx, Ty, Tz), so as to obtain the target center value corresponding to the camera center, the target coefficient values corresponding to the distortion coefficients, the rotation parameter target values corresponding to the rotation parameter R, and the translation parameter target values corresponding to the translation parameter T; the optimization method is not limited.
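A runnable sketch of such a joint refinement, with the focal length held at its current fixed value and scipy's `least_squares(..., method='lm')` standing in for the unspecified optimizer; the synthetic data, the parameter ordering, and the initial guesses are illustrative assumptions (only k1 and k2 are modeled here, with p1 = p2 = k3 = 0 as in the simplified formula (2)):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F = 1000.0  # camera focal length held fixed at the current (initial) value

def residuals(params, obj_pts, img_pts):
    """Reprojection residuals of formula (5); params packs the parameters
    to be calibrated: [cx, cy, k1, k2, rx, ry, rz, tx, ty, tz]."""
    cx, cy, k1, k2 = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    Pc = obj_pts @ R.T + params[7:10]            # system frame -> camera frame
    x, y = Pc[:, 0] / Pc[:, 2], Pc[:, 1] / Pc[:, 2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2        # radial terms only
    u = F * x * radial + cx
    v = F * y * radial + cy
    return np.concatenate([u - img_pts[:, 0], v - img_pts[:, 1]])

# Synthetic planar calibration points Ps(x, y, 0) and their projections
# under a known ground-truth parameter vector (for demonstration only).
grid = np.array([(i * 0.1 - 0.25, j * 0.1 - 0.25, 0.0)
                 for i in range(6) for j in range(6)])
true_params = np.array([655.0, 492.0, 0.05, 0.0, 0.4, 0.0, 0.0, 0.0, 0.0, 1.5])
img = residuals(true_params, grid, np.zeros((36, 2)))   # residual vs 0 = projection
img = img.reshape(2, 36).T

# Initial guess: image center, zero distortion, coarse pose from the previous step.
x0 = np.array([640.0, 480.0, 0.0, 0.0, 0.35, 0.0, 0.0, 0.0, 0.0, 1.4])
res = least_squares(residuals, x0, args=(grid, img), method='lm')
```

On real data the coarse pose in `x0` would come from the closed-form step with formula (6); the solver then drives the reprojection residuals toward zero.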
For example, regarding the computational feasibility of formula (6): since 3 points determine a plane, and each point provides 2 equations (one for x and one for y) as in formula (6), the 6 parameters of the camera external parameters [R|T] can be solved, meeting the computation requirement. The parameters cx, cy, k1, and k2 are mainly related to distortion; their computation accuracy depends on whether the calibration grid covers enough of the camera field of view and depends little on the number of images. The calibration cloth (calibration board) of this embodiment satisfies exactly this condition, so the parameters can be calibrated from a single frame image.
In summary, on the basis that the camera focal length is the initial focal length value, the target center value corresponding to the camera center, the target coefficient values corresponding to the distortion coefficients, the rotation parameter target values corresponding to the rotation parameter R (Rx, Ry, Rz), and the translation parameter target values corresponding to the translation parameter T (Tx, Ty, Tz) can be obtained.
Step 405, determining a candidate focal length value corresponding to a camera focal length (fx, fy) of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value.
The parameters to be calibrated may include a camera center, a distortion coefficient, a rotation parameter and a translation parameter, and the translation parameter includes an X-axis translation parameter, a Y-axis translation parameter and a Z-axis translation parameter, so that a translation parameter target value corresponding to the Z-axis translation parameter can be obtained. For example, the candidate focal length value may be determined using formula (7), and of course, formula (7) is merely an example, and the determination is not limited thereto.
f′ = f × (h / Tz)    formula (7)
In formula (7), f is used to represent the initial focal length value, i.e., the camera focal length fx and the camera focal length fy both correspond to the initial focal length value; h is used to represent the vertical height between the two-dimensional camera and the measurement object; Tz is used to represent the translation parameter target value corresponding to the Z-axis translation parameter; h/Tz is the adjustment factor; and f′ is used to represent the candidate focal length value, i.e., the value to which both the camera focal length fx and the camera focal length fy correspond.
As an example, based on the principle of triangulation, Tz in the inverse matrix of the camera external parameters [R|T] and the camera focal length f have a nearly linear relationship, where Tz is the height of the camera optical center perpendicular to the plane of the measurement object. Therefore, an initial value of the camera focal length f (i.e., the initial focal length value) can be set arbitrarily; the target center value corresponding to the camera center (cx, cy), the target coefficient values corresponding to the distortion coefficients (k1, k2, p1, p2, k3), the rotation parameter target values corresponding to the rotation parameter R (Rx, Ry, Rz), and the translation parameter target values corresponding to the translation parameter T (Tx, Ty, Tz) are calculated according to formula (5) or formula (6); then, the initial focal length value is corrected according to formula (7) to obtain the candidate focal length value, and the correct camera internal parameters and camera external parameters are obtained by means of iterative updating.
Step 406, determining whether the difference between the candidate focal length value and the initial focal length value is smaller than a preset threshold.
If so, step 407 may be performed, and if not, step 408 may be performed.
For example, after the candidate focal length value is obtained, a difference between the candidate focal length value and the initial focal length value may be calculated, and whether the difference between the candidate focal length value and the initial focal length value is smaller than a preset threshold may be determined, where the preset threshold may be empirically configured, and the value of the preset threshold is not limited.
Step 407, determining the candidate focal length value as a target focal length value corresponding to the focal length of the camera.
In summary, the target focal length value corresponding to the camera focal length (fx, fy), the target center value corresponding to the camera center (cx, cy), the target coefficient value corresponding to the distortion coefficients (k1, k2, p1, p2, k3), the rotation parameter target value corresponding to the rotation parameter R (Rx, Ry, Rz), and the translation parameter target value corresponding to the translation parameter T (Tx, Ty, Tz) can be obtained, so that the camera internal parameters and camera external parameters can be calibrated for the two-dimensional camera.
Step 408, determining the candidate focal length value as the initial focal length value corresponding to the camera focal length, and returning to step 404 to redetermine the target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the updated initial focal length value and the plurality of coordinate point pairs, that is, updating the target parameter value. In this way, the camera focal length (fx, fy), the camera center (cx, cy), the distortion coefficients (k1, k2, p1, p2, k3), the rotation parameter R (Rx, Ry, Rz) and the translation parameter T are updated by iterative updating until the correct camera internal parameters and camera external parameters are obtained, and the camera internal parameters and camera external parameters are calibrated for the two-dimensional camera.
In summary, the camera internal parameters and camera external parameters are calibrated for the two-dimensional camera, and the camera external parameters are calibrated for the three-dimensional camera, thereby completing the camera calibration process. After the camera calibration process is finished, related applications can be executed based on the camera internal parameters and camera external parameters of the two-dimensional camera and the camera internal parameters and camera external parameters of the three-dimensional camera. For example, in a parcel volume measurement and optical symbol reading application of a logistics system, a three-dimensional camera is used for volume measurement and parcel sorting, a plurality of small-field-of-view two-dimensional cameras are used for reading optical symbols on the parcels, and a large-field-of-view two-dimensional camera is used for obtaining a panoramic view of the parcels. Based on the camera internal parameters and camera external parameters of the two-dimensional cameras and those of the three-dimensional camera, the plurality of cameras are associated in three-dimensional position, and the volume value, category information, optical symbol information, panorama and other information of each parcel are acquired in combination with a time-domain association prediction technology, for use in subsequent operations such as piece counting, missed-scan prevention, parcel rechecking and classified loading.
In one possible implementation manner, a target object is placed on a measurement object, and during uniform linear motion of the measurement object, a target two-dimensional image of the measurement object can be acquired through a two-dimensional camera, and a target three-dimensional point cloud of the measurement object can be acquired through a three-dimensional camera.
Then, the three-dimensional space coordinates corresponding to the target object are determined based on the target three-dimensional point cloud, and the three-dimensional space coordinates are converted into the initial physical coordinates of the target object in the system coordinate system based on the camera internal parameters and camera external parameters of the three-dimensional camera. For example, referring to the conversion relation shown in formula (4), formula (4) is the conversion relation between the three-dimensional space coordinates and the system coordinate system; in formula (4), the camera internal parameters and camera external parameters are both known values, and after the three-dimensional space coordinates corresponding to the target object are substituted into formula (4), the initial physical coordinates of the target object in the system coordinate system can be obtained. For example, the three-dimensional camera may recognize the three-dimensional space coordinates Pc(x, y, z) of the target object, and formula (4) may be transformed to obtain the conversion relation shown in formula (8). Formula (8) involves only the camera external parameters of the three-dimensional camera and not its camera internal parameters, so the three-dimensional space coordinates Pc(x, y, z) of the target object can be converted into the initial physical coordinates Ps(x, y, z) of the target object in the system coordinate system based on formula (8).
Ps(x, y, z) = [R|T]⁻¹ · Pc(x, y, z)   Formula (8)
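As an illustrative sketch of formula (8), inverting the rigid transform [R|T] amounts to applying Rᵀ(Pc − T) when R is a rotation matrix and T a translation vector. This is an assumption about the standard convention Pc = R·Ps + T, not text from the patent.

```python
import numpy as np

def camera_to_system(pc, rotation, translation):
    """Formula (8): Ps = [R|T]^-1 * Pc.

    Converts a point from the three-dimensional camera's coordinate
    system into the system coordinate system using only the camera
    external parameters (rotation, translation).
    """
    rotation = np.asarray(rotation, dtype=float)
    translation = np.asarray(translation, dtype=float)
    pc = np.asarray(pc, dtype=float)
    # Inverse of the rigid transform Pc = R @ Ps + T
    return rotation.T @ (pc - translation)
```

No camera internal parameters appear here, matching the observation that formula (8) involves only the external parameters of the three-dimensional camera.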
After the initial physical coordinates of the target object in the system coordinate system are obtained, the physical coordinates of the target object after a target duration (denoted as target physical coordinates) can be predicted; that is, the target object moves to the position corresponding to the target physical coordinates after the target duration (for example, after an interval of 10 seconds). For example, the target physical coordinates of the target object in the system coordinate system are determined based on the movement speed of the measurement object, the target duration and the initial physical coordinates; formula (9) may be used for this determination. In formula (9), Ps(x, y) represents the initial physical coordinates of the target object in the system coordinate system, i.e., z in the initial physical coordinates Ps(x, y, z) is set to 0; Ps'(x, y) represents the target physical coordinates of the target object in the system coordinate system; s represents the movement speed of the measurement object, and the manner of obtaining the movement speed is not limited, for example, it can be obtained by reading a tachometer or measured with a speedometer; t represents the target duration, which can be configured according to requirements.
Ps'(x, y) = s · t + Ps(x, y)   Formula (9)
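Formula (9) is a uniform-motion prediction. A minimal sketch follows, under the assumption that the movement speed is expressed as a 2D velocity vector in the system plane (the patent writes it as a scalar s along the direction of motion):

```python
def predict_target_position(ps_xy, velocity_xy, t):
    """Formula (9): Ps'(x, y) = s * t + Ps(x, y).

    ps_xy       -- initial physical coordinates (x, y), with z set to 0
    velocity_xy -- movement speed of the measurement object as (vx, vy)
    t           -- target duration
    """
    return (ps_xy[0] + velocity_xy[0] * t,
            ps_xy[1] + velocity_xy[1] * t)
```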
After the target physical coordinates of the target object in the system coordinate system are obtained, the target pixel coordinates of the target object after the target duration can be predicted based on the target physical coordinates, so that the target pixel coordinates after the target duration are associated with the target physical coordinates, thereby realizing prediction of the target object. Moreover, the three-dimensional space coordinates may also be associated with the target pixel coordinates and the target physical coordinates, that is, the target pixel coordinates of the target object in the target two-dimensional image and the three-dimensional space coordinates of the target object in the target three-dimensional point cloud may be associated.
In order to predict the target pixel coordinates of the target object after the target duration based on the target physical coordinates, the target physical coordinates may be converted into the target pixel coordinates of the target object in the image coordinate system based on the camera internal parameters and camera external parameters of the two-dimensional camera. For example, referring to the conversion relation shown in formula (5), formula (5) is the conversion relation between the image coordinate system and the system coordinate system; in formula (5), the camera internal parameters and camera external parameters are both known values, and after the target physical coordinates are substituted into formula (5), the target pixel coordinates of the target object in the image coordinate system can be obtained. On this basis, the pixel position corresponding to the target pixel coordinates, that is, the position of the target object in the image coordinate system, can be located from the target two-dimensional image corresponding to the target duration (that is, the two-dimensional image after the target duration). The two-dimensional camera can recognize image information of an object; when the information is bound, the three-dimensional information of the object in the system coordinate system is projected into the image according to the camera external parameters and camera internal parameters of the camera, and then the information is bound.
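The projection step above can be sketched with a standard pinhole model. This illustration is an assumption about the form of formula (5) and omits the distortion coefficients (k1, k2, p1, p2, k3) for brevity; the full model would apply them between the camera-coordinate and pixel-coordinate steps.

```python
import numpy as np

def system_to_pixel(ps, rotation, translation, fx, fy, cx, cy):
    """Project a system-coordinate point to pixel coordinates
    (conversion relation of formula (5), distortion omitted)."""
    # System coordinates -> camera coordinates via the external parameters
    pc = (np.asarray(rotation, dtype=float) @ np.asarray(ps, dtype=float)
          + np.asarray(translation, dtype=float))
    # Pinhole projection via the internal parameters (fx, fy, cx, cy)
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return u, v
```

The predicted target physical coordinates Ps'(x, y) (with z = 0) would be passed through this projection to obtain the target pixel coordinates to locate in the target two-dimensional image.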
In one possible implementation manner, a target object is placed on a measurement object, and during uniform linear motion of the measurement object, a target two-dimensional image of the measurement object can be acquired through a two-dimensional camera, and a target three-dimensional point cloud of the measurement object can be acquired through a three-dimensional camera. And then, determining three-dimensional space coordinates corresponding to the target object based on the target three-dimensional point cloud, and converting the three-dimensional space coordinates into target physical coordinates of the target object under a system coordinate system based on camera internal parameters and camera external parameters of the three-dimensional camera.
After the target physical coordinates of the target object in the system coordinate system are obtained, the target pixel coordinates of the target object in the target two-dimensional image can be determined based on the target physical coordinates, so that the target pixel coordinates and the target physical coordinates are associated. Moreover, the three-dimensional space coordinates may also be associated with the target pixel coordinates and the target physical coordinates, that is, the target pixel coordinates of the target object in the target two-dimensional image and the three-dimensional space coordinates of the target object in the target three-dimensional point cloud may be associated. In order to determine the target pixel coordinates based on the target physical coordinates, the target physical coordinates may be converted into the target pixel coordinates of the target object in the image coordinate system based on the camera internal parameters and camera external parameters of the two-dimensional camera.
According to the above technical solution, in the embodiments of the present application, camera calibration can be performed simply by laying a calibration cloth or calibration plate carrying optical identifiers, which effectively reduces the manufacturing and carrying cost of the calibration object. The two-dimensional camera is used to acquire a calibrated two-dimensional image of the measurement object, and the camera internal parameters and camera external parameters of the two-dimensional camera can be calculated in combination with the vertical height between the two-dimensional camera and the measurement object and calibrated for the two-dimensional camera, which greatly improves implementation efficiency and saves labor cost. By associating each camera with the system coordinate system, the cameras are decoupled from each other, which improves the scalability of camera calibration. An initial focal length value is configured for the camera focal length, and on this basis the target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera is determined based on the plurality of coordinate point pairs, which reduces the amount of calculation, saves computing resources and improves the calibration speed. Based on this camera calibration mode, the calibration operation can be simplified, the calibration time reduced, and the calibration labor cost saved. The measurement and identification information of all cameras is associated with the system plane described by the calibration cloth, and time-sequence association is performed based on the known system movement speed (read from a tachometer or measured with a speedometer); the types and number of cameras are unlimited, the cameras are independent of each other, and expansion is convenient.
In one possible implementation, referring to fig. 5, the calibration method of the camera parameters may include:
step 501, erecting a plurality of cameras according to application requirements.
Step 502, measuring the erection height of each two-dimensional camera.
Step 503, laying a calibration cloth with optical identifiers so as to cover the measurement space of all cameras.
Step 504, controlling each camera to capture a frame of calibration image, such as an RGB image or a grayscale image.

Step 505, performing external parameter calibration of the three-dimensional camera.

Step 506, performing internal parameter calibration and external parameter calibration of the two-dimensional camera.
Step 507, obtaining the movement speed of the measurement object.
Step 508, performing information association of the multiple cameras.
Based on the same application concept as the above method, in an embodiment of the present application, a calibration device for camera parameters is provided, where a calibration object is placed on a measurement object, where the calibration object includes an optical identifier and a plurality of calibration points, as shown in fig. 6, which is a schematic structural diagram of the device, the device may include:
an acquisition module 61, configured to acquire a calibrated two-dimensional image for the measurement object by using a two-dimensional camera, determine a pixel coordinate of a calibration point under an image coordinate system based on the calibrated two-dimensional image, and determine a physical coordinate of the calibration point under a system coordinate system based on a positional relationship between the optical identifier and the calibration point;
A generating module 62, configured to generate a plurality of coordinate point pairs corresponding to a plurality of calibration points, where for each coordinate point pair corresponding to a calibration point, the coordinate point pair includes a pixel coordinate and a physical coordinate corresponding to the calibration point;
a determining module 63, configured to configure an initial focal length value for a focal length of the camera, and determine a target parameter value corresponding to a parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the plurality of coordinate point pairs; determining a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value;
a calibration module 64 is configured to calibrate the target focal length value and the target parameter value for the two-dimensional camera.
Illustratively, the obtaining module 61 is specifically configured, when determining the physical coordinates of the calibration point in the system coordinate system based on the positional relationship between the optical identifier and the calibration point, to: resolve the physical coordinates of a target position of the optical identifier in the system coordinate system from the optical identifier, where the target position of the optical identifier is the upper left corner position, the upper right corner position, the lower left corner position, the lower right corner position, or the center position of the optical identifier; and determine the physical coordinates of the calibration point in the system coordinate system based on the physical coordinates of the target position in the system coordinate system and the positional relationship between the target position of the optical identifier and the calibration point.
Illustratively, the determining module 63 is specifically configured to determine a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value: determining a candidate focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value; if the difference value between the candidate focal length value and the initial focal length value is smaller than a preset threshold value, determining the candidate focal length value as a target focal length value corresponding to the focal length of the camera; otherwise, determining the candidate focal length value as an initial focal length value, and returning to execute the operation of determining a target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs.
The parameters to be calibrated comprise a camera center, a distortion coefficient, a rotation parameter and a translation parameter, wherein the translation parameter comprises an X-axis translation parameter, a Y-axis translation parameter and a Z-axis translation parameter; the determining module 63 is specifically configured to determine a candidate focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value: determining an adjustment factor based on the vertical height and a target parameter value corresponding to the Z-axis translation parameter; and adjusting the initial focal length value based on the adjustment factor to obtain the candidate focal length value.
For example, the parameters to be calibrated include a camera center, a distortion coefficient, a rotation parameter, and a translation parameter, and the determining module 63 is specifically configured to, when determining the target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the plurality of coordinate point pairs: configuring an initial center value for the camera center, and configuring an initial coefficient value for the distortion coefficient; determining a rotation parameter initial value corresponding to the rotation parameter and a translation parameter initial value corresponding to the translation parameter based on the initial focal length value, the initial center value, the initial coefficient value and the coordinate point pairs; and optimizing the initial center value, the initial coefficient value, the rotation parameter initial value and the translation parameter initial value to obtain a target center value corresponding to the camera center, a target coefficient value corresponding to the distortion coefficient, a rotation parameter target value corresponding to the rotation parameter and a translation parameter target value corresponding to the translation parameter.
Illustratively, the obtaining module 61 is further configured to collect, by using a three-dimensional camera, a calibrated three-dimensional point cloud for the measurement object, and determine, based on the calibrated three-dimensional point cloud, three-dimensional space coordinates corresponding to a calibration point; determining physical coordinates of the calibration point in a system coordinate system based on a positional relationship between the optical identifier and the calibration point; the generating module 62 is further configured to generate a plurality of coordinate point pairs corresponding to a plurality of calibration points, where for each coordinate point pair corresponding to a calibration point, the coordinate point pair includes a three-dimensional space coordinate and a physical coordinate corresponding to the calibration point; the calibration module 64 is further configured to determine a target parameter value corresponding to the camera external parameter based on the calibrated camera internal parameter of the three-dimensional camera and the plurality of coordinate point pairs, and calibrate the target parameter value of the camera external parameter for the three-dimensional camera.
Illustratively, when a target object is placed on the measurement object and the measurement object moves linearly at a uniform speed, the acquiring module 61 is further configured to acquire a target two-dimensional image for the measurement object through the two-dimensional camera, and acquire a target three-dimensional point cloud for the measurement object through the three-dimensional camera; the determining module 63 is further configured to determine a three-dimensional space coordinate corresponding to the target object based on the target three-dimensional point cloud, and convert the three-dimensional space coordinate into an initial physical coordinate of the target object in a system coordinate system based on a camera internal parameter and a camera external parameter of the three-dimensional camera; and determining a target physical coordinate of the target object under a system coordinate system based on the moving speed of the measuring object, the target duration and the initial physical coordinate, converting the target physical coordinate into a target pixel coordinate of the target object under the image coordinate system based on the camera internal parameter and the camera external parameter of the two-dimensional camera, and positioning a pixel position corresponding to the target pixel coordinate from a target two-dimensional image corresponding to the target duration.
Based on the same application concept as the above method, an electronic device is provided in the embodiment of the present application, and as shown in fig. 7, the electronic device may include: a processor 71 and a machine-readable storage medium 72, the machine-readable storage medium 72 storing machine-executable instructions executable by the processor 71; the processor 71 is configured to execute machine executable instructions to implement the calibration method of camera parameters disclosed in the above examples of the present application.
Based on the same application concept as the method, the embodiment of the application further provides a machine-readable storage medium, wherein a plurality of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the calibration method of the camera parameters disclosed in the above example of the application can be realized.
Wherein, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit described in the above embodiments may be implemented by a computer entity, or by a product having a certain function. A typical implementation device is a computer, which may take the form of a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A method of calibrating camera parameters, wherein a calibration object is placed on a measurement object, the calibration object comprising an optical identifier and a plurality of calibration points, the method comprising:
acquiring a calibrated two-dimensional image aiming at the measuring object through a two-dimensional camera, determining pixel coordinates of a calibration point under an image coordinate system based on the calibrated two-dimensional image, and determining physical coordinates of the calibration point under a system coordinate system based on the position relation between the optical identifier and the calibration point;
generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, wherein the coordinate point pairs corresponding to each calibration point comprise pixel coordinates and physical coordinates corresponding to the calibration point;
configuring an initial focal length value for a focal length of a camera, and determining a target parameter value corresponding to a parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs;
determining a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value;
calibrating the target focal length value and the target parameter value for the two-dimensional camera.
2. The method of claim 1, wherein the determining physical coordinates of the calibration point in a system coordinate system based on the positional relationship between the optical identifier and the calibration point comprises:
resolving physical coordinates of a target position of the optical identifier in a system coordinate system from the optical identifier; wherein the target position of the optical identifier is an upper left corner position, or an upper right corner position, or a lower left corner position, or a lower right corner position, or a center position of the optical identifier;
and determining the physical coordinates of the calibration point in the system coordinate system based on the physical coordinates of the target position in the system coordinate system and the positional relationship between the target position of the optical identifier and the calibration point.
3. The method according to claim 1, wherein
the determining a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, a vertical height between the two-dimensional camera and the measurement object, and the initial focal length value includes:
determining a candidate focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value;
If the difference value between the candidate focal length value and the initial focal length value is smaller than a preset threshold value, determining the candidate focal length value as a target focal length value corresponding to the focal length of the camera; otherwise, determining the candidate focal length value as an initial focal length value, and returning to execute the operation of determining a target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs.
4. The method according to claim 3, wherein
the parameters to be calibrated comprise a camera center, a distortion coefficient, a rotation parameter, and a translation parameter, wherein the translation parameter comprises an X-axis translation parameter, a Y-axis translation parameter, and a Z-axis translation parameter;
the determining a candidate focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, a vertical height between the two-dimensional camera and the measurement object, and the initial focal length value includes:
determining an adjustment factor based on the vertical height and a target parameter value corresponding to the Z-axis translation parameter;
and adjusting the initial focal length value based on the adjustment factor to obtain the candidate focal length value.
5. The method of claim 1, wherein the parameters to be calibrated include a camera center, a distortion coefficient, a rotation parameter, and a translation parameter, and wherein determining a target parameter value corresponding to the parameters to be calibrated of the two-dimensional camera based on the initial focal length value and the plurality of coordinate point pairs comprises:
configuring an initial center value for the camera center, and configuring an initial coefficient value for the distortion coefficient;
determining a rotation parameter initial value corresponding to the rotation parameter and a translation parameter initial value corresponding to the translation parameter based on the initial focal length value, the initial center value, the initial coefficient value and the coordinate point pairs;
and optimizing the initial center value, the initial coefficient value, the rotation parameter initial value and the translation parameter initial value to obtain a target center value corresponding to the center of the camera, a target coefficient value corresponding to the distortion coefficient, a rotation parameter target value corresponding to the rotation parameter and a translation parameter target value corresponding to the translation parameter.
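The optimization of claim 5 is a nonlinear least-squares refinement of reprojection error over the coordinate point pairs, with the focal length held fixed. The residual below is an illustrative sketch (one radial distortion term only, square pixels assumed); in practice it would be handed to a solver such as Levenberg-Marquardt:

```python
import numpy as np

# Illustrative reprojection residual for the claim 5 refinement: project each
# physical point through the current rotation/translation, distortion, and
# center estimates, and compare against the observed pixel coordinates.

def reprojection_residuals(f, cx, cy, k1, R, t, point_pairs):
    """point_pairs: list of ((u, v), (X, Y, Z)) pixel/physical pairs."""
    res = []
    for (u, v), world in point_pairs:
        Xc = R @ np.asarray(world, dtype=float) + t  # world -> camera frame
        x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]          # normalized image coordinates
        r2 = x * x + y * y
        d = 1.0 + k1 * r2                            # single radial term, for brevity
        res.append(f * d * x + cx - u)               # horizontal reprojection error
        res.append(f * d * y + cy - v)               # vertical reprojection error
    return res
```

At the true parameter values the residuals vanish; the optimizer adjusts center, distortion, rotation, and translation to drive them toward zero.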
6. The method according to any one of claims 1-5, further comprising:
acquiring a calibrated three-dimensional point cloud of the measurement object through a three-dimensional camera, and determining three-dimensional space coordinates corresponding to the calibration point based on the calibrated three-dimensional point cloud; determining physical coordinates of the calibration point in a system coordinate system based on a positional relationship between the optical identifier and the calibration point;
generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, wherein the coordinate point pairs comprise three-dimensional space coordinates and physical coordinates corresponding to each calibration point;
and determining a target parameter value corresponding to the camera external parameter based on the calibrated camera internal parameter of the three-dimensional camera and the coordinate point pairs, and calibrating the target parameter value of the camera external parameter for the three-dimensional camera.
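With the 3D camera's intrinsics already calibrated, claim 6 reduces to a rigid 3D-to-3D alignment between camera-frame coordinates and system-frame physical coordinates. A standard solution (not necessarily the patent's exact method) is the SVD-based Kabsch algorithm:

```python
import numpy as np

# Illustrative extrinsic solve for claim 6: find the rotation R and
# translation t with dst ≈ R @ src + t, given matched (N, 3) point sets.

def rigid_transform(src, dst):
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)    # centroids
    H = (src - cs).T @ (dst - cd)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                             # proper rotation (det = +1)
    t = cd - R @ cs
    return R, t
```

The sign correction guards against a reflection when the point configuration is degenerate or noisy; at least three non-collinear pairs are required.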
7. The method according to claim 6, wherein when a target object is placed on the measurement object and the measurement object moves linearly at a uniform speed, the method further comprises:
acquiring a target two-dimensional image of the measurement object through the two-dimensional camera, and acquiring a target three-dimensional point cloud of the measurement object through the three-dimensional camera;
determining three-dimensional space coordinates corresponding to the target object based on the target three-dimensional point cloud, and converting the three-dimensional space coordinates into initial physical coordinates of the target object in a system coordinate system based on the camera internal parameter and the camera external parameter of the three-dimensional camera; determining target physical coordinates of the target object in the system coordinate system based on the moving speed of the measurement object, the target duration, and the initial physical coordinates;
and converting the target physical coordinates into target pixel coordinates of the target object in an image coordinate system based on the camera internal parameter and the camera external parameter of the two-dimensional camera, and locating a pixel position corresponding to the target pixel coordinates in a target two-dimensional image corresponding to the target duration.
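The cross-camera handoff of claim 7 chains three transforms: 3D camera frame to system frame, uniform linear motion compensation, then system frame to 2D image. The sketch below is illustrative (names are hypothetical, and a distortion-free pinhole projection stands in for the full 2D camera model):

```python
import numpy as np

# Illustrative claim 7 pipeline: locate the object in the 3D camera's frame,
# move it into the shared system frame, advance it along the motion direction
# by speed x elapsed time, and project it into the 2D camera's image.

def predict_pixel(p3d, R3, t3, speed, dt, motion_dir, R2, t2, f, cx, cy):
    p_sys = R3 @ p3d + t3                    # 3D camera frame -> system frame
    p_sys = p_sys + speed * dt * motion_dir  # uniform linear motion over dt
    p_cam = R2 @ p_sys + t2                  # system frame -> 2D camera frame
    u = f * p_cam[0] / p_cam[2] + cx         # pinhole projection to pixels
    v = f * p_cam[1] / p_cam[2] + cy
    return u, v
```

With identity extrinsics, an object at (0, 0, 5) moving 2 units/s along X for 1 s projects to a pixel shifted right of the principal point, which is how the locating step finds it in the later image.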
8. A calibration device for camera parameters, characterized in that a calibration object is placed on a measurement object, the calibration object comprising an optical identifier and a plurality of calibration points, the device comprising:
the acquisition module is used for acquiring a calibrated two-dimensional image of the measurement object through a two-dimensional camera, determining pixel coordinates of a calibration point in an image coordinate system based on the calibrated two-dimensional image, and determining physical coordinates of the calibration point in a system coordinate system based on the positional relationship between the optical identifier and the calibration point;
the generation module is used for generating a plurality of coordinate point pairs corresponding to a plurality of calibration points, and the coordinate point pairs corresponding to each calibration point comprise pixel coordinates and physical coordinates corresponding to the calibration point;
the determining module is used for configuring an initial focal length value for the focal length of the camera, and determining a target parameter value corresponding to a parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the coordinate point pairs; determining a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value;
and the calibration module is used for calibrating the target focal length value and the target parameter value for the two-dimensional camera.
9. The apparatus of claim 8, wherein the acquisition module, when determining physical coordinates of the calibration point in a system coordinate system based on a positional relationship between the optical identifier and the calibration point, is further configured to: decode, from the optical identifier, physical coordinates of a target position of the optical identifier in the system coordinate system, wherein the target position of the optical identifier is an upper left corner position, an upper right corner position, a lower left corner position, a lower right corner position, or a center position of the optical identifier; and determine the physical coordinates of the calibration point in the system coordinate system based on the physical coordinates of the target position in the system coordinate system and the positional relationship between the target position of the optical identifier and the calibration point;
the determining module, when determining a target focal length value corresponding to a camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value, is specifically configured to: determine a candidate focal length value corresponding to the camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value; if the difference between the candidate focal length value and the initial focal length value is smaller than a preset threshold, determine the candidate focal length value as the target focal length value corresponding to the camera focal length; otherwise, determine the candidate focal length value as the initial focal length value, and return to the operation of determining a target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the plurality of coordinate point pairs;
the parameters to be calibrated comprise a camera center, a distortion coefficient, a rotation parameter, and a translation parameter, wherein the translation parameter comprises an X-axis translation parameter, a Y-axis translation parameter, and a Z-axis translation parameter; the determining module, when determining a candidate focal length value corresponding to the camera focal length of the two-dimensional camera based on the target parameter value, the vertical height between the two-dimensional camera and the measurement object, and the initial focal length value, is specifically configured to: determine an adjustment factor based on the vertical height and the target parameter value corresponding to the Z-axis translation parameter; and adjust the initial focal length value based on the adjustment factor to obtain the candidate focal length value;
the determining module, when determining a target parameter value corresponding to the parameter to be calibrated of the two-dimensional camera based on the initial focal length value and the plurality of coordinate point pairs, is specifically configured to: configure an initial center value for the camera center, and configure an initial coefficient value for the distortion coefficient; determine a rotation parameter initial value corresponding to the rotation parameter and a translation parameter initial value corresponding to the translation parameter based on the initial focal length value, the initial center value, the initial coefficient value, and the plurality of coordinate point pairs; and optimize the initial center value, the initial coefficient value, the rotation parameter initial value, and the translation parameter initial value to obtain a target center value corresponding to the camera center, a target coefficient value corresponding to the distortion coefficient, a rotation parameter target value corresponding to the rotation parameter, and a translation parameter target value corresponding to the translation parameter;
the acquisition module is further configured to acquire a calibrated three-dimensional point cloud of the measurement object through a three-dimensional camera, determine three-dimensional space coordinates corresponding to the calibration point based on the calibrated three-dimensional point cloud, and determine physical coordinates of the calibration point in a system coordinate system based on a positional relationship between the optical identifier and the calibration point; the generating module is further configured to generate a plurality of coordinate point pairs corresponding to a plurality of calibration points, wherein the coordinate point pair corresponding to each calibration point includes the three-dimensional space coordinates and the physical coordinates corresponding to that calibration point; the calibration module is further configured to determine a target parameter value corresponding to the camera external parameter based on the calibrated camera internal parameter of the three-dimensional camera and the plurality of coordinate point pairs, and to calibrate the target parameter value of the camera external parameter for the three-dimensional camera;
when a target object is placed on the measurement object and the measurement object moves linearly at a uniform speed, the acquisition module is further configured to acquire a target two-dimensional image of the measurement object through the two-dimensional camera and acquire a target three-dimensional point cloud of the measurement object through the three-dimensional camera; the determining module is further configured to determine three-dimensional space coordinates corresponding to the target object based on the target three-dimensional point cloud, convert the three-dimensional space coordinates into initial physical coordinates of the target object in a system coordinate system based on the camera internal parameter and the camera external parameter of the three-dimensional camera, determine target physical coordinates of the target object in the system coordinate system based on the moving speed of the measurement object, the target duration, and the initial physical coordinates, convert the target physical coordinates into target pixel coordinates of the target object in the image coordinate system based on the camera internal parameter and the camera external parameter of the two-dimensional camera, and locate a pixel position corresponding to the target pixel coordinates in a target two-dimensional image corresponding to the target duration.
10. An electronic device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine executable instructions to implement the method of any of claims 1-7.
CN202310319505.3A 2023-03-28 2023-03-28 Calibration method, device and equipment for camera parameters Pending CN116385558A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310319505.3A CN116385558A (en) 2023-03-28 2023-03-28 Calibration method, device and equipment for camera parameters


Publications (1)

Publication Number Publication Date
CN116385558A true CN116385558A (en) 2023-07-04

Family

ID=86960931


Country Status (1)

Country Link
CN (1) CN116385558A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination