CN110827361A - Camera group calibration method and device based on global calibration frame - Google Patents

Publication number
CN110827361A
Authority
CN
China
Prior art keywords
camera
calibration
coordinate system
pose
global
Prior art date
Legal status
Granted
Application number
CN201911060536.1A
Other languages
Chinese (zh)
Other versions
CN110827361B (English)
Inventor
周杰
邓磊
陈宝华
邓杰仁
吴垚垚
李健
于恒
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201911060536.1A priority Critical patent/CN110827361B/en
Publication of CN110827361A publication Critical patent/CN110827361A/en
Application granted granted Critical
Publication of CN110827361B publication Critical patent/CN110827361B/en
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a camera group calibration method and device based on a global calibration frame. The method comprises the following steps: designing the size, ID, and geometric pose of a calibration board for the field of view of each camera; detecting cell corner points and acquiring the absolute pose of each camera in the calibration-board coordinate system; and performing nonlinear optimization and coordinate-system conversion according to the absolute poses of the cameras to estimate the relative positions and orientations among the cameras. By constructing a global calibration frame, the method estimates the relative poses among multiple cameras, and has important theoretical and practical value.

Description

Camera group calibration method and device based on global calibration frame
Technical Field
The invention relates to the technical field of camera calibration, in particular to a camera group calibration method and device based on a global calibration frame.
Background
In recent years, the number of camera-carrying intelligent devices (such as unmanned vehicles and unmanned aerial vehicles) has grown explosively, and the demand for positioning them is increasingly strong. Visual positioning technology is inexpensive and easy to deploy, and is therefore worth deep research. Visual positioning can be performed in monocular and multi-camera modes. A traditional monocular camera usually has a limited field of view, easily loses feature matches, and cannot resolve scale ambiguity. In contrast, a multi-camera set not only provides a wide field of view, but also constrains the scale of the constructed map and offers richer perception capability. At the same time, however, the cameras are often mounted at different positions with different fields of view; the overlapping regions of adjacent cameras are small or even nonexistent (such as the surround-view system on a vehicle in Fig. 1), and it cannot be guaranteed that all cameras overlap simultaneously. The problem of determining the relative poses among them cannot be solved by conventional binocular or multi-camera calibration methods, which require overlapping fields of view.
A camera set is a vision device composed of two or more cameras. The optical centers of these cameras generally do not coincide, and the intrinsic parameters of each camera may differ. A camera set satisfies the rigid-body assumption: after calibration, no camera translates or rotates within the set. Surround-view cameras, array cameras, and binocular cameras all belong to the category of camera sets.
In the related art, a camera set is conventionally calibrated by the camera manufacturer at the factory based on position and orientation measurement, i.e., an external measurement method is used to measure each camera's position and orientation accurately. This work is performed by professional personnel, usually on professional equipment; it requires long debugging and adjustment, places high demands on the production and installation processes, is time-consuming and labor-intensive, and does not allow disassembly or modification after the device leaves the factory.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present invention is to provide a camera group calibration method based on a global calibration frame, which estimates the relative positions and orientations of multiple cameras by constructing a global calibration frame, and has important theoretical and practical value.
The invention also aims to provide a camera group calibration device based on the global calibration frame.
In order to achieve the above object, an embodiment of one aspect of the present invention provides a camera group calibration method based on a global calibration frame, including the following steps: designing the size, ID, and geometric pose of the calibration board for the field of view of each camera; detecting cell corner points and acquiring the absolute pose of the camera in the calibration-board coordinate system; and performing nonlinear optimization and coordinate-system conversion according to the absolute poses of the cameras to estimate the relative positions and orientations among the cameras.
The camera group calibration method based on the global calibration frame according to the embodiment of the present invention estimates the relative positions and orientations among multiple cameras by constructing a global calibration frame, and has important theoretical and practical value. It is a simple and convenient post-installation calibration approach and an internal measurement mode based on the cameras' own capability; it supports quick re-calibration after disassembly or modification in the field, places low demands on hardware production and installation processes, and is inexpensive to implement.
In addition, the camera group calibration method based on the global calibration rack according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the method further includes: and local unique identification is carried out by adopting codes with local distinctiveness, and the calibration board is formed by arranging and assembling.
Further, in an embodiment of the present invention, detecting the cell corner points includes: performing threshold segmentation on the image with adaptive window thresholding; extracting the segmented contours with a contour detection algorithm and filtering out contours that do not meet preset conditions based on the maximum distance between the original curve and the simplified curve, to obtain quadrilateral candidates; mapping the quadrilateral candidates to preset squares, dividing the image into a grid, and determining the bit of each cell; and looking up the bits of each cell in the input dictionary to obtain the marker and its corner IDs.
Further, in an embodiment of the present invention, the absolute pose is computed by

$P^{*} = \arg\min_{P} \sum_{i \in N} \left\| x_i - \pi(P X_i) \right\|^{2}$

where N represents the set of all 3D-to-2D point-pair indices, $X_i$ represents the three-dimensional coordinates of visual marker corner i in the global coordinate system, P is the pose of a given camera in the global coordinate system, $x_i$ is the two-dimensional pixel coordinate of the feature point obtained when the camera observes the 3D point, and $\pi(\cdot)$ is the camera projection function.
Further, in one embodiment of the present invention, the pose of each camera in the viewpoint coordinate system satisfies:

$T_i = (T'_0)^{-1} T'_i$

where $T'_i$ is the pose of camera i in the calibration-board coordinate system and camera 0 defines the viewpoint coordinate system.
In order to achieve the above object, an embodiment of another aspect of the present invention provides a camera group calibration apparatus based on a global calibration frame, including: a design module for designing the size, ID, and geometric pose of the calibration board for the field of view of each camera; a detection module for detecting cell corner points and acquiring the absolute pose of the camera in the calibration-board coordinate system; and a calculation module for performing nonlinear optimization and coordinate-system conversion according to the absolute poses of the cameras and estimating the relative positions and orientations among the cameras.
The camera group calibration apparatus based on the global calibration frame according to the embodiment of the present invention estimates the relative positions and orientations among multiple cameras by constructing a global calibration frame, and has important theoretical and practical value. It is a simple and convenient post-installation calibration approach and an internal measurement mode based on the cameras' own capability; it supports quick re-calibration after disassembly or modification in the field, places low demands on hardware production and installation processes, and is inexpensive to implement.
In addition, the camera group calibration device based on the global calibration stand according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the method further includes: and the identification module is used for carrying out local unique identification by adopting codes with local distinctiveness and assembling the codes to form the calibration board.
Further, in an embodiment of the present invention, the detection module is further configured to: perform threshold segmentation on the image with adaptive window thresholding; extract the segmented contours with a contour detection algorithm and filter out contours that do not meet preset conditions based on the maximum distance between the original curve and the simplified curve, to obtain quadrilateral candidates; map the quadrilateral candidates to preset squares, divide the image into a grid, and determine the bit of each cell; and look up the bits of each cell in the input dictionary to obtain the marker and its corner IDs.
Further, in an embodiment of the present invention, the absolute pose is computed by

$P^{*} = \arg\min_{P} \sum_{i \in N} \left\| x_i - \pi(P X_i) \right\|^{2}$

where N represents the set of all 3D-to-2D point-pair indices, $X_i$ represents the three-dimensional coordinates of visual marker corner i in the global coordinate system, P is the pose of a given camera in the global coordinate system, $x_i$ is the two-dimensional pixel coordinate of the feature point obtained when the camera observes the 3D point, and $\pi(\cdot)$ is the camera projection function.
Further, in one embodiment of the present invention, the pose of each camera in the viewpoint coordinate system satisfies:

$T_i = (T'_0)^{-1} T'_i$

where $T'_i$ is the pose of camera i in the calibration-board coordinate system and camera 0 defines the viewpoint coordinate system.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of a group of ring cameras with non-overlapping fields of view according to an embodiment of the invention;
FIG. 2 is a schematic diagram of global calibration frames according to an embodiment of the present invention: a surround-view calibration frame for an autonomous vehicle (top) and an L-shaped calibration frame for an unmanned forklift (bottom), where M is a calibration board with a unique ID;
FIG. 3 is a flowchart of a camera group calibration method based on a global calibration stand according to an embodiment of the present invention;
FIG. 4 is a flowchart of a camera group calibration method based on a global calibration stand according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a generic calibration plate according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a calibration plate based on locally unique identification according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the overall design of the calibration plate and the establishment of a calibration plate coordinate system according to an embodiment of the present invention;
fig. 8 is a diagram of a corner detection result according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a camera group calibration apparatus based on a global calibration rack according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, intended to explain the invention, and are not to be construed as limiting the invention.
In view of the technical problems described in the background, the solution of the embodiments of the present invention is to design a global calibration frame comprising a plurality of calibration boards, each facing a corresponding camera and each carrying a plurality of tags with globally unique ID information, as shown in Fig. 2; the geometric relationship between the tags is measured in advance when the calibration object is manufactured. This configuration ensures that each camera sees several local markers of the global calibration frame, which effectively chains the camera set together through the frame. The camera poses are then solved and refined by minimizing the reprojection error. The global calibration object should be designed with reference to the approximate positions of the cameras so that each camera can see a local part of it; by finely adjusting the position and orientation of the calibration object while the camera set captures images simultaneously, multiple groups of sample images can be obtained for calibration.
Calibrating the camera set means determining the absolute pose of each camera in the rigid coordinate system of the camera set. The camera group calibration method and device based on the global calibration frame are therefore easy to operate while guaranteeing precision: the calibration boards used in the frame carry unique ID marks, which are detected quickly, have a low false-detection rate, and facilitate detection and matching.
The camera group calibration method and apparatus based on a global calibration frame according to embodiments of the present invention are described below with reference to the accompanying drawings; the method is described first.
Fig. 3 is a flowchart of a camera group calibration method based on a global calibration stand according to an embodiment of the present invention.
As shown in Fig. 3, the camera group calibration method based on the global calibration frame proceeds as follows: first, a calibration frame with locally unique identification capability is designed and a coordinate system is established on it; then, for each camera in the set, the cell corner points of the calibration board are detected, and the absolute pose of the camera in the calibration-board coordinate system is obtained by solving a PnP problem from the matching information; finally, the calibration of the camera set is completed through coordinate transformation. The method specifically includes the following steps:
in step S301, the size, ID, and geometric position orientation of the calibration plate are designed for the field of view range of each camera.
It can be understood that, as shown in Fig. 4, the embodiment of the present invention first designs the global calibration frame, specifically as shown in Fig. 2: first, the field-of-view range of the target camera set is obtained from the hardware design of the actual scene; then the size, ID distribution, and geometric pose of each calibration board are designed according to the field of view of each camera; finally, a support frame is designed to hold the calibration boards in place.
Further, in an embodiment of the present invention, the method further includes: performing locally unique identification with locally distinctive codes, which are arranged and assembled to form the calibration board.
It can be understood that when calibrating a multi-camera set, it is often difficult to use a common calibration board: as shown in Fig. 5, because the cells of a common board are indistinguishable from one another, matching cell corners after a translation or rotation is difficult. Therefore, the embodiment of the present invention designs a calibration board with marks, as shown in Fig. 6: locally distinctive codes provide locally unique marks, and the board is composed by arranging these codes.
Specifically, (a) designing the local coding

The locally unique marks in the calibration board designed by the embodiment of the present invention are ArUco markers: artificial markers based on binary coding, created by Rafael Muñoz-Salinas and Sergio Garrido-Jurado. The marker has three good characteristics, which are the reasons the embodiment of the invention adopts it: (a) ID uniqueness: each ArUco marker has a unique ID and provides four distinct corner points whose labels do not change no matter how the marker is rotated; thus, as long as no marker with a repeated ID is laid out, global matching can be performed. (b) Fast detection: the algorithm designed for this code has low complexity and runs quickly. (c) Low false-detection rate: the ArUco marker uses a binary code with parity check bits at the powers of two, giving single-bit error-correction capability; this makes the algorithm very robust, allows error detection and correction techniques to be applied, and essentially eliminates false detections.
(b) Designing the overall calibration-board structure and coordinate system

When designing the positions and orientations on the global calibration object, the field of view of each camera must be considered, with 2-5 markers visible in the field of view of each camera. On the basis of the local coding design, and considering the aspect ratio of a common camera, the overall calibration pattern is designed as a rectangle (taking an array-type forward-looking camera set as an example); the coordinates of each marker corner are obtained by accurate measurement with a rangefinder, forming the overall design scheme of the calibration board.
As shown in Fig. 7, in the embodiment of the present invention, the corner at the lower left of the whole rectangle is taken as the origin of the calibration board, the directions parallel to the board edges as the X and Y axes, and the normal vector perpendicular to the front surface of the board as the Z axis, thereby establishing the calibration-board coordinate system. This coordinate system is used subsequently to unify all cameras into a single frame.
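To make the board coordinate system concrete, the following is a minimal sketch (pure NumPy; the grid layout, marker size, and spacing are illustrative assumptions, not the patent's actual dimensions). It generates the 3D corner coordinates of each marker with the lower-left corner of the board as the origin, X and Y along the board edges, and Z along the board normal:

```python
import numpy as np

def board_corners(rows, cols, marker_len, gap):
    """Return a dict {marker_id: (4, 3) array} of 3D corner coordinates
    for a rows x cols grid of square markers lying in the Z=0 board plane.
    Origin is the board's lower-left corner; X right, Y up, Z out of the board."""
    corners = {}
    marker_id = 0
    for r in range(rows):
        for c in range(cols):
            x0 = c * (marker_len + gap)   # lower-left x of this marker
            y0 = r * (marker_len + gap)   # lower-left y of this marker
            corners[marker_id] = np.array([
                [x0,              y0,              0.0],  # lower-left
                [x0 + marker_len, y0,              0.0],  # lower-right
                [x0 + marker_len, y0 + marker_len, 0.0],  # upper-right
                [x0,              y0 + marker_len, 0.0],  # upper-left
            ])
            marker_id += 1
    return corners

# A hypothetical 2x3 board of 10 cm markers with 2 cm gaps:
corners = board_corners(rows=2, cols=3, marker_len=0.10, gap=0.02)
```

These per-corner 3D coordinates play the role of the rangefinder-measured corner positions when the PnP problem is set up later.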
In step S302, cell corner points are detected, and the absolute pose of the camera in the calibration plate coordinate system is acquired.
In an embodiment of the present invention, detecting a cell corner point includes: performing threshold segmentation on the image with adaptive window thresholding; extracting the segmented contours with a contour detection algorithm and filtering out contours that do not meet preset conditions based on the maximum distance between the original curve and the simplified curve, to obtain quadrilateral candidates; mapping the quadrilateral candidates to preset squares, dividing the image into a grid, and determining the bit of each cell; and looking up the bits of each cell in the input dictionary to obtain the marker and its corner IDs.
Specifically, the detection of cell corner points is divided into four steps: threshold segmentation; contour extraction and polygon approximation; bit extraction; and determination of the marker and its corner IDs. The corner detection result is shown in Fig. 8.
(a) Threshold segmentation

A marker is a square with a white code on a black background; to extract its contour easily, the image must first be threshold-segmented. To avoid directly filtering out possible markers, the embodiment of the present invention adopts adaptive window thresholding rather than global binarization, and the window size must be chosen carefully: if the window is too large, the adaptive characteristic is lost; if it is too small, the marker is cut and damaged. Therefore, the embodiment of the present invention uses a window size in a proper range, and within each window applies the OTSU algorithm, which adapts the threshold automatically.
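The OTSU step applied inside each window can be sketched in pure NumPy (a minimal illustration on synthetic pixel data, not the embodiment's actual implementation): OTSU chooses the threshold that maximizes the between-class variance of the pixel histogram.

```python
import numpy as np

def otsu_threshold(pixels):
    """Return the threshold in [0, 255] that maximizes between-class variance."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # mean of dark class
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # mean of bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A synthetic window containing dark border pixels (~30) and bright code
# cells (~220); any threshold between the two modes separates them.
window = np.concatenate([np.full(500, 30), np.full(300, 220)])
t = otsu_threshold(window)
binary = window >= t   # True = white code cell, False = black border
```

In the embodiment this decision is made per window, which is what gives the scheme its adaptivity under uneven lighting.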
(b) Contour extraction and polygon approximation
Contour extraction uses Suzuki's contour detection algorithm. After the contours are extracted, contours that cannot be markers must be filtered out, for example by bounding the maximum and minimum contour perimeter. The remaining contours are then approximated by polygons using the Ramer-Douglas-Peucker algorithm, a polyline-fitting algorithm whose idea is to measure "dissimilarity" by the maximum distance between the original curve and the simplified curve (i.e., the Hausdorff distance between the curves). The ArUco detection algorithm then discards approximation results that are not quadrilaterals; the remaining quadrilaterals serve as candidates and as the input of the subsequent steps.
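The Ramer-Douglas-Peucker step can be sketched as follows (a minimal NumPy illustration, not the embodiment's actual code): the point farthest from the chord between the current endpoints is kept only if its distance exceeds a tolerance, and the polyline is split there recursively.

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification: keep only points
    farther than epsilon from the chord between the current endpoints."""
    pts = np.asarray(points, dtype=float)
    start, end = pts[0], pts[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0:                       # degenerate chord: fall back to point distance
        dists = np.linalg.norm(pts - start, axis=1)
    else:                               # perpendicular distance to the chord (2D cross product)
        dists = np.abs(chord[0] * (pts[:, 1] - start[1])
                       - chord[1] * (pts[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:            # farthest point matters: split and recurse
        left = rdp(pts[: idx + 1], epsilon)
        right = rdp(pts[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])      # all points close enough: keep the chord

# A noisy straight edge collapses to its two endpoints:
edge = [(0, 0), (1, 0.02), (2, -0.01), (3, 0.015), (4, 0)]
simplified = rdp(edge, epsilon=0.1)
```

A marker contour simplified this way should reduce to four dominant vertices, which is exactly the quadrilateral test that the detection pipeline applies next.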
(c) Bit extraction
After the quadrilateral candidates are obtained, a perspective transformation is first applied to map each quadrilateral to a square. The image is then divided into a grid with as many cells as the marker has bits. In each cell, bit extraction counts the black and white pixels and compares them to determine the bit of that cell.
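The bit-extraction step can be sketched as follows (synthetic data; a real implementation operates on the perspective-rectified marker image): the rectified square is divided into a grid, and each cell's bit is decided by a black/white majority vote.

```python
import numpy as np

def extract_bits(binary_img, grid):
    """Divide a rectified binary marker image into grid x grid cells and
    decide each cell's bit by majority vote over its pixels."""
    h, w = binary_img.shape
    ch, cw = h // grid, w // grid
    bits = np.zeros((grid, grid), dtype=int)
    for r in range(grid):
        for c in range(grid):
            cell = binary_img[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            bits[r, c] = int(cell.mean() > 0.5)   # more white than black -> 1
    return bits

# A synthetic 6x6-pixel "marker" carrying a 3x3 bit grid (each cell 2x2 pixels):
pattern = np.array([[1, 0, 1],
                    [0, 1, 0],
                    [1, 0, 1]])
img = np.kron(pattern, np.ones((2, 2)))   # upscale each bit to a 2x2 pixel cell
bits = extract_bits(img, grid=3)
```

The majority vote makes the extraction tolerant of a few mis-thresholded pixels inside each cell.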
(d) Determining the marker and its corner IDs

Once the bits of each cell are obtained, determining the ID becomes very simple: the extracted bit pattern only needs to be looked up in the input dictionary.
In step S303, nonlinear optimization and coordinate-system transformation are performed according to the absolute poses of the cameras, and the relative positions and orientations of the multiple cameras are estimated.
It can be understood that, as shown in Fig. 4, the embodiment of the present invention calculates the absolute pose of each camera in the calibration-board coordinate system; once the absolute poses of all cameras in the set are obtained in this coordinate system, they can be converted into poses in the camera-set coordinate system.
Specifically, (1) calculating the absolute pose of the camera under the coordinate system of the calibration plate
The absolute pose of the camera can be solved from the three-dimensional coordinates of six (or more) corner points together with their positions in the image. Solving for the absolute pose is a PnP (Perspective-n-Point) problem, a method for solving motion from 3D-to-2D point pairs: estimating the camera's absolute pose given n 3D space points and their projection positions in the image.
Assume that the homogeneous coordinate of a point in three-dimensional space is X, and that its ray direction in the coordinate system of a camera with pose P is x; then:

$[x^T\ 1]^T = P X$

The last row of P is the known quantity [0, 0, 0, 1]; to reduce the number of unknowns in the equations, the above formula is abbreviated as

$s\, x = P_{[1:3]} X$

where [1:3] denotes the first three rows of P and s is a scale factor. After s is eliminated by substitution, each point pair provides two constraint equations, which can be written as:

$A P^{*} = 0$

where $P^{*}$ is the 12-dimensional vector formed by the first three rows of P. P has 12 unknown variables in total, so 6 point pairs are needed to solve for P; if there are more than 6 point pairs, a least-squares solution can be obtained by SVD.
However, P itself lies in the Lie group SE(3), and the rotation part R of the solution found by the direct linear method may not satisfy the properties of a rotation matrix. An effective remedy is to decompose R by SVD into U, Σ, V, force Σ to the identity (unit singular values are a necessary condition for a rotation matrix), and finally set $R = U V^T$; this corresponds to projecting the original matrix onto the SE(3) manifold. Another possible approach is to find the camera pose P by solving an optimization problem:
$P^{*} = \arg\min_{P} \sum_{i \in N} \left\| x_i - \pi(P X_i) \right\|^{2}$

where N represents the set of all 3D-to-2D point-pair indices. Both methods have drawbacks: the result of the direct linear method is easily affected by noise, while the optimization method needs a good initial solution, otherwise it is likely to converge to a poor local optimum. The embodiment of the present invention therefore designs a coarse-to-fine optimization method: an initial solution is obtained by the direct linear method and its accuracy is checked; it is then used as the initial value of the optimization method and refined, yielding a better result.
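The direct linear method and the SVD projection of R onto a valid rotation can be sketched as follows (a minimal NumPy illustration on noiseless synthetic data, assuming an identity intrinsic matrix, i.e. normalized image coordinates; not the embodiment's actual solver):

```python
import numpy as np

def dlt_pose(X, x):
    """Direct linear method: recover the 3x4 matrix P_[1:3] (up to scale)
    from n >= 6 3D points X (n,3) and their 2D projections x (n,2)."""
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])          # homogeneous 3D points
    A = np.zeros((2 * n, 12))
    for i in range(n):                            # two equations per point pair
        A[2 * i, 0:4] = Xh[i]
        A[2 * i, 8:12] = -x[i, 0] * Xh[i]
        A[2 * i + 1, 4:8] = Xh[i]
        A[2 * i + 1, 8:12] = -x[i, 1] * Xh[i]
    _, _, Vt = np.linalg.svd(A)                   # null vector = least-squares solution
    return Vt[-1].reshape(3, 4)

def project_to_rotation(R):
    """Project a near-rotation 3x3 matrix onto SO(3): force unit singular
    values via SVD, with a sign correction so that det(R) = +1."""
    U, _, Vt = np.linalg.svd(R)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

# Synthetic check: project points with a known pose, then recover it.
rng = np.random.default_rng(0)
R_true = project_to_rotation(np.eye(3) + 0.1 * rng.standard_normal((3, 3)))
t_true = np.array([0.2, -0.1, 2.0])
X = rng.uniform(-1, 1, (8, 3)) + np.array([0, 0, 4.0])   # points in front of camera
cam = X @ R_true.T + t_true
x = cam[:, :2] / cam[:, 2:3]                      # normalized image coordinates
P = dlt_pose(X, x)
P = P / P[2, 3] * t_true[2]                       # fix the global scale of the DLT result
```

In the coarse-to-fine scheme of the embodiment, `dlt_pose` followed by `project_to_rotation` would supply the initial solution that the nonlinear reprojection-error optimization then refines.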
(2) When the absolute poses of all cameras in the camera set are obtained in the calibration-board coordinate system, they can be converted into poses in the camera-set coordinate system. Generally, the embodiment of the present invention takes the coordinate system of one of the cameras as the coordinate system of the whole camera set. Assume that the pose of camera i in the calibration-board coordinate system is $T'_i$, i = 0, 1, 2, ..., N. Taking the coordinate system of camera 0 as the viewpoint coordinate system, the pose $T_i$ of camera i in the viewpoint coordinate system satisfies:

$T_i = (T'_0)^{-1} T'_i$
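The final conversion into camera 0's coordinate system can be sketched as follows (assuming 4x4 homogeneous pose matrices in the camera-to-board convention; the composition order depends on the pose convention, which is an assumption here):

```python
import numpy as np

def to_camera0_frame(board_poses):
    """Given each camera's 4x4 pose T'_i in the calibration-board frame,
    return the poses T_i = inv(T'_0) @ T'_i expressed in camera 0's frame."""
    T0_inv = np.linalg.inv(board_poses[0])
    return [T0_inv @ T for T in board_poses]

def make_pose(rz_deg, t):
    """Helper: build a 4x4 pose from a rotation about Z and a translation."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = t
    return T

# Two cameras as seen in the board frame; after conversion, camera 0 becomes
# the identity and camera 1's pose is expressed relative to camera 0.
board_poses = [make_pose(0, [0.0, 0.0, 1.0]),
               make_pose(90, [0.5, 0.0, 1.0])]
rel = to_camera0_frame(board_poses)
```

By construction `rel[0]` is the identity, so the first camera defines the rigid camera-set frame in which all other cameras' relative poses are reported.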
In summary, with the camera group calibration method based on the global calibration frame according to the embodiment of the present invention, after calibration the relative positions and orientations of all cameras in the set become known. The method can therefore be used in fields such as intelligent driving and robotics, and supports cooperative sensing by cameras distributed in multiple directions, such as fusing targets sensed by multiple cameras or stitching their images. On this basis, real-time simultaneous localization and mapping (SLAM), omnidirectional obstacle avoidance, omnidirectional human-computer interaction, and the like can be supported, enhancing the overall system capability in many fields. For example, when supporting SLAM, a map constructed from multiple cameras inherently carries baseline information; since the map scale can be determined from the camera-set baseline, a map with scale information can be provided, greatly improving its usability. The method thus has strong practical value in the field of visual perception.
The camera group calibration device based on the global calibration stand according to the embodiment of the invention is described next with reference to the attached drawings.
Fig. 9 is a schematic structural diagram of a camera group calibration apparatus based on a global calibration stand according to an embodiment of the present invention.
As shown in fig. 9, the camera group calibration apparatus 10 based on the global calibration stand includes: a design module 100, a detection module 200, and a calculation module 300.
The design module 100 is configured to design a size, an ID, and a geometric position pose of the calibration plate for the field range of each camera; the detection module 200 is configured to detect a cell corner point and acquire an absolute pose of the camera in the calibration plate coordinate system; the calculation module 300 is configured to perform nonlinear optimization and coordinate system transformation according to the absolute pose of the camera, and estimate the relative positions and poses between the multiple cameras. The device 10 of the embodiment of the invention estimates the relative positions and postures of a plurality of cameras by constructing a global calibration frame, and has important theoretical and practical values.
Further, in an embodiment of the present invention, the apparatus further includes an identification module configured to perform locally unique identification by using codes with local distinctiveness, and to arrange and assemble the codes to form a calibration plate.
Further, in an embodiment of the present invention, the detection module is further configured to: perform threshold segmentation of the image by using adaptive window thresholding; extract contours from the segmented image by using a contour-detection algorithm, and filter out contours that do not meet preset conditions based on the maximum distance between the original curve and the simplified curve, to obtain quadrilateral candidates; map the quadrilateral candidates to a preset square, divide the image into grids, and determine the bit of each cell; and detect, according to the bit of each cell and using the input dictionary, each marker and its corner-point IDs.
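The "maximum distance between the original curve and the simplified curve" criterion used above to filter quadrilateral candidates is, in essence, the Ramer-Douglas-Peucker test (the same criterion OpenCV's `approxPolyDP` applies). A minimal NumPy sketch of such a filter — the function names and the tolerance value are ours for illustration, not from the patent:

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification: keep a point only if its
    perpendicular distance to the current chord exceeds epsilon."""
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    d = points - start
    if norm == 0.0:
        dists = np.linalg.norm(d, axis=1)          # closed curve: distance to start
    else:
        # perpendicular distance of each point to the start-end line (2D cross product)
        dists = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = rdp(points[: idx + 1], epsilon)      # recurse on both halves
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

def is_quad_candidate(contour, epsilon):
    """A closed contour is a quadrilateral candidate when its simplified
    polygon keeps exactly four vertices (plus the repeated closing point)."""
    closed = np.vstack([contour, contour[:1]])
    return len(rdp(closed, epsilon)) - 1 == 4
```

A contour passes when its simplified closed polygon keeps exactly four vertices; a real detector would add further checks (convexity, minimum area) before decoding the bit grid against the dictionary.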
Further, in one embodiment of the present invention, the absolute pose is calculated as:

$$P^{*} = \arg\min_{P} \sum_{i \in N} \left\| \pi\!\left(P\,X_{i}\right) - x_{i} \right\|^{2}$$

where $N$ denotes the set of all 3D-to-2D point-pair indices, $X_i$ denotes the three-dimensional coordinates of visual marker $i$ in the global coordinate system, $P$ is the pose, to be solved, of a certain camera in the global coordinate system, $x_i$ is the two-dimensional pixel coordinate of the feature point obtained when the camera observes the 3D point, and $\pi(\cdot)$ denotes the camera projection function.
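This objective is the standard reprojection-error minimisation solved in PnP pose estimation. A self-contained NumPy sketch — assuming a pinhole model with known intrinsics `K` and using Gauss-Newton with a numerical Jacobian; all function names are our own, not the patent's implementation:

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(pose6, X, K):
    """Project 3D points X (N,3) with pose6 = (rvec | tvec) and intrinsics K."""
    R = rodrigues(pose6[:3])
    Xc = X @ R.T + pose6[3:]            # world -> camera coordinates
    uv = Xc[:, :2] / Xc[:, 2:3]         # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]  # focal lengths and principal point

def refine_pose(pose6, X, x, K, iters=20):
    """Gauss-Newton minimisation of sum_i ||project(P, X_i) - x_i||^2."""
    p = np.asarray(pose6, dtype=float).copy()
    for _ in range(iters):
        r = (project(p, X, K) - x).ravel()   # current reprojection residuals
        J = np.empty((r.size, 6))            # numerical Jacobian in the 6 pose params
        for j in range(6):
            dp = p.copy()
            dp[j] += 1e-6
            J[:, j] = ((project(dp, X, K) - x).ravel() - r) / 1e-6
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p += step
        if np.linalg.norm(step) < 1e-10:
            break
    return p
```

In practice this refinement would be seeded with a closed-form PnP solution (e.g. EPnP) rather than an arbitrary initial pose.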
Further, in one embodiment of the present invention, the pose of each camera in the unified coordinate system satisfies:

$$T_i' = T_1^{-1}\,T_i$$

where $T_i$ is the absolute pose of camera $i$ obtained above, $T_1$ is the pose of the reference camera, and the resulting pose of camera $i$ is $T_i'$.
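The coordinate-system conversion described here reduces to composing 4×4 homogeneous rigid transforms: chain each camera's plate-relative pose into the global frame, then take relative poses with one inverse and one product. A minimal NumPy sketch (the convention $T_i' = T_{\mathrm{ref}}^{-1} T_i$ and all names are our own assumption):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_to_global(T_plate_cam, T_global_plate):
    """Take a camera pose expressed in its calibration-plate frame into the
    global calibration-frame coordinate system (plate poses are known by design)."""
    return T_global_plate @ T_plate_cam

def relative_pose(T_global_ref, T_global_i):
    """Pose of camera i expressed relative to a reference camera: T_ref^{-1} T_i."""
    return np.linalg.inv(T_global_ref) @ T_global_i
```

With this convention, composing the reference camera's global pose with the relative pose recovers camera i's global pose exactly, which is the consistency property the calibration relies on.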
It should be noted that the foregoing explanation of the embodiment of the camera group calibration method based on the global calibration frame also applies to the camera group calibration device based on the global calibration frame of this embodiment, and details are not repeated here.
According to the camera group calibration device based on the global calibration frame provided by the embodiment of the invention, the relative positions and poses of all cameras in the camera group become known once calibration is complete. The device can therefore be used in fields such as intelligent driving and robotics, and supports cooperative sensing by cameras distributed in multiple directions, for example fusing the targets sensed by multiple cameras or stitching their images. Building on this calibration, robot simultaneous localization and mapping (SLAM), omnidirectional obstacle avoidance, omnidirectional human-computer interaction and the like can be supported, and the overall system capability of a robot can be enhanced in further fields. For example, when supporting SLAM, a map constructed from multiple cameras inherently carries baseline information, and the map scale can be fixed from the baselines of the camera group, so a map with scale information can be provided and its usability is greatly improved. The device thus has strong practical value as a foundation in the field of visual perception.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the various embodiments or examples described in this specification, and features of different embodiments or examples, can be combined by those skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A camera group calibration method based on a global calibration frame, characterized by comprising the following steps:
designing the size, ID and geometric position and pose of a calibration plate for the field-of-view range of each camera;
detecting cell corner points and acquiring the absolute pose of the camera in a calibration-plate coordinate system; and
performing nonlinear optimization and coordinate-system conversion according to the absolute pose of the camera, and estimating the relative positions and poses among the cameras.
2. The method of claim 1, further comprising:
performing locally unique identification by using codes with local distinctiveness, and arranging and assembling the codes to form the calibration plate.
3. The method of claim 2, wherein detecting the cell corner points comprises:
performing threshold segmentation of the image by using adaptive window thresholding;
extracting contours from the segmented image by using a contour-detection algorithm, and filtering out contours that do not meet preset conditions based on the maximum distance between the original curve and the simplified curve, to obtain quadrilateral candidates;
mapping the quadrilateral candidates to preset squares, dividing the image into grids, and determining the bit of each cell; and
detecting, according to the bit of each cell and using the input dictionary, each marker and its corner-point IDs.
4. The method according to claim 1, wherein the absolute pose is calculated as:

$$P^{*} = \arg\min_{P} \sum_{i \in N} \left\| \pi\!\left(P\,X_{i}\right) - x_{i} \right\|^{2}$$

where $N$ denotes the set of all 3D-to-2D point-pair indices, $X_i$ denotes the three-dimensional coordinates of visual marker $i$ in the global calibration-frame coordinate system, $P$ is the pose, to be solved, of a certain camera in the global coordinate system, $x_i$ is the two-dimensional pixel coordinate of the feature point obtained when the camera observes the 3D point, and $\pi(\cdot)$ denotes the camera projection function.
5. The method of claim 4, wherein the pose of each camera in the unified coordinate system satisfies:

$$T_i' = T_1^{-1}\,T_i$$

where $T_i$ is the absolute pose of camera $i$, $T_1$ is the pose of the reference camera, and the resulting pose of camera $i$ is $T_i'$.
6. A camera group calibration device based on a global calibration frame, characterized by comprising:
a design module configured to design the size, ID and geometric position and pose of a calibration plate for the field-of-view range of each camera;
a detection module configured to detect cell corner points and acquire the absolute pose of the camera in a calibration-plate coordinate system; and
a calculation module configured to perform nonlinear optimization and coordinate-system conversion according to the absolute pose of the camera, and to estimate the relative positions and poses among the cameras.
7. The apparatus of claim 6, further comprising:
an identification module configured to perform locally unique identification by using codes with local distinctiveness, and to arrange and assemble the codes to form the calibration plate.
8. The apparatus of claim 7, wherein the detection module is further configured to: perform threshold segmentation of the image by using adaptive window thresholding; extract contours from the segmented image by using a contour-detection algorithm, and filter out contours that do not meet preset conditions based on the maximum distance between the original curve and the simplified curve, to obtain quadrilateral candidates; map the quadrilateral candidates to preset squares, divide the image into grids, and determine the bit of each cell; and detect, according to the bit of each cell and using the input dictionary, each marker and its corner-point IDs.
9. The apparatus according to claim 6, wherein the absolute pose is calculated as:

$$P^{*} = \arg\min_{P} \sum_{i \in N} \left\| \pi\!\left(P\,X_{i}\right) - x_{i} \right\|^{2}$$

where $N$ denotes the set of all 3D-to-2D point-pair indices, $X_i$ denotes the three-dimensional coordinates of visual marker $i$ in the global coordinate system, $P$ is the pose, to be solved, of a certain camera in the global coordinate system, $x_i$ is the two-dimensional pixel coordinate of the feature point obtained when the camera observes the 3D point, and $\pi(\cdot)$ denotes the camera projection function.
10. The apparatus of claim 9, wherein the pose of each camera in the unified coordinate system satisfies:

$$T_i' = T_1^{-1}\,T_i$$

where $T_i$ is the absolute pose of camera $i$ in the global coordinate system, $T_1$ is the pose of the reference camera, and the resulting pose of camera $i$ is $T_i'$.
CN201911060536.1A 2019-11-01 2019-11-01 Camera group calibration method and device based on global calibration frame Active CN110827361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911060536.1A CN110827361B (en) 2019-11-01 2019-11-01 Camera group calibration method and device based on global calibration frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911060536.1A CN110827361B (en) 2019-11-01 2019-11-01 Camera group calibration method and device based on global calibration frame

Publications (2)

Publication Number Publication Date
CN110827361A true CN110827361A (en) 2020-02-21
CN110827361B CN110827361B (en) 2023-06-23

Family

ID=69551987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911060536.1A Active CN110827361B (en) 2019-11-01 2019-11-01 Camera group calibration method and device based on global calibration frame

Country Status (1)

Country Link
CN (1) CN110827361B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784775A (en) * 2020-07-13 2020-10-16 中国人民解放军军事科学院国防科技创新研究院 Identification-assisted visual inertia augmented reality registration method
CN111968185A (en) * 2020-09-01 2020-11-20 深圳辰视智能科技有限公司 Calibration board, nine-point calibration object grabbing method and system based on code definition
CN112734857A (en) * 2021-01-08 2021-04-30 香港理工大学深圳研究院 Calibration method for camera internal reference and camera relative laser radar external reference and electronic equipment
CN113706479A (en) * 2021-08-12 2021-11-26 北京三快在线科技有限公司 Unmanned vehicle distance measuring method and device, storage medium and unmanned vehicle
CN114415901A (en) * 2022-03-30 2022-04-29 深圳市海清视讯科技有限公司 Man-machine interaction method, device, equipment and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103824278A (en) * 2013-12-10 2014-05-28 清华大学 Monitoring camera calibration method and system
WO2018076154A1 (en) * 2016-10-25 2018-05-03 成都通甲优博科技有限责任公司 Spatial positioning calibration of fisheye camera-based panoramic video generating method
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Non-overlapping field-of-view camera pose calibration method based on point cloud feature map registration
CN108765496A (en) * 2018-05-24 2018-11-06 河海大学常州校区 Multi-view automotive surround-view driver assistance system and method
CN109522935A (en) * 2018-10-22 2019-03-26 易思维(杭州)科技有限公司 Method for evaluating the calibration results of a dual-CCD-camera measurement system
CN110047109A (en) * 2019-03-11 2019-07-23 南京航空航天大学 Camera calibration plate based on self-identifying markers and recognition and detection method therefor
US20190278288A1 (en) * 2018-03-08 2019-09-12 Ubtech Robotics Corp Simultaneous localization and mapping methods of mobile robot in motion area

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN103824278A (en) * 2013-12-10 2014-05-28 清华大学 Monitoring camera calibration method and system
WO2018076154A1 (en) * 2016-10-25 2018-05-03 成都通甲优博科技有限责任公司 Spatial positioning calibration of fisheye camera-based panoramic video generating method
US20190278288A1 (en) * 2018-03-08 2019-09-12 Ubtech Robotics Corp Simultaneous localization and mapping methods of mobile robot in motion area
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Non-overlapping field-of-view camera pose calibration method based on point cloud feature map registration
CN108765496A (en) * 2018-05-24 2018-11-06 河海大学常州校区 Multi-view automotive surround-view driver assistance system and method
CN109522935A (en) * 2018-10-22 2019-03-26 易思维(杭州)科技有限公司 Method for evaluating the calibration results of a dual-CCD-camera measurement system
CN110047109A (en) * 2019-03-11 2019-07-23 南京航空航天大学 Camera calibration plate based on self-identifying markers and recognition and detection method therefor

Non-Patent Citations (1)

Title
ZHEN LIU et al.: "A global calibration method for multiple vision sensors based on multiple targets", Measurement Science and Technology *

Cited By (9)

Publication number Priority date Publication date Assignee Title
CN111784775A (en) * 2020-07-13 2020-10-16 中国人民解放军军事科学院国防科技创新研究院 Identification-assisted visual inertia augmented reality registration method
CN111784775B (en) * 2020-07-13 2021-05-04 中国人民解放军军事科学院国防科技创新研究院 Identification-assisted visual inertia augmented reality registration method
CN111968185A (en) * 2020-09-01 2020-11-20 深圳辰视智能科技有限公司 Calibration board, nine-point calibration object grabbing method and system based on code definition
CN111968185B (en) * 2020-09-01 2024-02-02 深圳辰视智能科技有限公司 Calibration plate, nine-point calibration object grabbing method and system based on coding definition
CN112734857A (en) * 2021-01-08 2021-04-30 香港理工大学深圳研究院 Calibration method for camera internal reference and camera relative laser radar external reference and electronic equipment
CN112734857B (en) * 2021-01-08 2021-11-02 香港理工大学深圳研究院 Calibration method for camera internal reference and camera relative laser radar external reference and electronic equipment
CN113706479A (en) * 2021-08-12 2021-11-26 北京三快在线科技有限公司 Unmanned vehicle distance measuring method and device, storage medium and unmanned vehicle
CN113706479B (en) * 2021-08-12 2022-09-09 北京三快在线科技有限公司 Unmanned vehicle distance measuring method and device, storage medium and unmanned vehicle
CN114415901A (en) * 2022-03-30 2022-04-29 深圳市海清视讯科技有限公司 Man-machine interaction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110827361B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN110827361B (en) Camera group calibration method and device based on global calibration frame
Alismail et al. Automatic calibration of a range sensor and camera system
Ishikawa et al. Lidar and camera calibration using motions estimated by sensor fusion odometry
Heng et al. Leveraging image‐based localization for infrastructure‐based calibration of a multi‐camera rig
Moghadam et al. Line-based extrinsic calibration of range and image sensors
Pandey et al. Visually bootstrapped generalized ICP
Xie et al. Infrastructure based calibration of a multi-camera and multi-lidar system using apriltags
CN111815716A (en) Parameter calibration method and related device
CN112396656B (en) Outdoor mobile robot pose estimation method based on fusion of vision and laser radar
CN110763204B (en) Planar coding target and pose measurement method thereof
CN113096183B (en) Barrier detection and measurement method based on laser radar and monocular camera
Liang et al. Automatic registration of terrestrial laser scanning data using precisely located artificial planar targets
CN111964680B (en) Real-time positioning method of inspection robot
CN110702028B (en) Three-dimensional detection positioning method and device for orchard trunk
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN112464812A (en) Vehicle-based sunken obstacle detection method
Förstner Optimal vanishing point detection and rotation estimation of single images from a legoland scene
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN112348869A (en) Method for recovering monocular SLAM scale through detection and calibration
Miksch et al. Automatic extrinsic camera self-calibration based on homography and epipolar geometry
Deng et al. Joint calibration of dual lidars and camera using a circular chessboard
CN102542563A (en) Modeling method of forward direction monocular vision of mobile robot
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN111964681B (en) Real-time positioning system of inspection robot
CN113066133A (en) Vehicle-mounted camera online self-calibration method based on pavement marking geometrical characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant