CN111709999A - Calibration plate, camera calibration method and device, electronic equipment and camera system - Google Patents


Info

Publication number
CN111709999A
Authority
CN
China
Prior art keywords
calibration
camera
coding region
coding
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010401379.2A
Other languages
Chinese (zh)
Inventor
苏丹
卞新光
皮金柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN202010401379.2A
Publication of CN111709999A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30244 - Camera pose


Abstract

The application is applicable to the technical fields of image processing, computer vision, camera calibration and the like, and particularly relates to a calibration plate, a camera calibration method and device, electronic equipment and a camera system. The calibration plate comprises: a body; coding regions distributed on the body, at least one of which can be captured during camera calibration, the coding regions being used to locate the position of a calibration image on the calibration plate; and non-coding regions distributed on the body, whose line features are extracted during camera calibration to test the distortion parameters of the calibration image; wherein a Hamming distance exists between any two coding regions. The camera calibration method and device can quickly locate the position of a calibration image within the calibration plate during camera calibration, without needing to capture a complete calibration plate image.

Description

Calibration plate, camera calibration method and device, electronic equipment and camera system
Technical Field
The application belongs to the technical fields of image processing, computer vision, camera calibration and the like, and particularly relates to a calibration plate, a camera calibration method and device, electronic equipment and a camera system.
Background
Calibration plates are widely used in machine vision, image measurement, photogrammetry, three-dimensional reconstruction and the like. By photographing a calibration plate bearing a pattern array with fixed spacing and running a calibration algorithm on the images, the geometric model of a camera can be computed, yielding high-precision measurement and reconstruction results.
The calibration of camera parameters is a critical link: the accuracy of the calibration result and the stability of the algorithm directly affect the accuracy of the results the camera produces. Camera calibration is therefore a precondition for subsequent work, and improving calibration precision is a key focus of research.
At present, camera calibration usually adopts a calibration plate with a checkerboard pattern or a solid-circle array pattern. The camera must capture a complete, unoccluded image of the calibration plate; otherwise the full set of corner points cannot be extracted.
Disclosure of Invention
The embodiment of the application provides a calibration plate, a camera calibration method, a camera calibration device, electronic equipment and a camera system, and can solve the technical problem that in the camera calibration process of the related art, a complete calibration plate image needs to be shot.
In a first aspect, an embodiment of the present application provides a calibration plate, including:
a body;
the coding regions are distributed on the body, at least one or more coding regions can be collected in the camera calibration process, and the coding regions are used for positioning the position of the calibration image on the calibration plate;
the non-coding regions are distributed on the body, and line characteristics of the non-coding regions are extracted in the camera calibration process and used for testing distortion parameters of the calibration image;
wherein a Hamming distance exists between any two of the coding regions.
According to the embodiment provided by the first aspect of the application, coding regions and non-coding regions are designed on the calibration plate, and a Hamming distance exists between different coding regions, which ensures that the coding pattern of each coding region is unique on the calibration plate. The position of a calibration image within the calibration plate can therefore be quickly located during camera calibration, without needing to capture a complete calibration plate image.
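As a concrete illustration of the Hamming-distance requirement, the sketch below checks that every pair of coding-region code words differs in at least a minimum number of bit positions; the 6-bit code words are invented for illustration and are not the patent's actual codes.

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length code words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 6-bit code words for four coding regions; each bit records
# the type (first or second) of one feature square in the region.
codes = ["000110", "011001", "101010", "110101"]

# Uniqueness on the plate: every pair of coding regions is separated by a
# minimum Hamming distance, so a partial view that decodes one region
# identifies it unambiguously.
min_dist = min(hamming(a, b) for a, b in combinations(codes, 2))
print(min_dist)  # 3
```

A larger minimum distance (e.g. greater than 3 or 4, as the embodiment suggests) additionally tolerates a few misread feature squares.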
In one implementation of the first aspect, the coding region includes two types of feature squares, and the non-coding region includes one of the two types.
In an implementation manner of the first aspect, one of the two types of feature squares comprises white concentric circles on a black background, and the other comprises black concentric circles on a white background.
In an implementation of the first aspect, the feature tiles are arranged equidistant from each other.
In one implementation manner of the first aspect, the feature square comprising black concentric circles on a white background includes, from inside to outside, a first circular area, a first annular area surrounding the first circular area, and a first edge area surrounding the first annular area, wherein the first circular area and the first edge area are white and the first annular area is black;
the feature square comprising white concentric circles on a black background includes, from inside to outside, a second circular area, a second annular area surrounding the second circular area, and a second edge area surrounding the second annular area, wherein the second circular area and the second edge area are black and the second annular area is white.
In the implementation of the first aspect of the application, the line features of the straight edges on the four sides of the non-coding region are extracted during calibration to test distortion correction performance. In addition, the coding region and the non-coding region comprise concentric circle patterns, which can be used to compensate for errors caused by perspective deformation. In the prior art, schemes that calibrate a camera with a solid-circle-array calibration plate suffer from perspective deformation and cannot test distortion correction performance at the same time.
In an implementation manner of the first aspect, the boundary of the feature grid is a quadrilateral. Optionally, the quadrilateral is a square.
In an implementation manner of the first aspect, the observation angle corresponding to any two coding region centers is smaller than half of the camera field of view.
In an implementation manner of the first aspect, the number of the coding regions and the distribution of the coding regions are determined by combining the minimum distance between the camera and the calibration board and the field of view of the camera.
In an implementation of the first aspect, if the total number of coding regions is N, each coding region comprises ceil[log2(N)] second feature squares, where ceil[log2(N)] denotes the smallest integer not less than log2(N), the second feature square being the feature square comprising white concentric circles on a black background.
In an implementation manner of the first aspect, the encoding region further comprises check bits. Alternatively, the check bits may employ a second characteristic square.
In an implementation of the first aspect, the feature square comprising black concentric circles on a white background and the feature square comprising white concentric circles on a black background represent 0 and 1 in binary, respectively.
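The binary reading of the two feature-square types can be sketched as follows. The constants and the function name are illustrative assumptions; the patent only fixes the 0/1 mapping of the two square types.

```python
# Hypothetical decoding of one coding region: each detected feature
# square contributes one binary digit, and the resulting bit string
# identifies the region's position on the calibration plate.
FIRST = 0   # black concentric circles on white background: binary 0
SECOND = 1  # white concentric circles on black background: binary 1

def decode_region(squares):
    """Map an ordered sequence of detected feature-square types to an
    integer code word for the region."""
    bits = "".join(str(s) for s in squares)
    return int(bits, 2)

print(decode_region([FIRST, SECOND, SECOND, FIRST]))  # 0b0110 -> 6
```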
In a second aspect, an embodiment of the present application provides a camera calibration method, including:
acquiring a plurality of calibration images with calibration plates shot by a camera, wherein the calibration plates are the calibration plates in the first aspect and any implementation manner of the first aspect;
detecting a coding region and a non-coding region of each calibration image, and extracting coding information of the coding region and the non-coding region of each calibration image; and matching the coding information of the coding region and the non-coding region of each calibration image with a preset calibration plate, and calculating calibration data of the camera.
In an implementation manner of the second aspect, the detecting an encoded region and a non-encoded region of each of the calibration images, and extracting encoded information of the encoded region and the non-encoded region of each of the calibration images includes:
detecting a coding region and a non-coding region of each calibration image, and gradually growing from the coding region to locate the non-coding region; and detecting the position of the non-coding region by using the arrangement sequence and the coordinate values of the characteristic grids included by the coding region to obtain the coding information of the non-coding region.
In one implementation manner of the second aspect, detecting the coding region and the non-coding region of each calibration image, locating the non-coding region by gradually growing from the coding region, and detecting the position of the non-coding region by using the arrangement order and coordinate values of the feature squares included in the coding region to obtain the coding information of the non-coding region, comprises the following steps:
detecting a coding region and a non-coding region of each calibration image, and gradually growing the coding region to locate the non-coding region; detecting the position of the non-coding region by using the arrangement sequence and the coordinate values of the characteristic grids of the coding region to obtain coding information such as azimuth information of the non-coding region;
and matching the extracted coding information of the calibration board in the plurality of calibration images with the coding information of the preset calibration board, and solving calibration data according to the size information, the azimuth information and the like of the characteristic grids on the coding region of the preset calibration board if the matching is successful.
In one implementation manner of the second aspect, detecting the coding region and the non-coding region of each calibration image, locating the non-coding region by gradually growing from the coding region, and detecting the position of the non-coding region by using the arrangement order and coordinate values of the feature squares of the coding region to obtain coding information, such as azimuth information, of the non-coding region, comprises the following steps:
aiming at each calibration image, selecting three feature squares P1, P2 and P3 in a coding region, and obtaining, according to the pixel coordinates of the three feature squares, a vector V1 from feature square P1 to feature square P2 and a vector V2 from feature square P2 to feature square P3, so as to obtain the angle between each of vector V1 and vector V2 and the X axis of the pixel coordinate system, the distance between feature squares P1 and P2, and the distance between feature squares P2 and P3. The angle between vector V1 and the X axis may be denoted Angle_V1, and the angle between vector V2 and the X axis may be denoted Angle_V2. The distance between feature squares may be taken as the distance between their centers, for example the Euclidean distance; the distance between feature squares P1 and P2 may be denoted Dist_V1, and the distance between feature squares P2 and P3 may be denoted Dist_V2;
predicting, according to the angles Angle_V1 and Angle_V2 and the distances Dist_V1 and Dist_V2, the next feature square P lying on the same row or column as feature squares P1, P2 and P3, the angle Angle_V3 between the X axis and the vector V3 from feature square P3 to feature square P, and the distance Dist_V3 between feature square P3 and feature square P;
traversing all the feature squares on the calibration image, taking the feature square with the minimum distance to the predicted feature square P as the matched feature square P_match, and judging whether P_match is a valid point; if it is an invalid point, continuing to grow the next feature square;
and repeating the above three steps until all valid feature squares have been grown.
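The three growth steps above can be sketched as follows. The linear extrapolation of angle and spacing, the distance tolerance, and all function names are illustrative assumptions, not the patent's implementation.

```python
import math

def predict_next(p1, p2, p3):
    """Extrapolate the next feature-square center on the same row/column.

    Angle_V3 and Dist_V3 are predicted by continuing the trend of the
    angles and lengths of vectors V1 (P1->P2) and V2 (P2->P3); under
    perspective the spacing changes gradually along a row.
    """
    angle_v1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    angle_v2 = math.atan2(p3[1] - p2[1], p3[0] - p2[0])
    dist_v1 = math.dist(p1, p2)
    dist_v2 = math.dist(p2, p3)
    angle_v3 = angle_v2 + (angle_v2 - angle_v1)
    dist_v3 = dist_v2 + (dist_v2 - dist_v1)
    return (p3[0] + dist_v3 * math.cos(angle_v3),
            p3[1] + dist_v3 * math.sin(angle_v3))

def grow(seed, detected, tol=5.0):
    """Grow from three seed centers: match each prediction to the nearest
    detected center and stop when no detection lies within tolerance."""
    chain = list(seed)
    while True:
        p = predict_next(*chain[-3:])
        p_match = min(detected, key=lambda q: math.dist(p, q))
        if math.dist(p, p_match) > tol or p_match in chain:
            break  # invalid point: no fresh detection near the prediction
        chain.append(p_match)
    return chain

detected = [(0, 0), (10, 0), (20, 0), (30, 0), (40, 0), (100, 100)]
print(grow(detected[:3], detected))
```

The outlier at (100, 100) is never absorbed, illustrating the validity test on each matched point.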
In a third aspect, an embodiment of the present application provides a camera calibration apparatus, including:
an obtaining unit, configured to obtain a plurality of calibration images with a calibration plate captured by a camera, where the calibration plate is the calibration plate described in any one of the first aspect and the first implementation manner of the first aspect;
the calculation unit is used for detecting the coding region and the non-coding region of each calibration image and extracting the coding information of the coding region and the non-coding region of each calibration image; and matching the coding information of the coding region and the non-coding region of each calibration image with a preset calibration plate, and calculating calibration data of the camera.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the camera calibration method as described in any implementation manner of the second aspect and the second aspect thereof.
In a fifth aspect, an embodiment of the present application provides a camera system, including:
a camera, and
the calibration plate as described in any implementation manner of the first aspect and the first aspect thereof for calibrating the camera.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the camera calibration method as described in the second aspect and any implementation manner of the second aspect is implemented.
In a seventh aspect, an embodiment of the present application provides a computer program product, which, when the computer program product runs on an electronic device, causes the electronic device to execute the camera calibration method as described in any implementation manner of the second aspect and the second aspect thereof.
It is to be understood that, the beneficial effects of the second to seventh aspects may be referred to the relevant description of the first aspect, and are not repeated herein.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a schematic view of a calibration plate provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a camera calibration method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a camera calibration apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
It should be noted that the drawings provided in the following embodiments are only schematic illustrations of the basic concept of the present application; they show only the relevant components rather than the number, shape and size of components in an actual implementation, in which the form, quantity and proportion of the components may change freely and the layout may be more complex.
Fig. 1 is a schematic structural diagram of a calibration plate according to an embodiment of the present application. As shown in fig. 1, the calibration plate includes a body 100, and coding regions 101 and non-coding regions 102 disposed on the body 100.
The coding regions 101 are uniformly distributed on the body 100, so that at least one coding region 101 can be captured during camera calibration; the coding regions 101 are used for quickly locating the position of a calibration image on the calibration plate. Line features of the non-coding regions 102 are extracted during camera calibration to test the distortion parameters of the calibration image.
In the present embodiment, a Hamming distance exists between the coding regions 101, so that each coding region is unique on the calibration plate. That is, where the calibration plate includes multiple coding regions, a Hamming distance exists between any two of them.
Both the coding region 101 and the non-coding region 102 include feature squares arranged in an array on the body 100. The feature squares are arranged at equal intervals, and each feature square is bounded by a quadrilateral, e.g., a square.
The coding region 101 includes two types of feature squares: a first feature square 103 and a second feature square 104. In the example shown in fig. 1, the calibration plate includes 4 coding regions 101, each comprising a 3 x 3 array of feature squares. In other examples, the calibration plate may include one or more coding regions, and a coding region may comprise a feature-square array of another size, such as 2 x 2 or 4 x 4.
The non-coding region 102 includes one type of feature square, namely the first feature square 103. The region of the body 100 other than the coding regions 101 is the non-coding region 102.
In the design process of the calibration plate, the length and the area of the calibration plate are determined according to actual requirements, and the number of the coding regions and the distribution of the coding regions are determined by combining the parameters such as the minimum distance between the camera and the calibration plate, the field of view of the camera and the like, so that at least one complete coding region is acquired when the camera acquires a calibration image. It should be understood that, in the case of the minimum observation distance of the camera, when the distribution of the coding regions on the calibration plate is designed, the observation angle corresponding to the centers of two adjacent coding regions is less than half of the field of view of the camera.
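Under stated assumptions (pinhole geometry, and hypothetical numbers for the center spacing, minimum working distance and field of view), the half-field-of-view constraint on adjacent coding-region centers can be checked as:

```python
import math

def subtended_angle(c1, c2, z):
    """Angle (radians) between the camera's lines of sight to two coding
    region centers c1, c2 (plate coordinates, mm) viewed head-on from
    distance z (mm)."""
    v1 = (c1[0], c1[1], z)
    v2 = (c2[0], c2[1], z)
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.acos(dot / (n1 * n2))

# Hypothetical layout: adjacent centers 150 mm apart, minimum working
# distance 400 mm, horizontal field of view 60 degrees.
fov = math.radians(60.0)
angle = subtended_angle((-75.0, 0.0), (75.0, 0.0), 400.0)
print(angle < fov / 2)  # True: adjacent regions fit within half the FOV
```

Keeping this angle below half the field of view guarantees that any view at the minimum distance contains at least one complete coding region.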
Determining the size of each coding region and the number of check bits according to the obtained number of the coding regions, and arranging a first characteristic grid and a second characteristic grid in the coding regions to ensure that a Hamming distance exists between any two coding regions so as to ensure that any coding region is unique in the calibration plate. Optionally, any two coding regions have a larger hamming distance, e.g., a hamming distance greater than 3 or 4, etc., such that either coding region is unique in the calibration plate.
If the total number of coding regions is determined to be N, then ceil[log2(N)] second feature squares need to be designed in each coding region, where ceil[log2(N)] denotes the smallest integer not less than log2(N), and N is a positive integer.
During camera calibration, each feature square is encoded as 0 or 1: the first feature square 103 represents binary 0 and the second feature square 104 represents binary 1.
In some embodiments of the present application, to ensure the correctness of the encoding and thus the accuracy of the calibration, the coding region may include check bits, for example the check bits 105 in the black dashed box of the coding region shown in fig. 1; in that example the coding region includes two check bits. The number of check bits in a coding region is designed according to actual conditions: if each coding region has M check bits, M being a positive integer, and the check bits adopt second feature squares, then each coding region needs M + ceil[log2(N)] second feature squares in total. Depending on the size of the coding region, the second feature squares are arranged regularly or irregularly within the coding region, and the remaining feature squares of the coding region adopt the first feature square.
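The bit budget per coding region follows directly from this rule; the function name and example numbers below are illustrative.

```python
import math

def second_squares_needed(n_regions: int, n_check_bits: int) -> int:
    """Second feature squares per coding region: ceil(log2(N)) code bits
    to make each of N regions unique, plus M check bits."""
    return n_check_bits + math.ceil(math.log2(n_regions))

# E.g. the 4-region plate of fig. 1 with two check bits:
print(second_squares_needed(4, 2))  # 2 code bits + 2 check bits = 4
```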
In some embodiments of the present application, as shown in fig. 1, the first feature square 103 comprises black concentric circles on a white background and the second feature square 104 comprises white concentric circles on a black background. The first feature square 103 and the second feature square 104 adopt concentric-circle patterns to compensate for errors caused by perspective deformation.
In some embodiments of the present application, as shown in fig. 1, the first feature square 103 comprises, in order from the inside to the outside, a first circular area, a first annular area surrounding the first circular area, and a first edge area surrounding the first annular area, wherein the first circular area and the first edge area are white and the first annular area is black.
In some embodiments of the present application, as shown in fig. 1, the second feature square 104 comprises, in order from the inside to the outside, a second circular area, a second annular area surrounding the second circular area, and a second edge area surrounding the second annular area, wherein the second circular area and the second edge area are black and the second annular area is white.
In some embodiments of the present application, the body 100 may be made of ceramic, glass, aluminum alloy, film, or quartz. The encoded region 101 and the non-encoded region 102 may be formed on the body 100 by printing or imprinting, or may be formed by other methods.
In some embodiments of the present application, if distortion is present in a captured calibration image, the straight rows of first feature squares in the image will appear curved. The line features of the first feature squares in the non-coding region of the calibration plate are extracted from the calibration image, for example with a Hough transform or another algorithm, and the residuals of straight-line fits in different regions are calculated to test the distortion parameters of the calibration image.
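A minimal sketch of the residual test follows, using an ordinary least-squares line fit; the sample coordinates are invented for illustration, and the patent suggests Hough-based line extraction on real images.

```python
def line_fit_residual(points):
    """RMS residual of a least-squares line fit through feature-square
    centers on one row of the non-coding region. Near-zero residuals mean
    the row stayed straight; large residuals indicate uncorrected lens
    distortion bending the line."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return (sum((y - (a * x + b)) ** 2 for x, y in points) / n) ** 0.5

straight = [(0, 0.0), (10, 0.0), (20, 0.0), (30, 0.0)]
bowed = [(0, 0.0), (10, 0.8), (20, 0.8), (30, 0.0)]  # barrel-like bending
print(line_fit_residual(straight) < line_fit_residual(bowed))  # True
```

Comparing residuals across image regions localizes where the distortion model under- or over-corrects.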
In the embodiment of the application, the coding region and the non-coding region are designed on the calibration plate, wherein the patterns of different coding regions are unique on the calibration plate, so that the position of a calibration image in the calibration plate can be quickly positioned in the calibration process of a camera, and a complete calibration plate image does not need to be shot. In addition, the distortion correction performance can be tested by extracting line characteristics of straight lines on four sides of the non-coding region; the encoded and non-encoded regions comprise concentric circular patterns that can be used to compensate for errors caused by perspective distortion.
Another embodiment of the present application provides a camera calibration method, which implements camera calibration based on the calibration board, and the camera calibration method can be applied to a camera, an electronic device with a camera, or an electronic device communicatively coupled to a camera. Electronic devices include, but are not limited to: a mobile phone, a video camera, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like. The embodiment of the present application does not set any limit to the specific type of the electronic device.
The following describes a camera calibration method provided in an embodiment of the present application in detail.
Fig. 2 shows a schematic flowchart of a camera calibration method provided in the present application. The method is executed by a camera calibration device and can be realized by software and/or hardware. As shown in fig. 2, the method includes steps S210 to S220, and the implementation process and principle of each step are as follows.
S210, acquiring a plurality of calibration images of the calibration plate with any one of the embodiments, which are shot by a camera;
in step S210, the calibration plate is placed in the field of view of the camera, and imaging processing is performed at a plurality of preset distances by the camera to obtain a plurality of calibration images containing the calibration plate.
In an implementation manner of the present application, a plurality of calibration images with rich coordinate information can be obtained by adjusting at least one of a direction of a calibration plate, a direction of a camera, and a distance between the calibration plate and the camera at different positions, different angles, or different postures. The calibration image may include a complete calibration plate image or an incomplete calibration plate image.
It should be understood that, in the embodiment of the present application, the camera may also be a video camera or a camera, which may be a single imaging device, or a plurality of imaging devices may be arranged in parallel, and only the calibration plate needs to be located in a field of view of each of the plurality of imaging devices, where the type and number of the imaging devices are not limited herein.
S220, detecting the coding region and the non-coding region of each calibration image, extracting the coding information of the coding region and the non-coding region of each calibration image, matching the coding information of the coding region and the non-coding region of each calibration image with a preset calibration plate, and calculating the calibration data of the camera.
In one implementation of the present application, step S220 includes the following steps S221 and S222.
S221, detecting the coding region and the non-coding region of each calibration image, and locating the non-coding region by growing outward step by step from the coding region; the position of the non-coding region is detected using the arrangement order and coordinate values of the feature squares of the coding region, yielding coding information of the non-coding region, such as its orientation information.
Optionally, in an implementation manner of the present application, step S221 includes the following steps S2211 to S2214.
S2211: for each calibration image, select three feature squares P1, P2 and P3 in the coding region. From the pixel coordinates of the three feature squares, obtain the vector V1 from feature square P1 to feature square P2 and the vector V2 from feature square P2 to feature square P3, and further obtain the angle of each of V1 and V2 with the X axis of the pixel coordinate system, the distance between P1 and P2, and the distance between P2 and P3. The angle between V1 and the X axis may be denoted Angle_V1, and the angle between V2 and the X axis Angle_V2. The distance between feature squares may be taken as the Euclidean distance between their centers; the distance between P1 and P2 may be denoted Dist_V1, and the distance between P2 and P3 Dist_V2.
As an example of the present application, the three feature squares may be selected from the same line of the coding region, e.g., the same column or the same row, and numbered in sequence.
In other implementations of the present application, two feature squares in the same line of the coding region may instead be selected as the starting point from which to grow step by step and locate the non-coding region.
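The quantities of step S2211 can be sketched as follows; the three square centers are hypothetical pixel coordinates, with `math.atan2` and `math.hypot` standing in for the angle-to-X-axis and Euclidean-distance computations:

```python
import math

def angle_and_dist(p_a, p_b):
    """Angle (radians, relative to the pixel X axis) and Euclidean
    distance of the vector from center p_a to center p_b."""
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

# Hypothetical centers of three feature squares on one row.
P1, P2, P3 = (100.0, 200.0), (140.0, 202.0), (180.0, 204.0)
Angle_V1, Dist_V1 = angle_and_dist(P1, P2)  # vector V1: P1 -> P2
Angle_V2, Dist_V2 = angle_and_dist(P2, P3)  # vector V2: P2 -> P3
```

For evenly spaced, collinear squares the two angles and the two distances coincide, which is what the prediction step below exploits.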
S2212: from the angles Angle_V1 and Angle_V2, the distance Dist_V1 between feature squares P1 and P2, and the distance Dist_V2 between feature squares P2 and P3, predict the next feature square P on the same line as P1, P2 and P3, together with the angle Angle_V3 between the X axis and the vector V3 from feature square P3 to P, and the distance Dist_V3 between P3 and P.
Specifically, Angle_V3 and Dist_V3 for the next feature square P are predicted from Angle_V1, Angle_V2, Dist_V1 and Dist_V2 as follows:
Angle_V3=2×Angle_V2-Angle_V1;
Dist_V3=2×Dist_V2-Dist_V1。
The abscissa Px and ordinate Py of the feature square P can then be obtained as:
Px=P3_x+λ×cos(Angle_V3)×Dist_V3;
Py=P3_y+λ×sin(Angle_V3)×Dist_V3;
where λ is a control factor for the predicted feature square, and P3_x and P3_y are the coordinates of feature square P3. Setting the control factor λ keeps the predicted feature square within an effective range on the one hand, and provides a degree of robustness for calibration images with large distortion on the other.
It should be understood that the control factor λ is an empirical value. As an example of the present application, λ may be set to about 0.1 to 0.5 to cope with large distortion, and to a larger value when the distortion is small; the present application does not specifically limit the value of λ.
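The prediction of step S2212 can be sketched as below. Note that the code takes the ordinate component with sin(Angle_V3), which is what resolving the displacement Dist_V3 along the pixel axes implies; λ = 0.3 is just one value inside the suggested 0.1 to 0.5 range:

```python
import math

def predict_next(P3, Angle_V1, Angle_V2, Dist_V1, Dist_V2, lam=0.3):
    """Extrapolate the next feature square P on the same line,
    using Angle_V3 = 2*Angle_V2 - Angle_V1 and
    Dist_V3 = 2*Dist_V2 - Dist_V1."""
    Angle_V3 = 2 * Angle_V2 - Angle_V1
    Dist_V3 = 2 * Dist_V2 - Dist_V1
    # Resolve the predicted displacement along the pixel axes
    # (cos for the abscissa, sin for the ordinate), scaled by lam.
    Px = P3[0] + lam * math.cos(Angle_V3) * Dist_V3
    Py = P3[1] + lam * math.sin(Angle_V3) * Dist_V3
    return (Px, Py), Angle_V3, Dist_V3
```

With λ < 1 the prediction deliberately undershoots the extrapolated position, so the nearest detected square is still matched correctly even when lens distortion bends the row.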
S2213: traverse all the feature squares detected on the calibration image, take the feature square closest to the predicted feature square P as the matched feature square P_match, and judge whether P_match is a valid point; if it is an invalid point, continue growing the next feature square.
In the embodiments of the present application, since interference points may appear among the detected feature squares, it is necessary to check whether the matched feature square P_match is a valid point.
In an implementation of the present application, a growth-validity measure σ of the predicted feature square P is calculated as σ = norm(P_match + P2 − 2 × P3) / norm(P_match − P2). In this formula, P_match, P2 and P3 denote the coordinate values of feature squares P_match, P2 and P3 respectively, and norm(x, y) denotes the modulus (length) of the vector (x, y).
If σ is larger than a preset threshold, the matched feature square is considered an invalid point and growth continues with the next feature square. It should be understood that the preset threshold is an empirical value.
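The validity check can be sketched as follows; the threshold value 0.35 is a hypothetical choice, since the application leaves the preset threshold as an empirical value:

```python
import math

def is_valid_growth(P_match, P2, P3, sigma_thresh=0.35):
    """sigma = norm(P_match + P2 - 2*P3) / norm(P_match - P2).
    For collinear, evenly spaced squares P2, P3, P_match the
    numerator vanishes, so a small sigma indicates a valid point."""
    num = math.hypot(P_match[0] + P2[0] - 2 * P3[0],
                     P_match[1] + P2[1] - 2 * P3[1])
    den = math.hypot(P_match[0] - P2[0], P_match[1] - P2[1])
    sigma = num / den
    return sigma, sigma <= sigma_thresh
```

A match lying exactly where linear extrapolation from P2 and P3 predicts gives σ = 0; an interference point well off that line pushes σ past the threshold and is rejected.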
S2214: repeat steps S2211 to S2213 until all valid feature squares have been grown.
In the embodiments of the present application, once all valid feature squares in a calibration image have been grown, all feature squares captured in that image have been detected. The position of the non-coding region is then detected using the arrangement order and coordinate values of the feature squares of the coding region, yielding the orientation information of the non-coding region and the like, so that calculation of the calibration parameters can be completed.
It should be noted that the preset calibration plate in step S220 has the same size and the same coding information as the calibration plate photographed by the camera, and carries the size information and orientation information of its feature squares.
S222: match the coding information of the calibration plate extracted from the plurality of calibration images against the coding information of the preset calibration plate; if the matching succeeds, solve for the calibration data from the size information, orientation information and the like of the feature squares on the coding region of the preset calibration plate.
In some embodiments of the present application, the calibration data of the camera include, for example, the camera's intrinsic and extrinsic parameters and its lens distortion coefficients.
In some embodiments of the present application, the calibration data may be solved using Zhang's calibration method.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 3 shows a structural block diagram of a camera calibration apparatus provided in an embodiment of the present application, which corresponds to the camera calibration method described in the above embodiments; for convenience of description, only the portions relevant to the embodiments of the present application are shown.
Referring to fig. 3, the camera calibration apparatus includes:
an acquiring unit 31, configured to acquire a plurality of calibration images of the calibration plate provided in any one of the foregoing embodiments and captured by a camera;
a calculating unit 32, configured to detect the coding region and the non-coding region of each calibration image, extract the coding information of the coding regions and non-coding regions of the plurality of calibration images, match the coding information against a preset calibration plate, and calculate the calibration data of the camera.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides an electronic device, and as shown in fig. 4, a schematic structural diagram of the electronic device provided in an embodiment of the present application is shown. As shown in fig. 4, the electronic apparatus 4 of this embodiment includes: at least one processor 40 (only one shown in fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40, wherein the processor 40 executes the computer program 42 to implement the steps in any of the various camera calibration method embodiments described above.
The electronic device 4 may be a camera, an electronic device communicatively coupled to a camera, or the like. An electronic device communicatively coupled to a camera includes a device that itself carries a camera, or one that is physically separate from the camera but exchanges data with it, for example a computing device such as a mobile phone, desktop computer, notebook, palmtop computer, or cloud server. The electronic device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of the electronic device 4 and does not constitute a limitation thereof; it may include more or fewer components than shown, combine certain components, or use different components, such as input/output devices and network access devices.
The processor 40 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 41 may, in some embodiments, be an internal storage unit of the electronic device 4, such as a hard disk or memory of the electronic device 4. In other embodiments, the memory 41 may be an external storage device of the electronic device 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the electronic device 4. The memory 41 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been or is to be output.
An embodiment of the present application further provides a camera system, including: a camera, and a calibration plate as in any of the previous embodiments for calibrating the camera.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer memory, read-only memory (ROM), random-access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A calibration plate, comprising:
a body;
a plurality of coding regions distributed on the body, at least one of which can be captured during the camera calibration process, the coding regions being used to locate the position of the calibration image on the calibration plate; and
a plurality of non-coding regions distributed on the body, line features of which are extracted during the camera calibration process and used to test distortion parameters of the calibration image;
wherein a Hamming distance exists between the coding regions.
2. The calibration plate of claim 1, wherein the coding region comprises two types of feature squares and the non-coding region comprises one of the two types of feature squares.
3. The calibration plate of claim 2, wherein one of the two types of feature squares comprises black concentric circles on a white background and the other comprises white concentric circles on a black background.
4. The calibration plate according to claim 2 or 3, wherein the feature squares are arranged at equal distances from one another.
5. The calibration plate according to claim 2 or 3, wherein the feature square comprising black concentric circles on a white background comprises, in order from the inside outward, a first circular region, a first annular region surrounding the first circular region, and a first edge region surrounding the first annular region, the first circular region and the first edge region being white and the first annular region being black;
the feature square comprising white concentric circles on a black background comprises, in order from the inside outward, a second circular region, a second annular region surrounding the second circular region, and a second edge region surrounding the second annular region, the second circular region and the second edge region being black and the second annular region being white.
6. A camera calibration method is characterized by comprising the following steps:
acquiring a plurality of calibration images of a calibration plate captured by a camera, wherein the calibration plate is the calibration plate according to any one of claims 1 to 5;
detecting a coding region and a non-coding region of each calibration image, and extracting coding information of the coding region and the non-coding region of each calibration image; and matching the coding information of the coding region and the non-coding region of each calibration image with a preset calibration plate, and calculating calibration data of the camera.
7. The camera calibration method according to claim 6, wherein the detecting of the coding region and the non-coding region of each calibration image comprises:
detecting a coding region and a non-coding region of each calibration image, and gradually growing from the coding region to locate the non-coding region; and detecting the position of the non-coding region by using the arrangement sequence and the coordinate values of the characteristic grids included by the coding region to obtain the coding information of the non-coding region.
8. A camera calibration device is characterized by comprising:
an acquisition unit for acquiring a plurality of calibration images taken by a camera with a calibration plate, the calibration plate being as claimed in any one of claims 1 to 5;
the calculation unit is used for detecting the coding region and the non-coding region of each calibration image and extracting the coding information of the coding region and the non-coding region of each calibration image; and matching the coding information of the coding region and the non-coding region of each calibration image with a preset calibration plate, and calculating calibration data of the camera.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the camera calibration method as claimed in claim 6 or 7 when executing the computer program.
10. A camera system, comprising:
a camera, and
calibration plate according to any one of claims 1 to 5 for calibrating said camera.