CN113052911A - Calibration board, camera calibration method and device - Google Patents

Calibration board, camera calibration method and device

Info

Publication number
CN113052911A
CN113052911A (application CN201911376806.XA)
Authority
CN
China
Prior art keywords
target
column
features
dividing
calibration plate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911376806.XA
Other languages
Chinese (zh)
Inventor
单一 (Shan Yi)
秦勇 (Qin Yong)
赵春宇 (Zhao Chunyu)
陈元吉 (Chen Yuanji)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikrobot Technology Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikrobot Technology Co Ltd
Priority: CN201911376806.XA
Publication: CN113052911A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding

Abstract

The calibration plate comprises a substrate, a non-target feature pattern array printed on the surface of the substrate, and at least two target feature patterns which differ from the non-target feature patterns and are distributed in different rows and different columns of the non-target feature pattern array. A camera calibration method based on the calibration plate is further disclosed: the effective range of the calibration plate image is determined according to the target feature information in the calibration plate image; world coordinate point information corresponding to the features in the effective range is calculated; and calibration calculation for the camera is performed according to the feature information of the calibration plate image and the corresponding world coordinate point information. When the calibration plate is used for calibration, the effective range of the calibration plate image can still be determined by using the target features even if the calibration plate image is incomplete, which relaxes the image requirements for camera calibration.

Description

Calibration board, camera calibration method and device
Technical Field
The invention relates to the field of machine vision, in particular to a calibration plate and a camera calibration method.
Background
In image measurement and machine vision applications, in order to determine the correlation between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. In most cases these parameters must be obtained through experiment and computation, and the process of solving for them is called camera calibration. In image measurement and machine vision applications, calibration of camera parameters is a critical step: the accuracy of the calibration results and the stability of the algorithm directly affect the accuracy of the results the camera produces. Camera calibration is therefore a precondition for the subsequent work, and improving calibration accuracy is a key focus of research.
In machine vision, image measurement, photogrammetry, three-dimensional reconstruction and similar applications, a geometric model of camera imaging is needed to correct lens distortion, determine the conversion between physical size and pixels, and determine the relationship between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in the image. The camera photographs a flat plate bearing a fixed-pitch pattern array, and the geometric model of the camera can be obtained through a calibration algorithm, yielding high-precision measurement and reconstruction results. The flat plate bearing the fixed-pitch pattern array is the calibration plate (Calibration Target).
Common calibration board patterns include solid circle array patterns and checkerboard patterns. When a camera is calibrated with a calibration plate bearing such an existing pattern, a complete image of the whole calibration plate must be acquired; an incomplete calibration plate image cannot be used for camera calibration, which increases the difficulty of calibrating the camera.
Disclosure of Invention
The invention provides a calibration plate, which is used for realizing camera calibration based on an incomplete calibration plate image.
The invention provides a calibration plate, which comprises,
a substrate,
a non-target feature pattern array printed on a surface of a substrate,
and at least two target feature patterns which are different from the non-target feature patterns and distributed in different rows and different columns of the non-target feature pattern array.
The invention provides a camera calibration method, which comprises the following steps,
acquiring a calibration plate image containing the calibration plate, wherein the calibration plate image at least comprises a target feature,
determining an effective range of the calibration plate image according to target characteristic information in the calibration plate image, wherein the effective range comprises a row segmentation area and/or a column segmentation area;
calculating world coordinate point information corresponding to the features in the effective range, wherein the features comprise target features and/or non-target features;
and performing calibration calculation for the camera according to the feature information of the calibration plate image and the corresponding world coordinate point information.
Preferably, the determining the effective range of the calibration board image according to the target feature information in the calibration board image includes:
for a calibration plate image containing a portion of the calibration plate, wherein the calibration plate image includes at least a target feature located in a central region,
at least a row division threshold for dividing each row of the calibration plate array pattern, a column division threshold for dividing each column of the calibration plate array pattern, and a central region of the calibration plate are determined based on imaging information of a target feature located in the central region of the calibration plate,
utilizing the row segmentation threshold and the column segmentation threshold to segment the features in the calibration plate image to obtain more than one segmentation area,
determining a valid range based on the features in each segmented region;
and acquiring the characteristic points from the effective range of the central area of the calibration plate.
Preferably, the determining at least a row division threshold for dividing each row of the calibration plate array pattern and a column division threshold for dividing each column of the calibration plate array pattern based on imaging information of a target feature located in a central region of the calibration plate includes,
obtaining a front view of the calibration plate image through perspective transformation,
identifying at least two target features in the front view that are neither in the same row nor in the same column,
determining the positions of the identified target features, and taking the area within a certain range of those positions as the central region of the calibration plate;
determining the row segmentation threshold and the column segmentation threshold according to the geometric relationship among the positions of the target features;
the dividing of the features in the calibration plate image by using the row segmentation threshold and the column segmentation threshold to obtain more than one divided region includes,
dividing the front view into more than one divided region according to the row segmentation threshold and the column segmentation threshold;
the determining of the effective range based on the features in each divided region includes determining the effective range according to whether each divided region contains the centroid of a feature;
the obtaining of the feature points from the effective range of the central region of the calibration plate includes selecting the features in the effective range, mapping the selected features back to coordinate points in the calibration plate image through the inverse of the perspective transformation, and taking those coordinate points as the feature points.
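The division-and-validity step above can be sketched in a few lines. This is an illustrative stand-in, not the patent's implementation: the grid `anchor` and the row/column pitches are assumed quantities that would in practice be derived from the target-feature geometry.

```python
# Sketch of the segmentation step: given feature centroids in the front
# view, a grid anchor, and the row/column division thresholds (grid
# pitches), assign each centroid to a grid cell; the cells that contain
# a feature centroid form the valid range.

def segment_features(centroids, anchor, row_pitch, col_pitch):
    """Map each (x, y) centroid to a (row, col) cell index."""
    cells = {}
    ax, ay = anchor
    for x, y in centroids:
        col = round((x - ax) / col_pitch)
        row = round((y - ay) / row_pitch)
        cells.setdefault((row, col), []).append((x, y))
    return cells

def valid_range(cells):
    """A cell belongs to the valid range if it contains exactly one centroid."""
    return sorted(k for k, v in cells.items() if len(v) == 1)

centroids = [(10.2, 10.1), (30.0, 9.8), (50.1, 30.2)]
cells = segment_features(centroids, anchor=(10, 10), row_pitch=20, col_pitch=20)
print(valid_range(cells))  # each centroid lands in its own cell
```

Cells that receive no centroid (occluded or outside the partial image) are simply absent from the result, which is how an incomplete calibration plate image still yields a usable effective range.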
Preferably, said obtaining, through perspective transformation, a front view of the current calibration plate image comprises,
determining the rows and columns of a rectangular array according to the contour of the array pattern of the calibration plate;
constructing a quadrangle, with sides parallel to the rows and columns of the rectangular array, that accommodates all features in the calibration plate image;
if a vertex of the quadrangle is not in the calibration board image, screening out, at the image edges adjacent to that vertex, the outermost row of features and the outermost column of features closest to the image edge, constructing a row parallel line parallel to the outermost row of features on its outer side, constructing a column parallel line parallel to the outermost column of features on its outer side, and taking the intersection of the row parallel line and the column parallel line as the vertex of the quadrangle that is not in the calibration board image, to obtain a rectangle;
and taking the vertices of the rectangle as source points, setting target points corresponding to the source points after perspective transformation, and perspective-transforming the calibration plate image within the rectangle into a calibration plate front view.
According to the invention, target feature patterns are embedded as marks in the rows and columns of the calibration plate array. When the calibration plate is used for calibration, even if the calibration plate image is incomplete, the effective range of the calibration plate image can still be determined using the target features, and the spatial coordinate information of the calibration plate image is obtained based on the feature points within the effective range, which relaxes the image requirements for camera calibration. The target features differ from the non-target features in the calibration plate array, making them easy to identify. Furthermore, the centroid of each feature is determined by a tangent method, which improves the calculation accuracy of camera calibration.
Drawings
FIG. 1 is a schematic view of an embodiment of a solid circle array pattern calibration plate of the present application.
Fig. 2 is a schematic flow chart of camera calibration based on the calibration board shown in fig. 1.
FIG. 3 is an exemplary illustration of a partial calibration plate image in an embodiment of the invention;
FIG. 4 is an exemplary illustration of another partial calibration plate image in an embodiment of the present invention;
FIG. 5 is a front view of a calibration plate in an alternative embodiment of an embodiment of the present invention;
FIG. 6 is a schematic diagram of a minimum bounding rectangle in an embodiment of the present invention;
FIG. 7 is a diagram illustrating feature points screened by segmenting a calibration plate image into a plurality of segmented regions according to a determined target feature in an embodiment of the present invention;
FIG. 8 is a schematic diagram of an arrangement of target features.
FIG. 9 is a schematic flow chart of camera calibration based on a calibration plate pattern array including non-target features and target features.
Fig. 10 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The calibration plate pattern of the present invention includes at least two target features that are neither in the same row nor in the same column and that are distinct from the array pattern of the calibration plate; the target features have a pattern different from the non-target features and are located in a central region of the calibration plate. When the calibration plate is used for camera calibration, the effective range of the calibration plate image is determined according to the feature information in the calibration plate image, feature points are selected from the features within the effective range, the positions of the feature points in the world coordinate system are calculated, and the intrinsic and extrinsic parameters of the camera are calibrated with Zhang's calibration algorithm (Zhengyou Zhang) from the feature information of the calibration plate image and the corresponding world coordinate point information.
Referring to fig. 1, fig. 1 is a schematic view of an embodiment of a solid circle array pattern calibration plate according to the present application. The calibration plate comprises a solid circle array whose solid circles are non-target features; at least two target features that are neither in the same row nor in the same column are arranged in the central area of the calibration plate, and the pattern of each target feature is a solid circle with a radius larger than that of the non-target solid circles. The embodiment shown in fig. 1 adopts a 7 × 11 dot array: five large dots are selected as target features and distributed in a certain order in the central area of the calibration plate, and the other, smaller dots are non-target features. In other embodiments, the target features may also be triangles, squares, or other arbitrarily defined marks that are distinct from the calibration plate body pattern (the other dots). The arrangement of the five large dots provides information beyond their shape, such as a row division threshold for dividing each row of the calibration plate array, a column division threshold for dividing each column, and the calibration plate center coordinates. The non-target features and the target features are printed on a surface of the substrate.
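The 7 × 11 layout of the embodiment can be sketched as a grid of radii. The five target positions below are illustrative assumptions only (the patent's exact arrangement is given in fig. 1/fig. 8, which is not reproduced here); the radii are likewise arbitrary, subject only to large > small.

```python
# Hypothetical sketch of the 7x11 dot-array layout: small dots are
# non-target features; five enlarged dots near the centre (positions
# assumed for illustration) serve as target features.

SMALL_R, LARGE_R = 1.0, 1.6  # assumed radii, large > small

def make_layout(rows=7, cols=11,
                targets=((2, 4), (3, 5), (4, 6), (2, 6), (4, 4))):
    layout = [[SMALL_R] * cols for _ in range(rows)]
    for r, c in targets:
        layout[r][c] = LARGE_R
    return layout

layout = make_layout()
large = sum(r == LARGE_R for row in layout for r in row)
print(large, 7 * 11 - large)  # 5 target dots, 72 non-target dots
```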
A method of performing camera calibration based on the calibration board image will be described below by taking the calibration board shown in fig. 1 as an example. Referring to fig. 2, fig. 2 is a schematic flow chart of camera calibration based on the calibration board shown in fig. 1. The camera calibration method comprises the following steps:
and e1, acquiring a calibration plate image.
The calibration board image is captured and saved. In particular, the calibration board image may contain only part of the calibration board; see fig. 3, which illustrates a situation where the calibration board image is incomplete. The incomplete calibration plate image includes at least the target features in the central region.
And e2, carrying out binarization processing on the calibration board image, and extracting the contour of each dot in the dot matrix of the calibration board.
The process of step e2 can be implemented by using the prior art, and will not be described herein.
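As noted, step e2 uses standard techniques (in practice a library such as OpenCV, via its threshold and findContours functions). The dependency-light sketch below is an illustrative stand-in that thresholds the image and computes a centroid per connected blob, which is the information the later steps consume.

```python
# Minimal stand-in for step e2: binarize, then find the centroid of
# each 4-connected foreground blob (a crude contour/centroid step).
import numpy as np

def binarize(img, thresh=128):
    """Dark dots on a light board -> boolean foreground mask."""
    return img < thresh

def centroids(mask):
    """Centroid of each 4-connected foreground blob via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    out, next_label = [], 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        next_label += 1
        stack, pixels = [seed], []
        labels[seed] = next_label
        while stack:
            r, c = stack.pop()
            pixels.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1] \
                        and mask[nr, nc] and not labels[nr, nc]:
                    labels[nr, nc] = next_label
                    stack.append((nr, nc))
        pts = np.array(pixels, dtype=float)
        out.append(tuple(pts.mean(axis=0)))
    return out

img = np.full((9, 9), 255, dtype=np.uint8)
img[2:5, 2:5] = 0   # one square "dot"
img[6:8, 6:8] = 0   # another
print(sorted(centroids(binarize(img))))
```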
And e3, determining the source points and target points of the perspective transformation according to the contour of each dot in the dot matrix of the calibration plate, and perspective-transforming the calibration plate image into the calibration plate front view.
In one embodiment, the calibration plate image is binarized, the pattern contours of at least four features in the calibration plate image are extracted, and the centroid of each pattern contour is calculated; the at least four features satisfy: the shape formed by connecting the four features in sequence is a rectangle;
and taking the centroids as source points, setting target points corresponding to the source points after perspective transformation, and perspective-transforming the calibration plate image within the rectangle into a calibration plate front view.
In a second embodiment, in this step, the source point and the target point are obtained by:
determining rows and columns of the dot matrix according to the outlines of all dots in the dot matrix of the calibration board extracted in the step e 2;
constructing rows and columns parallel to the dot matrix to form a quadrilateral that can accommodate all dots in the calibration plate image;
when a vertex in the quadrangle is not in the calibration board image, at the edge of the image adjacent to the vertex in the calibration board image, an outermost row of dots and an outermost column of dots closest to the edge of the image are screened, a row parallel line parallel to the outermost row of dots is formed outside the outermost row of dots, a column parallel line parallel to the outermost column of dots is formed outside the outermost column of dots, and the intersection of the row parallel line and the column parallel line is defined as a vertex in the quadrangle which is not in the calibration board image.
An exemplary illustration of a partial calibration plate image is shown in fig. 3. In this figure, the calibration plate is located in the lower middle of the calibration plate image for which the rows and columns parallel to the dot matrix therein construct a quadrilateral that can accommodate all the dots in the calibration plate image, as shown by the square box in fig. 3. Where the lower left point of the box is not in the calibration plate image, this point may be determined by the following method.
And screening an outermost row of dots and an outermost column of dots which are closest to the image edge, wherein the outermost row of dots is a lowermost row of dots extending from a lower right point to a lower edge of the calibration board image in fig. 3, and the outermost column of dots is a leftmost column of dots extending from an upper left point to the lower edge of the calibration board image in fig. 3. In fig. 3, a row parallel line parallel to the lowermost row of dots extending from the lower right point to the lower edge of the calibration board image is formed on the outer side of the lowermost row of dots, in fig. 3, a column parallel line parallel to the column of dots is formed on the outer side of the leftmost column of dots extending from the upper left point to the lower edge of the calibration board image, and the intersection of the row parallel line and the column parallel line is defined as the lower left point of the block.
It should be further noted that, in fig. 3, two dots of the lowermost row extending from the lower right point to the lower edge of the calibration board image appear in the image, so a line parallel to that row, i.e. the line from the lower right point to the lower left point, can be drawn because two points determine a line. However, if only one dot of the lowermost row extending from the lower right point to the lower edge of the calibration board image appears in the image, a parallel line cannot be drawn from a single point; in that case this dot is discarded, another row closest to the lower edge in which more than one dot appears is selected, and the parallel line is drawn parallel to that row instead.
Another exemplary illustration of a partial calibration plate image is shown in fig. 4. In this figure, the calibration plate is located at the lower left of the calibration plate image for which the row and column configuration parallel to the matrix of dots therein accommodates a quadrilateral of all the dots in the calibration plate image, as shown by the square box in fig. 4. Wherein, the upper left point and the lower left point of the square frame are not in the calibration board image, and the two points can be determined by the following method.
For the left edge of the calibration board image, an outermost row of dots and an outermost column of dots are screened, the outermost row of dots is the top row of dots extending from the upper right point in fig. 4 to the left edge of the calibration board image, and the outermost column of dots is the white column of dots in fig. 4 (the white dots in fig. 4 are only used for explaining and distinguishing other columns of dots, and do not represent that the dots are white per se). The outer side of the uppermost row of dots extending to the left edge of the calibration board image from the upper right point in fig. 4 is constructed with row parallel lines parallel to the row of dots, the outer side of a white column of dots in fig. 4 is constructed with column parallel lines parallel to the column of dots, and the intersection point of the row parallel lines and the column parallel lines is taken as the upper left point of the box.
For the lower edge of the calibration board image, an outermost row of dots and an outermost column of dots are screened, wherein the outermost row of dots is the row of dots on the lowest side, extending from the lower right point in fig. 4 to the lower edge of the calibration board image, and the outermost column of dots is the column of dots in white in fig. 4. In fig. 4, the outer side of the lowermost row of dots extending to the lower edge of the calibration board image from the lower right point is provided with row parallel lines parallel to the row of dots, the outer side of the white row of dots in fig. 4 is provided with column parallel lines parallel to the column of dots, and the intersection point of the row parallel lines and the column parallel lines is taken as the lower left point of the box.
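The missing-vertex construction described for figs. 3 and 4 reduces to intersecting two lines: one through (or parallel to) the outermost row of dots, one through the outermost column. A minimal sketch with assumed toy coordinates:

```python
# Sketch of recovering a vertex that lies outside the image: the missing
# corner is the intersection of a line parallel to the outermost row of
# dots with a line parallel to the outermost column of dots.

def line_through(p, q):
    """Line a*x + b*y = c through two points (two points determine a line)."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersect(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

row_parallel = line_through((0.0, 10.0), (8.0, 10.0))   # horizontal line y = 10
col_parallel = line_through((2.0, 0.0), (2.0, 6.0))     # vertical line x = 2
print(intersect(row_parallel, col_parallel))  # (2.0, 10.0)
```

In a real image the two fitted lines are generally not axis-aligned; the same intersection formula applies as long as the lines are not parallel (det != 0).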
After the quadrangle is determined by the above method, the vertices of the quadrangle are taken as source points and the corresponding perspective-transformation target points are set, so that the calibration plate image within the quadrangle can be perspective-transformed into the calibration plate front view.
FIG. 5 illustrates a front view of a calibration plate in an alternative embodiment, described with reference to figs. 3 and 4.
Firstly, for the upper left point of the quadrangle, its current position is taken as the source point of the perspective transformation, and the coordinate origin of the calibration board front view (namely, the upper left corner of the front view) is taken as the corresponding target point; that is, the target point coordinates of the upper left point of the quadrangle are (0, 0);
for the upper right point of the quadrangle, its current position is taken as the source point, the distance (for example, a) between the upper left point and the upper right point of the quadrangle is set as the width of the perspective transformation target area, and, constrained by this width, the target point of the upper right point is placed on the x axis of the calibration board front view; that is, the target point coordinates of the upper right point are (a, 0);
for the lower left point of the quadrangle, its current position is taken as the source point, the distance (for example, b) between the upper left point and the lower left point of the quadrangle is set as the height of the perspective transformation target area, and, constrained by this height, the target point of the lower left point is placed on the y axis of the calibration board front view; that is, the target point coordinates of the lower left point are (0, b);
for the lower right point of the quadrangle, its current position is taken as the source point, and, constrained by the height and width of the perspective transformation target area, the target point of the lower right point is placed at the point of the calibration board front view with the same abscissa as the upper right point and the same ordinate as the lower left point; that is, the target point coordinates of the lower right point are (a, b).
The width a and the height b can be obtained from the following equations:
a = x_rightTop - x_leftTop
b = y_leftBottom - y_leftTop
where x_leftTop and x_rightTop are the abscissas of the upper left and upper right source points in the calibration board image, and y_leftTop and y_leftBottom are the ordinates of the upper left and lower left source points in the calibration board image.
After the source points and target points of the four vertices of the quadrangle are determined, the calibration plate image within the quadrangle can be perspective-transformed into the calibration plate front view. The specific process and algorithm for perspective-transforming the calibration plate image into the calibration plate front view can be implemented with the prior art and will not be described here.
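The final perspective transformation of step e3 amounts to solving the 3 × 3 homography H that maps the four source vertices onto the target rectangle (0,0), (a,0), (0,b), (a,b), then warping with it. OpenCV's `getPerspectiveTransform`/`warpPerspective` do exactly this; the sketch below solves the same 8-unknown linear system with plain NumPy for transparency (the source coordinates are assumed toy values).

```python
# Solve the homography H (with h33 = 1) from 4 point correspondences,
# then map points through it: [u, v, 1]^T ~ H [x, y, 1]^T.
import numpy as np

def homography(src, dst):
    A, y = [], []
    for (x, v), (u, w) in zip(src, dst):
        A.append([x, v, 1, 0, 0, 0, -u * x, -u * v]); y.append(u)
        A.append([0, 0, 0, x, v, 1, -w * x, -w * v]); y.append(w)
    h = np.linalg.solve(np.array(A, float), np.array(y, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    x, y_, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y_ / w)

a, b = 100.0, 60.0
src = [(12, 9), (118, 20), (5, 75), (110, 90)]   # quadrangle vertices (assumed)
dst = [(0, 0), (a, 0), (0, b), (a, b)]           # front-view rectangle corners
H = homography(src, dst)
print([warp_point(H, s) for s in src])  # each source vertex maps to its corner
```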
And e4, performing binarization processing on the front view of the calibration plate, and obtaining the radius and center coordinates of each dot in the front view, so that circular-arc contours at the image edge can also be extracted, improving coverage of the image edge.
In this step, the minimum bounding rectangle method is used to obtain the center coordinates of each complete and incomplete dot in the front view of the calibration board. The method encloses a dot with a minimum bounding rectangle in which at least two opposite sides are tangent to the enclosed dot; whether the dot is complete or incomplete, its radius can then be obtained from the distance between the two tangent opposite sides, and its center coordinates from the positions of the two tangent points. For incomplete dots, the center coordinates obtained by the minimum bounding rectangle method are more accurate than those obtained by the existing centroid method.
Fig. 6 shows a schematic diagram of a minimum bounding rectangle. In the embodiment of fig. 6, the minimum bounding rectangle encloses a partial dot. Such dots usually lie at the edge of the calibration plate image and become incomplete after the perspective transformation into the calibration plate front view. As shown in fig. 6, the incomplete dot is truncated at edge d, but its center does not fall outside the edge of the front view. If two opposite sides of the minimum bounding rectangle cannot be tangent to the enclosed incomplete dot, for example if sides a and c cannot be tangent to it, the center of the incomplete dot falls outside the edge of the front view and cannot be determined, so the dot is discarded. In the minimum bounding rectangle drawn for the incomplete dot in fig. 6, sides a, b and c are all tangent to the incomplete dot, the side opposite b lies on edge d, and sides a and c are the two tangent opposite sides. The distance between the tangent points A and C of sides a and c is the diameter of the incomplete dot, so half of that distance is determined as its radius, and the position of the midpoint between the tangent points is determined as its center point O.
Fig. 6 also shows the center O' of the incomplete dot obtained by the centroid method, which deviates from the true center because part of the dot pattern is missing, whereas the center O obtained by the minimum bounding rectangle method of the embodiment of the present invention is the true center of the incomplete dot. The minimum bounding rectangle method therefore improves the accuracy of locating dot centers in the calibration board, which in turn improves the accuracy of the row division threshold, the column division threshold and the calibration board center coordinates, so that the circular centroids extracted from the image can be matched more accurately with the physical dots in the corresponding world coordinate system, benefiting the accuracy of camera calibration.
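The geometric point made above can be checked numerically with assumed toy data: for a dot truncated by the image edge, the two tangent sides perpendicular to the cut still span the full diameter, so the center and radius remain recoverable, while the centroid of the visible pixels is biased toward the retained half.

```python
# Toy illustration of the minimum-bounding-rectangle idea for a
# truncated dot (all values assumed): a disk of radius 15 centred at
# (row 30, col 8) is cut off by the image's left edge (col 0).
import numpy as np

rows, cols = np.mgrid[0:60, 0:40]
true_center, true_r = (30.0, 8.0), 15.0
disk = (rows - true_center[0]) ** 2 + (cols - true_center[1]) ** 2 <= true_r ** 2

rr, cc = np.nonzero(disk)
radius_est = (rr.max() - rr.min()) / 2.0          # vertical tangents span the diameter
center_est = ((rr.max() + rr.min()) / 2.0,        # midpoint between tangent rows
              cc.max() - radius_est)              # right tangent minus radius
centroid = (rr.mean(), cc.mean())                 # biased toward the visible half
print(radius_est, center_est, round(centroid[1], 2))
```

The bounding-box estimate recovers the true center and radius exactly here, while the centroid's column coordinate is pulled to the right of the true value, mirroring the O versus O' comparison in fig. 6.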
Step e5: in the front view of the calibration board, compare the radii of the dots, screen out the five large dots whose radii are larger than those of the other dots as target features, and take the other dots as non-target features.
In an alternative embodiment, five large dots are screened out by the following method.
In the front view of the calibration board, the determined radii of all complete and incomplete dots are summed, and the sum is divided by the total number of those dots; the resulting average radius is used as the radius screening threshold. The radius of every complete and incomplete dot is then compared with this threshold, and five dots whose radii exceed it are taken as the dots to be selected. Because of measurement error, more than five dots may exceed the threshold; in that case the five dots with the largest radii are taken as the dots to be selected.
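The screening step can be sketched in a few lines (plain Python; the `(center, radius)` data layout and the synthetic dot list are illustrative assumptions):

```python
def screen_five_large_dots(dots):
    """dots: list of (center, radius) for all complete and incomplete dots
    in the front view. The mean radius is the radius screening threshold;
    dots above it are candidates, and if measurement error yields more than
    five candidates, only the five with the largest radii are kept."""
    threshold = sum(r for _, r in dots) / len(dots)
    candidates = [d for d in dots if d[1] > threshold]
    candidates.sort(key=lambda d: d[1], reverse=True)
    return candidates[:5]

# 20 small dots of radius 5 and six oversized dots of radius 9..14:
dots = [((0, 0), 5.0)] * 20 + [((i, i), 9.0 + i) for i in range(6)]
large = screen_five_large_dots(dots)
```

Here six dots exceed the mean-radius threshold, so only the five largest survive, matching the tie-breaking rule described above.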
After the five dots to be selected are obtained, the distance between the center of each of them and the coordinate origin of the front view of the calibration board (located at the upper-left corner of the image, with coordinates (0, 0)) is calculated. The dot to be selected that is closest to the origin is determined as the second target feature, and the center coordinates of this large dot are recorded.
The dot to be selected located below the second target feature is determined as the first target feature, and another dot to be selected (the one located to the right of the second target feature) is determined as the third target feature; the center coordinates of the large dots of the first and third target features are recorded.
The dot to be selected closest to the first target feature (the one below and to the right of the first target feature) is determined as the fourth target feature, and another dot to be selected (the one below and to the right of the third target feature) is determined as the fifth target feature; the center coordinates of the large dots of the fourth and fifth target features are recorded.
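The labelling of the five candidate dots can be sketched as follows (plain Python; the geometric rules "below", "to the right of" and the Fig. 7-style test layout are assumptions inferred from the description, not the patent's exact procedure):

```python
import math

def label_five_targets(centers):
    """centers: the five big-dot centers (x, y), origin at the front view's
    top-left. Returns {label: center}: label 2 is the dot closest to the
    origin, 1 lies below it, 3 to its right, 4 is the remaining dot nearest
    to 1, and 5 is the dot left over."""
    second = min(centers, key=lambda p: math.hypot(p[0], p[1]))
    rest = [p for p in centers if p != second]
    # "below": larger y (image y grows downward), most nearly the same x
    first = min((p for p in rest if p[1] > second[1]),
                key=lambda p: abs(p[0] - second[0]))
    # "to the right": larger x, most nearly the same y
    third = min((p for p in rest if p != first and p[0] > second[0]),
                key=lambda p: abs(p[1] - second[1]))
    remaining = [p for p in rest if p not in (first, third)]
    fourth = min(remaining, key=lambda p: math.dist(p, first))
    fifth = next(p for p in remaining if p != fourth)
    return {1: first, 2: second, 3: third, 4: fourth, 5: fifth}

labels = label_five_targets([(100, 100), (100, 200), (200, 100),
                             (200, 200), (300, 200)])
```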
Step e6: determine the effective dot range according to the target features.
In an alternative embodiment, step e6 further includes the following process.
As can be seen with reference to fig. 7, in the calibration plate front view:
One quarter of the difference between the ordinates of the center points of the third and fourth target features is used as the row segmentation threshold, i.e.
row_thre = (pos_3.y - pos_4.y) / 4
where row_thre is the row segmentation threshold, pos_3.y is the ordinate of the center point of the third target feature, pos_4.y is the ordinate of the center point of the fourth target feature, and 4 can be regarded as the number of row segmentation thresholds contained between these two ordinates.
A first transverse division line (the transverse broken line immediately above the third target feature) is drawn one row segmentation threshold above the center point of the third target feature (the reference point for row segmentation), and a second transverse division line (the transverse broken line immediately below the third target feature) is drawn one row segmentation threshold below that center point. The region between the first and second transverse division lines is the row partition in which the third target feature is located; since at least the third target feature falls within it, this row partition is marked as an effective row partition in which dots are present.
Further, a third transverse division line (the transverse broken line above and next but one to the third target feature) is drawn two row segmentation thresholds above the first transverse division line, and it is judged whether any dot falls into the row partition between the third and first transverse division lines. If so, that row partition is marked as an effective row partition with dots; if not, it is marked as an ineffective row partition. In this way, transverse division lines continue to be drawn toward the upper side of the front view of the calibration board, and each newly formed row partition is judged as effective or not.
Whether a dot falls into a row partition can be judged by comparing the ordinate of the dot with the ordinates of the upper and lower transverse division lines of that partition: if the ordinate of some dot falls between them, a dot falls into the row partition; if no dot's ordinate falls between them, no dot falls into the row partition.
Similarly, a fourth transverse division line (the transverse broken line immediately above the fourth target feature) is drawn one row segmentation threshold above the center point of the fourth target feature (the reference point for row segmentation), and a fifth transverse division line (the transverse broken line immediately below the fourth target feature) is drawn one row segmentation threshold below that center point. The region between the fourth and fifth transverse division lines is the row partition in which the fourth target feature is located; since at least the fourth target feature falls within it, this row partition is marked as an effective row partition in which dots are present.
Further, a sixth transverse division line (the transverse broken line below and next but one to the fourth target feature) is drawn two row segmentation thresholds below the fifth transverse division line, and it is judged whether any dot falls into the row partition between the sixth and fifth transverse division lines. If so, that row partition is marked as an effective row partition with dots; if not, it is marked as an ineffective row partition. In this way, transverse division lines continue to be drawn toward the lower side of the front view of the calibration board, and each newly formed row partition is judged as effective or not.
Through the above process, all the effective row partitions with dots can be determined.
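The row-partition scan above can be sketched as follows (plain Python), simplified to a single reference row, whereas the scheme itself scans upward from the third target feature and downward from the fourth. Each partition spans two row segmentation thresholds, and the scan stops at the first empty partition on each side:

```python
def effective_row_partitions(dot_ys, ref_y, row_thre):
    """dot_ys: ordinates of all dot centers; ref_y: ordinate of the
    reference target feature; row_thre: row segmentation threshold.
    Returns the effective row partitions as (lo, hi) ordinate ranges."""
    def has_dot(lo, hi):
        return any(lo <= y < hi for y in dot_ys)

    parts = [(ref_y - row_thre, ref_y + row_thre)]  # reference target's partition
    top = ref_y - row_thre
    while has_dot(top - 2 * row_thre, top):         # extend upward
        parts.insert(0, (top - 2 * row_thre, top))
        top -= 2 * row_thre
    bottom = ref_y + row_thre
    while has_dot(bottom, bottom + 2 * row_thre):   # extend downward
        parts.append((bottom, bottom + 2 * row_thre))
        bottom += 2 * row_thre
    return parts

# Dot rows 20 px apart; reference row at y = 100, threshold 10.
parts = effective_row_partitions([60, 80, 100, 120], ref_y=100, row_thre=10)
```

The column-partition scan is the same with abscissas in place of ordinates and the column segmentation threshold in place of the row segmentation threshold.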
One quarter of the difference between the abscissas of the center points of the first and fifth target features is used as the column segmentation threshold, i.e.
col_thre = (pos_1.x - pos_5.x) / 4
where col_thre is the column segmentation threshold, pos_1.x is the abscissa of the center point of the first target feature, pos_5.x is the abscissa of the center point of the fifth target feature, and 4 can be regarded as the number of column segmentation thresholds contained between these two abscissas.
A first longitudinal division line (the longitudinal dotted line immediately to the left of the first target feature) is drawn one column segmentation threshold to the left of the center point of the first target feature (the reference point for column segmentation), and a second longitudinal division line (the longitudinal dotted line immediately to the right of the first target feature) is drawn one column segmentation threshold to the right of that center point. The region between the first and second longitudinal division lines is the column partition in which the first target feature is located; since at least the first target feature falls within it, this column partition is marked as an effective column partition in which dots are present.
Further, a third longitudinal division line (the longitudinal dotted line to the left of and next but one to the first target feature) is drawn two column segmentation thresholds to the left of the first longitudinal division line, and it is judged whether any dot falls into the column partition between the third and first longitudinal division lines. If so, that column partition is marked as an effective column partition with dots; if not, it is marked as an ineffective column partition. In this way, longitudinal division lines continue to be drawn toward the left side of the front view of the calibration board, and each newly formed column partition is judged as effective or not. For example, as shown in Fig. 7, the column to the left of the first target feature has no dots, so after the third longitudinal division line is drawn, no dot falls into the column partition between the third and first longitudinal division lines; that column partition is therefore ineffective. Once an ineffective column partition is found, no further longitudinal division line is drawn toward the left, and the last longitudinal division line (the third one) is deleted, which is why the third longitudinal division line does not appear in Fig. 7.
Whether a dot falls into a column partition can be judged by comparing the abscissa of the dot with the abscissas of the left and right longitudinal division lines of that partition: if the abscissa of some dot falls between them, a dot falls into the column partition; if no dot's abscissa falls between them, no dot falls into the column partition.
Similarly, a fourth longitudinal division line (the longitudinal dotted line immediately to the left of the fifth target feature) is drawn one column segmentation threshold to the left of the center point of the fifth target feature (the reference point for column segmentation), and a fifth longitudinal division line (the longitudinal dotted line immediately to the right of the fifth target feature) is drawn one column segmentation threshold to the right of that center point. The region between the fourth and fifth longitudinal division lines is the column partition in which the fifth target feature is located; since at least the fifth target feature falls within it, this column partition is marked as an effective column partition in which dots are present.
Further, a sixth longitudinal division line (the longitudinal dotted line to the right of and next but one to the fifth target feature) is drawn two column segmentation thresholds to the right of the fifth longitudinal division line, and it is judged whether any dot falls into the column partition between the sixth and fifth longitudinal division lines. If so, that column partition is marked as an effective column partition with dots; if not, it is marked as an ineffective column partition. In this way, longitudinal division lines continue to be drawn toward the right side of the front view of the calibration board, and each newly formed column partition is judged as effective or not.
The effective row partitions and effective column partitions intersect to form a plurality of divided areas (the small square regions formed by the interlacing transverse and longitudinal dotted lines in Fig. 7). All dots in the front view of the calibration board lie within these divided areas, but not every divided area contains a dot, because of reflection during shooting, occlusion and the like, as in Figs. 3 to 5. Therefore, after the divided areas are determined, it is necessary to determine which of them contain dots: for any divided area, whether it is an effective divided area containing a dot is judged by whether the center coordinates of some dot fall into it. If the center coordinates of a dot fall into a divided area, that area is marked as an effective divided area; otherwise it is left unmarked or marked as an ineffective divided area.
Thus, confirmation of the valid divided area is completed.
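The intersection of effective row and column partitions and the per-cell dot check can be sketched as (plain Python; the partition and center values are illustrative):

```python
def effective_divided_areas(row_parts, col_parts, dot_centers):
    """Intersect effective row partitions (y-ranges) and effective column
    partitions (x-ranges) into grid cells, and keep only cells containing
    at least one dot center (a cell may be empty due to reflection or
    occlusion). Returns cells as (x0, y0, x1, y1)."""
    cells = []
    for y0, y1 in row_parts:
        for x0, x1 in col_parts:
            if any(x0 <= x < x1 and y0 <= y < y1 for x, y in dot_centers):
                cells.append((x0, y0, x1, y1))
    return cells

# A 2x2 grid with dots only on the diagonal -> two effective cells.
cells = effective_divided_areas([(0, 10), (10, 20)], [(0, 10), (10, 20)],
                                [(5, 5), (15, 15)])
```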
For any dot, when both the row partition and the column partition in which it lies are effective, the divided area in which the dot is located is judged to be within the effective range.
For a calibration board image containing the entire calibration board, the effective range is determined according to the size of the calibration board, and the features located at the edge of the array within the effective range are determined as feature points.
It should be further noted that, in the embodiment of the present invention, one quarter of the difference between the ordinates of the center points of the third and fourth target features is used as the row segmentation threshold. Because the third and fourth target features are not adjacent and are therefore far apart, a row segmentation threshold computed as one quarter of this difference is more accurate than one computed as half of the difference between the ordinates of the center points of adjacent dots (such as the second and first target features).
In the same way, using one quarter of the difference between the abscissas of the center points of the first and fifth target features as the column segmentation threshold yields the same improvement in accuracy.
For a monocular camera, after a plurality of effective segmented regions are determined in the above manner, world coordinate points corresponding to respective dots in the effective segmented regions may be determined.
For a binocular camera, the images captured by the two lenses may differ, so for the calibration board images corresponding to the two lenses, only part of the content may appear in both images at the same time. In the embodiment of the present invention, after effective divided areas are determined, in the manner described above, in the calibration board front views corresponding to the images captured by the two lenses, the divided areas that appear in both images are screened out as the effective divided areas of the binocular camera; in the subsequent steps, only the world coordinate points corresponding to dots that appear in the effective divided areas of both images are determined.
The set of the effective segmentation areas is an effective dot range.
Step e7: determine the coordinates of the effective dots in the front view of the calibration board.
In an alternative embodiment, in step e7, for each effective divided area, a dot whose center coordinates (obtained by the minimum-bounding-rectangle method) fall within that area is taken as an effective dot and its center coordinates are recorded; all effective divided areas are traversed in this way to screen out all effective dots and record their center coordinates.
Step e8: map the coordinates of all effective dots in the front view of the calibration board back to coordinates in the captured partial calibration board image through the inverse of the perspective transformation described above.
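Mapping points back through the inverse of the perspective transformation can be sketched self-containedly as follows (plain Python, no external libraries; the 3x3 homography `H` is a made-up example, not the actual transform computed from the calibration board corners):

```python
def invert_3x3(m):
    """Invert a 3x3 homography via the adjugate matrix."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

def apply_h(m, pt):
    """Apply a homography to a 2-D point with the homogeneous divide."""
    x, y = pt
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    return ((m[0][0] * x + m[0][1] * y + m[0][2]) / w,
            (m[1][0] * x + m[1][1] * y + m[1][2]) / w)

# Hypothetical front-view homography: scale by 2, then translate by (5, 7).
H = [[2.0, 0.0, 5.0], [0.0, 2.0, 7.0], [0.0, 0.0, 1.0]]
# Forward into the front view, then back into the captured image:
back = apply_h(invert_3x3(H), apply_h(H, (10.0, 20.0)))
```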
Step e9: obtain the world coordinates of all effective dots in the world coordinate system according to the ranges of the effective divided areas.
In step e9, within the range of the effective divided areas, the effective dots in the captured partial calibration board image are combined with the input calibration board information (e.g., the actual physical distances between dots on the calibration board) using existing techniques to finally obtain the world coordinates of the effective dots in the world coordinate system.
After the world coordinates of the effective dots in the world coordinate system are obtained, subsequent camera calibration can be performed using the Zhang Zhengyou calibration algorithm.
In the method of this embodiment for acquiring the world coordinates of dots in a partial calibration board image, the center points of the target features in the calibration board front view are used to determine the row and column segmentation thresholds and serve as the reference points for row and column segmentation, so the effective divided areas are delimited accurately and an accurate range of effective dots is obtained. Furthermore, the minimum-bounding-rectangle method yields center positions of complete and incomplete dots that are more accurate than those of the centroid method, which improves the coordinate accuracy of the dots in the calibration board image and hence the calibration effect of the camera.
It will be appreciated that other embodiments of the pattern, number and arrangement of target features in the calibration board are possible.
In one embodiment, the pattern of a target feature needs at least to provide a marking effect distinguishable from the non-target features. Preferably, to increase the effective moving range of the calibration board, two-dimensional codes are placed in the central area of the calibration board as target features; each two-dimensional code is decoded to determine its serial number and its position on the calibration board, the four point pairs provided by four two-dimensional codes are used for the perspective transformation, the row segmentation threshold and the column segmentation threshold are determined after the transformation, and the front view is divided based on the row segmentation threshold and the column segmentation threshold.
Preferably, the patterns of the respective target features (the patterns themselves and/or their sizes) may differ from one another, which makes it easier to identify a specific target feature among them. For example, in Fig. 7, target feature 2 can serve to locate the positions of the other target features; for ease of identification it may be designed as a triangle, a square or the like, different from the other four target feature patterns, so as to act as a location indicator. These target feature patterns can be screened by outline and/or area, and the center of a pattern can be obtained by the centroid method, i.e., by finding its centroid.
In a second embodiment, the arrangement of the target features, or the combination of arrangement and pattern, may facilitate identifying the positional relationships among the target features. Taking Fig. 7 as an example, four target features (dots 1, 3, 5 and 4 in Fig. 7) are each adjacent, in the upper, lower, left and right directions, to one non-target feature at the center of the calibration board, so the coordinates of the calibration board center, the row segmentation threshold and the column segmentation threshold can be calculated from the coordinates of these four target features. The fifth target feature (dot 2 in Fig. 7) may be located adjacent to any one of the four, and the positional relationships among the target features are identified by calculating the distance from each target feature to the origin of the coordinate system. For example, among the five target features, dot 2 in Fig. 7 is closest to the origin of the coordinate system, so its position can be identified first during positioning; if dot 2 were instead located adjacent to both dots 4 and 5, it would be the farthest of the five from the origin, and on that basis its position could likewise be identified first.
In a third embodiment, the combination of the number and arrangement of the target features may provide information including at least the row segmentation threshold, the column segmentation threshold and the coordinates of the calibration board center. Referring to Fig. 8, Fig. 8 is a schematic diagram of arrangements of target features. Arrangements a and b each contain three target features which are at least not collinear, i.e., the lines connecting the three target features form a triangle, so the segmentation thresholds are determined from the geometric relationship of the three. To reduce complexity, at least two of the three target features are placed in the same row or column, as in a or b; to improve the accuracy of the segmentation thresholds, the two target features in the same row or column are preferably not adjacent to each other. When the number of target features is greater than three, for example arrangements c and d in the figure with four target features, at least two target features that are neither in the same row nor in the same column are selected according to the geometric relationship to calculate the segmentation thresholds. When there are several such pairs among the target features, the segmentation thresholds may be calculated separately for each pair and their average taken as the final segmentation threshold. Preferably, to reduce complexity and improve calculation accuracy, the segmentation thresholds may be calculated, as in the foregoing embodiment, from four feature points in total: two target features in the same row and two target features in the same column.
It should be understood that the embodiments of the present invention are not limited to using the calibration board of Fig. 1 for camera calibration; any array calibration board consisting of non-target features and target features may be used. Referring to Fig. 9, Fig. 9 is a schematic flow chart of camera calibration based on a calibration board pattern array including non-target features and target features. The calibration method comprises the following steps.
step 901, detecting whether the calibration board image is complete, if the calibration board image is incomplete, executing step 902;
if the calibration is complete, determining the effective range of the features according to the size of the calibration board, acquiring feature points in the effective range, and calibrating camera parameters according to the existing calibration method, wherein the camera calibration based on the calibration board comprising the non-target features and the target features is not different from the camera calibration based on the common calibration board comprising the non-target features; of course, the features at the edge of the array in the valid range may also be determined as feature points, and camera calibration may be performed according to the subsequent steps 902-914.
Step 902: pre-process the image and extract the outline of each feature in the calibration board matrix array. The features include target features and non-target features, and the pre-processing includes at least binarization.
Step 903: determine the rows and columns of a rectangular array according to the outlines of the array features of the calibration board; rows and columns parallel to the rectangular array are constructed so that the rectangle accommodates all features in the calibration board image.
If a vertex of the quadrangle is not in the calibration board image, screen out, at the image edges adjacent to that vertex, the outermost row of features and the outermost column of features closest to those edges; construct a row parallel line parallel to and outside the outermost row of features, and a column parallel line parallel to and outside the outermost column of features; and take the intersection of the row parallel line and the column parallel line as the vertex of the quadrangle that is not in the calibration board image, thereby obtaining a rectangle.
Step 904: obtain a front view of the calibration board image through perspective transformation.
In one embodiment, the current position of the upper-left point of the rectangle is taken as the first source point, and the coordinate origin of the image coordinate system is taken as the first target point of the perspective transformation of the first source point;
the current position of the upper-right point of the rectangle is taken as the second source point, and the point at the first distance in the first direction from the coordinate origin of the image coordinate system is taken as the second target point of the perspective transformation of the second source point;
the current position of the lower-left point of the rectangle is taken as the third source point, and the point at the second distance in the second direction from the coordinate origin of the image coordinate system is taken as the third target point of the perspective transformation of the third source point;
the current position of the lower-right point of the rectangle is taken as the fourth source point, and the point at the first distance in the first direction and the second distance in the second direction from the coordinate origin of the image coordinate system is taken as the fourth target point of the perspective transformation of the fourth source point,
where the first distance is the square root of the sum of the square of the difference between the abscissas of the upper-left and upper-right points and the square of the difference between their ordinates, i.e., the Euclidean distance between the upper-left and upper-right points;
and the second distance is the square root of the sum of the square of the difference between the abscissas of the upper-left and lower-left points and the square of the difference between their ordinates, i.e., the Euclidean distance between the upper-left and lower-left points.
and setting a target point corresponding to the source point after transmission transformation by taking the vertex of the rectangle as the source point, and transmitting and transforming the calibration plate image in the rectangle into a calibration plate front view.
In a second embodiment, the pattern contours of at least four features in the calibration board image are extracted, and the centroid of each pattern contour is calculated; the at least four features satisfy the condition that the shape formed by connecting them in sequence is a rectangle. With the centroids as source points and corresponding target points set for the perspective transformation, the calibration board image within the rectangle is transformed into the calibration board front view.
Step 905, after binarization processing is performed on the front view of the calibration plate,
the pattern contours of all features in the front view of the calibration plate are extracted, the centroid of each pattern contour is calculated to obtain the valid features that possess a centroid, and the coordinates of each centroid are determined as the center coordinates of the corresponding feature;
for each valid feature, a centroid parameter that at least characterizes the projected area of the feature's pattern contour is calculated.
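The centroid ("mass point") and area parameter of a pattern can be sketched with plain numpy image moments; a real pipeline would typically extract contours first (e.g. with OpenCV's `findContours`/`moments`), so this stand-in, which operates on a binary mask of a single pattern, is only illustrative:

```python
import numpy as np

def centroid_and_area(mask):
    """Centroid (m10/m00, m01/m00) and area (m00) of one binary pattern.
    Returns None when the mask is empty (no valid centroid)."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size                  # zeroth moment: projected area
    if m00 == 0:
        return None
    return (xs.mean(), ys.mean()), m00   # first moments / m00
```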
step 906, based on each of the valid bits. Identifying the target feature according to the difference between the particle parameters of the target feature and the particle parameters of the non-target feature;
in the step, based on each effective feature, calculating the sum of particle parameters of the pattern profile of all features, dividing the sum of the particle parameters of the pattern profile of all features by the total number of the features to obtain the average value of the particle parameters of the pattern profile, taking the average value of the particle parameters as a first threshold value for screening out target features from non-target features, comparing the particle parameters of the pattern profile of each feature with the first threshold value respectively, and identifying the target features according to the comparison result; wherein the difference between the particle parameters of the pattern profile of the target feature and the first threshold is greater than the difference between the particle parameters of the pattern profile of the non-target feature and the first threshold.
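Step 906's mean-value threshold can be sketched in a few lines. The sketch assumes the target dots are the larger ones, so their area parameters lie above the mean; if the board instead used smaller target dots, the comparison would flip:

```python
def identify_targets(areas):
    """areas: one contour-area parameter per valid feature.
    The first threshold is the mean; features above it are returned as
    target-feature indices (assumes targets are the larger dots)."""
    mean = sum(areas) / len(areas)   # first threshold of step 906
    return [i for i, a in enumerate(areas) if a > mean]
```

Because the few target dots deviate far from the mean of the many non-target dots, the mean cleanly separates the two populations.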
Through the above steps 905-906, at least two target features that are neither in the same row nor in the same column of the front view are identified.
Step 907, determining the positions of the identified target features according to the pattern, number, or arrangement of the target features, or any combination thereof:
the area within a certain range around the positions of the identified target features is taken as the central area of the calibration plate;
for each identified target feature, the distance from the feature to the origin of the image coordinate system is calculated from its center coordinates; the target feature at the third distance is taken as the first target feature, and its coordinates are recorded. The third distance is a characteristic distance, determined from the distribution of the target features, that differs from the other calculated distances.
Step 908, taking the first target feature as an anchor point, determining, from the center coordinates of the target features, the positional relationship between the first target feature and each of the remaining target features, as well as the positional relationships among the remaining target features.
In one embodiment, a fourth distance whose difference from the third distance is smallest is determined among the calculated distances, and at least one target feature at the fourth distance is taken as a second target feature; based on the first target feature, whether the second target feature is in the same row or the same column as the first target feature is determined from the center coordinates of the first and second target features.
A fifth distance whose difference from the third distance is smallest among the calculated distances is likewise determined, and at least one target feature at the fifth distance is taken as a third target feature. Based on the first target feature, whether the third target feature is in the same row or the same column as the first target feature is determined from the center coordinates of the first and third target features; if not, whether the third target feature is in the same row or the same column as the second target feature is determined from the center coordinates of the second and third target features.
The positional relationships between the first target feature and the remaining target features, and among the remaining target features, are thereby determined.
In a second embodiment, when the target features include at least one positioning target feature carrying a positioning identification pattern, the positioning target feature is identified from that pattern; taking the positioning target feature as the anchor point, the positional relationships between the first target feature and each of the remaining target features, and among the remaining target features, are determined from the center coordinates of each identified target feature.
The locations of the identified target features are determined through the above steps 907-908.
Step 909, determining the row division threshold and the column division threshold according to the geometric relationships among the positions of the target features:
For example, as shown in a-d of fig. 8, at least two target features that are neither in the same row nor in the same column are selected from the identified target features. From the center coordinates of the selected target features, the projections of the line segment between the two features onto the horizontal axis and onto the vertical axis of the image coordinate system are calculated; the horizontal projection is divided by the number of division thresholds it contains to obtain the row division threshold, and the vertical projection is divided by the number of division thresholds it contains to obtain the column division threshold. A division threshold is one half of the center distance between two adjacent features in the same row or the same column, and represents the scale at which two adjacent, equally spaced feature patterns in the same row or column are separated.
Alternatively, as shown in a-d of fig. 8, three target features that are not collinear are selected from the identified target features, two of which are in the same row or the same column. The distance between the two target features in the same row or column is calculated from their center coordinates and divided by the number of division thresholds it contains, yielding a first row division threshold or a first column division threshold. Either of the two target features in the same row or column is then selected, and the projections, onto the horizontal and vertical axes of the image coordinate system, of the line segment between the selected feature and the remaining target feature are calculated; the horizontal projection divided by the number of division thresholds it contains gives a second row division threshold, and the vertical projection divided by the number it contains gives a second column division threshold. The averages of the first and second row division thresholds and of the first and second column division thresholds give the row division threshold and the column division threshold, respectively.
Alternatively, as shown in d of fig. 8, four target features whose sequential connection forms a rectangle are selected from the identified target features, adjacent pairs being in the same row or the same column. Any two target features in the same row are selected; the distance between them is calculated from their center coordinates and divided by the number of division thresholds it contains to obtain the row division threshold. Any two target features in the same column are selected; the distance between them is calculated from their center coordinates and divided by the number of division thresholds it contains to obtain the column division threshold.
Alternatively, as shown in fig. 7, four target features whose sequential connection forms a square are selected from the identified target features, two non-adjacent ones being in the same row or the same column, with the midpoint of the line between the two non-adjacent target features located at the center of the calibration plate. Two target features in the same row are selected; the distance between them, calculated from their center coordinates, is divided by the number of division thresholds it contains to obtain the row division threshold. Two target features in the same column are selected; the distance between them, calculated from their center coordinates, is divided by the number of division thresholds it contains to obtain the column division threshold. The center point of the calibration plate is determined from the distance between the two target features in the same row, or between the two in the same column.
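The first variant of step 909 reduces to simple arithmetic. The sketch below follows the naming in the text (horizontal projection divided by its threshold count gives the row division threshold); the threshold counts are inputs assumed to be known from the board layout:

```python
def division_thresholds(c1, c2, n_horizontal, n_vertical):
    """c1, c2: center coordinates of two target features that share
    neither a row nor a column. n_horizontal / n_vertical: how many
    division thresholds (half feature pitches) the horizontal / vertical
    projection of segment c1-c2 spans, known from the board layout."""
    row_threshold = abs(c2[0] - c1[0]) / n_horizontal
    col_threshold = abs(c2[1] - c1[1]) / n_vertical
    return row_threshold, col_threshold
```

The other variants differ only in which feature pairs supply the distances and in averaging two estimates per axis.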
Through the steps 902-909, at least the row division threshold for dividing each row of the calibration plate array pattern, the column division threshold for dividing each column of the calibration plate array pattern, and the central area of the calibration plate are determined according to the imaging information of the target feature located in the central area of the calibration plate.
Step 910, taking the center of any identified target feature as a reference, dividing the front view into one or more divided areas according to the row division threshold and the column division threshold.
Step 910a (not shown in the drawings), according to the row division threshold, dividing the front view of the calibration plate into rows whose width is twice the row division threshold, such that the center point of the target feature lies at the middle of its row:
taking the center of any identified target feature as a reference, a first horizontal dividing line is drawn one row division threshold above the center of the target feature, and a second horizontal dividing line one row division threshold below it; the area between the first and second horizontal dividing lines is the row division area in which the target feature lies, and is marked as a valid row division area containing a feature;
taking the first horizontal dividing line as a reference, a third horizontal dividing line is drawn two row division thresholds above it; whether any feature falls into the row division area between the third and first horizontal dividing lines is judged; if so, that area is marked as a valid row division area containing a feature; if not, it is marked as an invalid row division area;
further horizontal dividing lines are drawn in the same manner, each newly divided row division area being judged valid or invalid, until an invalid row division area appears;
taking the second horizontal dividing line as a reference, a fourth horizontal dividing line is drawn two row division thresholds below it; whether any feature falls into the row division area between the second and fourth horizontal dividing lines is judged; if so, that area is marked as a valid row division area containing a feature; if not, it is marked as an invalid row division area;
further horizontal dividing lines are drawn in the same manner, each newly divided row division area being judged valid or invalid, until an invalid row division area appears;
judging whether a feature falls into a row division area comprises comparing the ordinate of the feature with the ordinates of the upper and lower horizontal dividing lines of the area: if the ordinate of the feature lies between them, a feature is judged to fall into the row division area; otherwise, no feature falls into it.
Step 910b (not shown in the figures), according to the column division threshold, dividing the front view of the calibration plate into columns whose width is twice the column division threshold, such that the center point of the target feature lies at the middle of its column:
taking the center of any identified target feature as a reference, a first vertical dividing line is drawn one column division threshold to the left of the center of the target feature, and a second vertical dividing line one column division threshold to its right; the area between the first and second vertical dividing lines is the column division area in which the target feature lies, and is marked as a valid column division area containing a feature;
taking the first vertical dividing line as a reference, a third vertical dividing line is drawn two column division thresholds to its left; whether any feature falls into the column division area between the third and first vertical dividing lines is judged; if so, that area is marked as a valid column division area containing a feature; if not, it is marked as an invalid column division area;
further vertical dividing lines are drawn in the same manner, each newly divided column division area being judged valid or invalid, until an invalid column division area appears;
taking the second vertical dividing line as a reference, a fourth vertical dividing line is drawn two column division thresholds to its right; whether any feature falls into the column division area between the second and fourth vertical dividing lines is judged; if so, that area is marked as a valid column division area containing a feature; if not, it is marked as an invalid column division area;
further vertical dividing lines are drawn in the same manner, each newly divided column division area being judged valid or invalid, until an invalid column division area appears;
judging whether a feature falls into a column division area comprises comparing the abscissa of the feature with the abscissas of the left and right vertical dividing lines of the area: if the abscissa of the feature lies between them, a feature is judged to fall into the column division area; otherwise, no feature falls into it.
Steps 910a and 910b may be performed in either order.
Step 910c (not shown in the figure), the front view of the calibration board is divided into a plurality of divided areas by the row division and the column division.
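The row banding of step 910a can be sketched as follows (the column case is symmetric with x-coordinates and the column division threshold). This is an illustrative simplification: bands are twice the division threshold wide, the reference target feature sits mid-band, and expansion stops at the first empty band:

```python
import math

def assign_rows(centers, ref_y, row_threshold):
    """Row band index of each feature centroid. Bands are 2*row_threshold
    wide, centred so the reference target feature sits in band 0."""
    return [math.floor((y - ref_y + row_threshold) / (2 * row_threshold))
            for _, y in centers]

def valid_rows(centers, ref_y, row_threshold):
    """Expand upward and downward from the reference band, keeping bands
    that contain at least one centroid; stop at the first empty band."""
    occupied = set(assign_rows(centers, ref_y, row_threshold))
    valid = {0}                     # the reference band is valid
    k = 1
    while k in occupied:            # expand in one direction
        valid.add(k)
        k += 1
    k = -1
    while k in occupied:            # expand in the other direction
        valid.add(k)
        k -= 1
    return valid
```

A divided area is then valid (step 911) when both its row band and its column band are valid.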
Through step 910, the features in the calibration plate image are segmented using the row division threshold and the column division threshold, yielding one or more divided areas.
Step 911, determining the effective range according to whether each divided area contains a feature centroid:
for any divided area, whether the area contains a feature centroid is judged; if so, the feature in the area is judged to be an effective feature and the divided area containing it is taken as part of the effective range; otherwise, the area is judged to lie outside the effective range;
the effective range is determined from the divided areas of all effective features.
In other words, for any feature, when both the row division area and the column division area in which it lies are valid, the divided area in which the feature lies belongs to the effective range.
Through step 911, the determination of the effective range based on the features in each divided area is accomplished.
Through steps 901-911, the effective range of the calibration plate image is determined according to the target feature information in the calibration plate image.
Step 912, obtaining feature points from the effective range of the central area of the calibration plate.
In this step, features are selected within the effective range, each selected feature is mapped back to a coordinate point in the original calibration plate image through the inverse of the perspective transformation, and that coordinate point is taken as a feature point.
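Mapping a front-view feature back to the original image is the inverse of the perspective transformation applied in step 904. A minimal numpy sketch, where H is assumed to be the 3×3 perspective (homography) matrix that produced the front view:

```python
import numpy as np

def back_project(H, p_front):
    """Map a feature point selected in the front view back to the original
    calibration plate image via the inverse perspective transformation."""
    q = np.linalg.inv(H) @ np.array([p_front[0], p_front[1], 1.0])
    return q[:2] / q[2]            # dehomogenize
```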
Step 913, calculating the world coordinate point information corresponding to the features in the effective range.
In this step, the coordinates of the feature points are calculated with the upper-left corner of the calibration plate as the origin of the world coordinate system, giving the world coordinate point information of the feature points.
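Because the features form a regular grid, the world coordinate of a feature follows directly from its row/column index and the physical pitch between adjacent features. The pitch is a board parameter not stated in the text, so it is an input assumption here:

```python
def world_coords(row, col, pitch_mm):
    """World coordinate of the feature at (row, col), with the upper-left
    feature of the calibration plate as origin and Z = 0 on the plate."""
    return (col * pitch_mm, row * pitch_mm, 0.0)
```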
Step 914, performing the calibration calculation for the camera with Zhang Zhengyou's calibration algorithm, using the feature information of the calibration plate image and the corresponding world coordinate point information.
According to the method, target feature patterns embedded in the central area of the calibration plate serve as marks. Even when the calibration plate image is incomplete outside the central area, the positional relationships of the marks can still be used to perform row and column division of the effective feature points in the image and to obtain their spatial coordinate information, which lowers the requirements placed on the image during camera calibration. Effective feature points at the edge of the image are accurately extracted with a tangent method, which preserves corner-point precision while improving the utilization of image-edge data; the coverage of the image edge is thereby improved, and with it the accuracy of the camera calibration.
An embodiment of the present invention further provides an electronic device, a structure of which can be seen in fig. 10, where the electronic device includes: at least one processor 41; and a memory 42 communicatively coupled to the at least one processor 41; wherein the memory 42 stores instructions executable by the at least one processor 41, the instructions being executable by the at least one processor 41 to cause the at least one processor 41 to perform the steps of the calibration method as described in any one of the above embodiments.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware components.
An embodiment of the present invention further provides a non-volatile computer-readable storage medium, which stores instructions, where the instructions, when executed by a processor, cause the processor to perform the steps in the calibration method as described in any one of the above embodiments.
For the device/network side device/storage medium embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (29)

1. A calibration plate, characterized in that it comprises,
a substrate,
a non-target feature pattern array printed on a surface of a substrate,
and at least two target feature patterns which are different from the non-target feature patterns and distributed in different rows and different columns of the non-target feature pattern array.
2. The calibration plate according to claim 1, characterized in that the arrangement of the target features, or the combination of their arrangement and pattern, allows the mutual positional relationships between the target features to be identified.
3. The calibration plate of claim 1, wherein the number and arrangement of the target features are combined to provide at least one of: a row division threshold, a column division threshold, and the coordinates of the center of the calibration plate.
4. The calibration plate of any one of claims 1 to 3, wherein the target feature pattern is printed in a central region of the substrate surface,
the non-target feature pattern array is a solid dot array, the non-target feature pattern is dots with a first radius, the target feature pattern is dots with a second radius, and the second radius is not equal to the first radius.
5. Calibration plate according to any one of claims 1 to 3, characterized in that the target feature pattern is a two-dimensional code.
6. A camera calibration method, characterized by comprising the following steps:
acquiring a calibration plate image containing a calibration plate according to claim 1, said calibration plate image including at least a target feature,
determining the effective range of the calibration plate image according to the target characteristic information in the calibration plate image,
calculating world coordinate point information corresponding to the features in the effective range, wherein the features comprise target features and/or non-target features;
and calibrating and calculating the camera according to the characteristic information of the calibration plate image and the corresponding world coordinate point information.
7. The method of claim 6, wherein the camera is a monocular camera, and calculating the world coordinate point information corresponding to the features in the valid range comprises: calculating the world coordinate point information corresponding to the centroids of the features in the valid range;
and performing the calibration calculation for the camera according to the feature information of the calibration plate and the corresponding world coordinate point information comprises performing the calibration calculation with Zhang Zhengyou's calibration algorithm.
8. The method of claim 6, wherein the camera is a binocular camera, and determining the effective range of the calibration plate image according to the target feature information in the calibration plate image comprises:
determining the effective ranges of a first calibration plate image and a second calibration plate image captured at the same time, the first calibration plate image being acquired by the first monocular of the binocular camera and the second calibration plate image by the second monocular;
determining the intersection of the effective range of the first calibration plate image and the effective range of the second calibration plate image;
wherein calculating the world coordinate point information corresponding to the features in the effective range comprises calculating the world coordinate point information corresponding to the centroids of the features in the intersection;
and performing the calibration calculation for the camera according to the feature information of the calibration plate and the corresponding world coordinate point information comprises performing the calibration calculation with Zhang Zhengyou's calibration algorithm.
9. The method of claim 6, wherein determining the valid range of the calibration plate image according to the target feature information in the calibration plate image comprises:
determining, for a calibration plate image containing the entire calibration plate, the effective range according to the size of the calibration plate;
and determining the features at the edge of the array within the effective range as feature points.
10. The method of claim 6, wherein determining the valid range of the calibration plate image according to the target feature information in the calibration plate image comprises:
for a calibration plate image containing a portion of the calibration plate, wherein the calibration plate image includes at least a target feature located in a central region,
determining at least a row division threshold for dividing each row of the calibration plate array pattern, a column division threshold for dividing each column of the calibration plate array pattern, and a central region of the calibration plate based on imaging information of a target feature located in the central region of the calibration plate,
utilizing the row segmentation threshold and the column segmentation threshold to segment the features in the calibration plate image to obtain more than one segmentation area,
determining a valid range based on the features in each segmented region;
and acquiring the characteristic points from the effective range of the central area of the calibration plate.
11. The method of claim 10, wherein determining at least a row division threshold for dividing the rows of the calibration plate array pattern and a column division threshold for dividing its columns, based on the imaging information of a target feature located in the central region of the calibration plate, comprises:
obtaining a front view of the calibration plate image through perspective transformation;
identifying at least two target features in the front view that are neither in the same row nor in the same column;
determining the positions of the identified target features, and taking the area within a certain range around those positions as the central area of the calibration plate;
and determining the row division threshold and the column division threshold according to the geometric relationships among the positions of the target features;
wherein segmenting the features in the calibration plate image using the row division threshold and the column division threshold to obtain one or more divided areas comprises:
dividing the front view into one or more divided areas according to the row division threshold and the column division threshold;
determining the effective range based on the features in each divided area comprises determining the effective range according to whether each divided area contains a feature centroid;
and obtaining the feature points from the effective range of the central area of the calibration plate comprises selecting features within the effective range, mapping each selected feature back to a coordinate point in the calibration plate image through the inverse of the perspective transformation, and taking that coordinate point as a feature point.
12. The method of claim 11, wherein obtaining the front view of the current calibration plate image through perspective transformation comprises:
determining the rows and columns of a rectangular array according to the outline of the array pattern of the calibration plate;
constructing a quadrilateral, with sides parallel to the rows and columns of the rectangular array, that contains all the features in the calibration plate image;
if a vertex of the quadrilateral lies outside the calibration plate image, screening out, at the image edges adjacent to that vertex, the outermost row of features and the outermost column of features closest to the image edge; constructing a row parallel line parallel to and outside the outermost row of features, and a column parallel line parallel to and outside the outermost column of features; and taking the intersection of the row parallel line and the column parallel line as the vertex of the quadrilateral lying outside the image, thereby obtaining a rectangle;
and taking the vertices of the rectangle as source points, setting the target points to which the source points are mapped by the perspective transformation, and perspectively transforming the calibration plate image within the rectangle into a front view of the calibration plate.
13. The method of claim 11, wherein the pattern of the target features is a regular pattern that does not carry two-dimensional code information,
and obtaining the front view of the calibration plate image by perspective transformation comprises,
binarizing the calibration plate image, extracting the pattern contours of at least four features in the calibration plate image, and computing the centroid of each pattern contour, the at least four features being chosen such that connecting them in sequence forms a rectangle;
and, taking the centroids as source points, setting the target points to which the source points map under the perspective transformation, and perspective-transforming the part of the calibration plate image within the rectangle into the calibration plate front view.
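The binarize-and-centroid step described above can be sketched in Python; this is a minimal illustration assuming a grayscale image given as nested lists and a single dark blob, with a fixed threshold of 128. The function names and the threshold value are assumptions of this sketch, not part of the claimed method.

```python
def binarize(image, threshold=128):
    """Mark pixels darker than `threshold` as foreground (1)."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def blob_centroid(binary):
    """Centroid (x, y) of all foreground pixels; None if there are none."""
    sx = sy = n = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                sx += x
                sy += y
                n += 1
    return (sx / n, sy / n) if n else None
```

A real implementation would extract each feature's contour separately (e.g. via connected components) and compute one centroid per contour; the four centroids then serve as the source points of the perspective transformation.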
14. The method according to claim 12 or 13, wherein obtaining the front view of the current calibration plate image by perspective transformation further comprises binarizing the calibration plate front view,
and wherein setting the target points to which the source points map under the perspective transformation comprises,
taking the current position of the upper-left point of the rectangle as a first source point, and the origin of the image coordinate system as the first target point of its perspective transformation;
taking the current position of the upper-right point of the rectangle as a second source point, and the point at a first distance from the origin of the image coordinate system in the first direction as the second target point of its perspective transformation;
taking the current position of the lower-left point of the rectangle as a third source point, and the point at a second distance from the origin of the image coordinate system in the second direction as the third target point of its perspective transformation;
taking the current position of the lower-right point of the rectangle as a fourth source point, and the point at the first distance in the first direction and the second distance in the second direction from the origin of the image coordinate system as the fourth target point of its perspective transformation;
wherein the first distance is the Euclidean distance between the upper-left point and the upper-right point: the square root of the sum of the squared difference of their abscissas and the squared difference of their ordinates;
and the second distance is the Euclidean distance between the upper-left point and the lower-left point, computed in the same way.
15. The method of claim 11, wherein identifying at least two target features in the front view that are in neither the same row nor the same column comprises,
extracting the pattern contours of all the features in the calibration plate front view, computing the centroid of each pattern contour to obtain the valid features having centroids, and taking the centroid coordinates as the centre coordinates of the features;
computing, for each valid feature, a contour parameter that at least characterizes the projected area of the feature's pattern contour,
and identifying the target features, among the valid features, according to the difference between the contour parameters of target features and those of non-target features;
and wherein determining the positions of the identified target features comprises determining the positions of the identified target features according to one of, or any combination of, the pattern, number, and arrangement of the target features.
16. The method of claim 15, wherein identifying the target features, among the valid features, according to the difference between the contour parameters of target features and those of non-target features comprises,
over all the valid features,
computing the sum of the contour parameters of the pattern contours of all the features,
dividing that sum by the total number of features accumulated to obtain the average contour parameter of the pattern contours,
using the average contour parameter as a first threshold for screening target features from non-target features,
and comparing the contour parameter of the pattern contour of each feature with the first threshold and identifying the target features from the comparison; wherein the difference between the contour parameter of a target feature's pattern contour and the first threshold is greater than the difference between the contour parameter of a non-target feature's pattern contour and the first threshold.
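A minimal sketch of the mean-threshold screening above, assuming (as in the dot-board case of the next claim) that target features have the larger contour parameter; the function name and return convention are assumptions:

```python
def screen_targets(params):
    """Split feature indices into targets / non-targets using the mean
    contour parameter as the first threshold."""
    mean = sum(params) / len(params)  # first threshold
    targets = [i for i, p in enumerate(params) if p > mean]
    non_targets = [i for i, p in enumerate(params) if p <= mean]
    return mean, targets, non_targets
```

With mostly small dots and a few larger target dots, the mean sits close to the small radius, so the large dots stand out above it.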
17. The method of claim 16, wherein the calibration plate is a dot-array calibration plate, the patterns of the non-target features are dots with a first radius, the patterns of the target features are dots with a second radius not equal to the first radius, and the contour parameter is the contour radius of a dot;
wherein computing the centroid of each pattern contour comprises,
for each complete dot in the calibration plate front view, enclosing its contour with a minimum bounding rectangle, each side of which is tangent to the enclosed dot,
for each incomplete dot in the calibration plate front view, enclosing its contour with a minimum bounding rectangle two opposite sides of which are tangent to the enclosed incomplete dot, and discarding the incomplete dot if no two opposite sides of the minimum bounding rectangle can be tangent to it,
taking the midpoint between the tangent points on the two opposite sides of the minimum bounding rectangle as the contour centroid of the enclosed dot, and taking the centroid coordinates as the centre coordinates of the complete or incomplete dot enclosed by the minimum bounding rectangle,
and determining half the distance between the tangent points on the two opposite sides of the minimum bounding rectangle as the contour radius of the enclosed dot;
wherein computing the sum of the contour parameters of all the pattern contours comprises,
accumulating the contour radii of all the complete and incomplete dots;
wherein dividing the sum of the contour parameters of the pattern contours of all the features by the total number of features accumulated to obtain the average contour parameter comprises,
dividing the sum of the contour radii of all the complete and incomplete dots by their total number to obtain a radius screening value;
wherein the first threshold is the radius screening value,
and comparing the contour parameter of the pattern contour of each feature with the first threshold comprises comparing the contour radii of all the complete and incomplete dots with the radius screening value.
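The bounding-rectangle radius step above can be sketched as follows, assuming the dot is given as its set of foreground pixel coordinates and using only the horizontal tangent pair; handling dots clipped on the left or right image edge would require the vertical pair as well:

```python
def contour_radius(pixels):
    """Half the span between the two opposite tangent sides of the
    minimum bounding rectangle (horizontal pair only, an assumption).
    `pixels` is an iterable of (x, y) foreground coordinates."""
    xs = [x for x, _ in pixels]
    return (max(xs) - min(xs)) / 2.0

def radius_screening_value(radii):
    """Mean contour radius over all complete and incomplete dots."""
    return sum(radii) / len(radii)
```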
18. The method of claim 15, wherein determining the positions of the identified target features according to one of, or any combination of, the pattern, number, and arrangement of the target features comprises,
for each identified target feature, computing from its centre coordinates its distance to the origin of the image coordinate system, taking the target feature at a third distance as the first target feature, and recording the coordinates of the first target feature; the third distance being a distance that, among the computed distances, differs from the others and is determined by the distribution of the target features;
and, taking the first target feature as the anchor point, determining from the centre coordinates of all the target features the positional relationships between the first target feature and the remaining target features and among the remaining target features.
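One plausible reading of the "third distance" above is the distance that deviates most from those of the other target features; the sketch below picks the anchor feature on that basis. The deviation criterion and function name are assumptions of this illustration, not the claimed rule.

```python
import math

def first_target(centers):
    """Return (index, centre) of the target feature whose distance to
    the image origin differs most from the mean of the others'."""
    dists = [math.hypot(x, y) for x, y in centers]
    best, best_dev = 0, -1.0
    for i, d in enumerate(dists):
        rest = [v for j, v in enumerate(dists) if j != i]
        dev = abs(d - sum(rest) / len(rest))
        if dev > best_dev:
            best, best_dev = i, dev
    return best, centers[best]
```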
19. The method of claim 18, wherein taking the first target feature as the anchor point and determining, from the centre coordinates of each target feature, the positional relationships between the first target feature and the remaining target features and among the remaining target features comprises,
determining, among the computed distances, a fourth distance with the smallest difference from the third distance, and taking at least one target feature at the fourth distance as a second target feature; determining, relative to the first target feature, whether the second target feature is in the same row or the same column as the first target feature according to the centre coordinates of the first and second target features;
determining, among the remaining computed distances, a fifth distance with the smallest difference from the third distance, and taking at least one target feature at the fifth distance as a third target feature; determining, relative to the first target feature, whether the third target feature is in the same row or the same column as the first target feature according to the centre coordinates of the first and third target features, and if not, determining whether the third target feature is in the same row or the same column as the second target feature according to the centre coordinates of the second and third target features;
and thereby determining the positional relationships between the first target feature and the remaining target features and among the remaining target features.
20. The method of claim 18, wherein the target features include at least one positioning target feature having a positioning identification pattern, and determining the positions of the identified target features according to one of, or any combination of, the pattern, number, and arrangement of the target features comprises,
identifying the positioning target feature from its positioning identification pattern; and, taking the positioning target feature as the anchor point, determining from the centre coordinates of each identified target feature the positional relationships between the first target feature and each of the remaining target features and among the remaining target features.
21. The method of claim 11, wherein the target features include at least four target features carrying two-dimensional code information, the target features connected in sequence forming a rectangle,
and obtaining the front view of the current calibration plate image by perspective transformation comprises,
reading and decoding the two-dimensional codes, and determining the numbering order of the two-dimensional codes and their positions in the calibration plate,
and performing perspective transformation using the four points provided by the four two-dimensional codes to obtain the front view of the current calibration plate image.
22. The method according to claim 11, wherein determining the row segmentation threshold and the column segmentation threshold according to the geometric positional relationship between target features comprises,
selecting, from the identified target features, at least two target features that are in neither the same row nor the same column;
computing, from the centre coordinates of the selected target features, the projection of the line segment between the two features onto the horizontal axis of the image coordinate system and its projection onto the vertical axis,
dividing the projection on the horizontal axis by the number of segmentation intervals it contains to obtain the row segmentation threshold,
and dividing the projection on the vertical axis by the number of segmentation intervals it contains to obtain the column segmentation threshold;
or,
selecting, from the identified target features, three target features that are not collinear, two of which are in the same row or the same column;
computing, from the centre coordinates of the selected target features, the distance between the two target features in the same row or the same column, and dividing that distance by the number of segmentation intervals between the two target features to obtain a first row segmentation threshold or a first column segmentation threshold;
selecting either of the two target features in the same row or the same column, and computing the projections, onto the horizontal and vertical axes of the image coordinate system, of the line segment between the selected target feature and the target feature not collinear with them; dividing the projection on the horizontal axis by the number of segmentation intervals it contains to obtain a second row segmentation threshold, and dividing the projection on the vertical axis by the number of segmentation intervals it contains to obtain a second column segmentation threshold;
and averaging the first and second row segmentation thresholds, and the first and second column segmentation thresholds, to obtain the row segmentation threshold and the column segmentation threshold, respectively;
or,
selecting, from the identified target features, four target features which, connected in sequence, form a rectangle, any two adjacent ones being in the same row or the same column;
selecting any two target features in the same row, computing the distance between them from the centre coordinates of the selected target features, and dividing that distance by the number of segmentation intervals between the two target features to obtain the row segmentation threshold;
selecting any two target features in the same column, computing the distance between them from the centre coordinates of the selected target features, and dividing that distance by the number of segmentation intervals between the two target features to obtain the column segmentation threshold;
or,
selecting, from the identified target features, four target features which, connected in sequence, form a square, any two non-adjacent ones being in the same row or the same column, the midpoint of the line between two non-adjacent target features lying at the centre of the calibration plate;
selecting the two target features in the same row, computing the distance between them from the centre coordinates of the selected target features, and dividing that distance by the number of segmentation intervals between the two target features to obtain the row segmentation threshold;
selecting the two target features in the same column, computing the distance between them from the centre coordinates of the selected target features, and dividing that distance by the number of segmentation intervals between the two target features to obtain the column segmentation threshold;
and determining the centre point of the calibration plate from the distance between the two target features in the same row or the distance between the two target features in the same column.
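The first alternative of the claim above divides the axis projections of the segment between two diagonal target features by the number of grid spacings they span. A Python sketch (the interval counts are inputs the caller must know from the board layout; names are assumed):

```python
def segmentation_thresholds(p, q, row_cells, col_cells):
    """Row/column segmentation thresholds from two target features in
    neither the same row nor the same column. p, q are centre
    coordinates; row_cells / col_cells are the numbers of segmentation
    intervals the segment spans along each axis."""
    row_thr = abs(q[0] - p[0]) / row_cells  # projection on the x-axis
    col_thr = abs(q[1] - p[1]) / col_cells  # projection on the y-axis
    return row_thr, col_thr
```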
23. The method of claim 11, wherein dividing the front view into one or more divided regions according to the row segmentation threshold and the column segmentation threshold comprises,
taking the centre of any identified target feature as a reference,
dividing the calibration plate front view, according to the row segmentation threshold, into rows whose width is twice the row segmentation threshold, such that the centre point of the target feature lies at the middle of the row containing it;
dividing the calibration plate front view, according to the column segmentation threshold, into columns whose width is twice the column segmentation threshold, such that the centre point of the target feature lies at the middle of the column containing it;
and dividing the calibration plate front view into divided regions through the row division and the column division.
24. The method of claim 23, wherein dividing the calibration plate front view, according to the row segmentation threshold, into rows whose width is twice the row segmentation threshold comprises,
taking the centre of any identified target feature as a reference, drawing a first horizontal dividing line one row segmentation threshold above the centre of the target feature and a second horizontal dividing line one row segmentation threshold below it; the band between the first and second horizontal dividing lines is the row region containing the target feature, and is marked as a valid row region with features;
taking the first horizontal dividing line as a reference, drawing a third horizontal dividing line two row segmentation thresholds above it, and checking whether any feature falls in the row region between the third and first horizontal dividing lines; if so, marking that row region as a valid row region with features, otherwise marking it as an invalid row region;
continuing by analogy, drawing horizontal dividing lines and checking whether each newly divided row region is valid, and stopping once an invalid row region appears;
taking the second horizontal dividing line as a reference, drawing a fourth horizontal dividing line two row segmentation thresholds below it, and checking whether any feature falls in the row region between the second and fourth horizontal dividing lines; if so, marking that row region as a valid row region with features, otherwise marking it as an invalid row region;
continuing by analogy, drawing horizontal dividing lines and checking whether each newly divided row region is valid, and stopping once an invalid row region appears;
wherein checking whether a feature falls in a row region comprises comparing the ordinate of the feature with the ordinates of the upper and lower horizontal dividing lines of the row region; if the ordinate of the feature lies between the two lines, a feature falls in the row region, otherwise no feature falls in it.
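The outward row-banding just described can be sketched as follows: bands of height twice the row segmentation threshold grow upward and downward from the anchor feature's centre until a band contains no feature ordinate. The image-boundary check is an extra termination guard added for this sketch, and the function names are assumptions.

```python
def row_partitions(center_y, row_thr, feature_ys, img_h):
    """Valid row bands (top, bottom) grown outward from `center_y`."""
    def has_feature(top, bottom):
        return any(top <= y < bottom for y in feature_ys)

    bands = [(center_y - row_thr, center_y + row_thr)]  # anchor's band
    top = bands[0][0]
    while top - 2 * row_thr >= 0 and has_feature(top - 2 * row_thr, top):
        bands.insert(0, (top - 2 * row_thr, top))
        top -= 2 * row_thr
    bottom = bands[-1][1]
    while bottom + 2 * row_thr <= img_h and has_feature(bottom, bottom + 2 * row_thr):
        bands.append((bottom, bottom + 2 * row_thr))
        bottom += 2 * row_thr
    return bands
```

Column banding proceeds identically with abscissas in place of ordinates.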
25. The method of claim 23, wherein dividing the calibration plate front view, according to the column segmentation threshold, into columns whose width is twice the column segmentation threshold comprises,
taking the centre of any identified target feature as a reference, drawing a first vertical dividing line one column segmentation threshold to the left of the centre of the target feature and a second vertical dividing line one column segmentation threshold to the right of it; the band between the first and second vertical dividing lines is the column region containing the target feature, and is marked as a valid column region with features;
taking the first vertical dividing line as a reference, drawing a third vertical dividing line two column segmentation thresholds to its left, and checking whether any feature falls in the column region between the third and first vertical dividing lines; if so, marking that column region as a valid column region with features, otherwise marking it as an invalid column region;
continuing by analogy, drawing vertical dividing lines and checking whether each newly divided column region is valid, and stopping once an invalid column region appears;
taking the second vertical dividing line as a reference, drawing a fourth vertical dividing line two column segmentation thresholds to its right, and checking whether any feature falls in the column region between the second and fourth vertical dividing lines; if so, marking that column region as a valid column region with features, otherwise marking it as an invalid column region;
continuing by analogy, drawing vertical dividing lines and checking whether each newly divided column region is valid, and stopping once an invalid column region appears;
wherein checking whether a feature falls in a column region comprises comparing the abscissa of the feature with the abscissas of the left and right vertical dividing lines of the column region; if the abscissa of the feature lies between the two lines, a feature falls in the column region, otherwise no feature falls in it.
26. The method according to any one of claims 23 to 25, wherein determining the valid range based on the features in each divided region comprises, for any feature, determining the divided region containing the feature to be part of the valid range when both the row region and the column region containing the feature are valid regions.
27. The method of any one of claims 23 to 25, wherein determining the valid range according to whether each divided region contains a feature centroid comprises,
for any divided region, checking whether the region contains a feature centroid; if so, treating the feature in that region as a valid feature and the region containing the valid feature as part of the valid range, otherwise treating the region as an invalid range;
and determining the valid range from the divided regions of the valid features.
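The valid-range test just described is a containment check of feature centroids against the divided regions; a sketch with regions given as corner pairs (names and the region representation are assumptions):

```python
def valid_regions(regions, centroids):
    """Keep the divided regions that contain at least one feature
    centroid; each region is ((x0, y0), (x1, y1))."""
    def contains(region, c):
        (x0, y0), (x1, y1) = region
        return x0 <= c[0] < x1 and y0 <= c[1] < y1
    return [r for r in regions if any(contains(r, c) for c in centroids)]
```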
28. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the calibration method as claimed in any one of claims 6 to 27.
29. A non-transitory computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the steps of the calibration method of any one of claims 6 to 27.
CN201911376806.XA 2019-12-27 2019-12-27 Calibration board, camera calibration method and device Pending CN113052911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911376806.XA CN113052911A (en) 2019-12-27 2019-12-27 Calibration board, camera calibration method and device


Publications (1)

Publication Number Publication Date
CN113052911A true CN113052911A (en) 2021-06-29

Family

ID=76506341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911376806.XA Pending CN113052911A (en) 2019-12-27 2019-12-27 Calibration board, camera calibration method and device

Country Status (1)

Country Link
CN (1) CN113052911A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106887023A (en) * 2017-02-21 2017-06-23 成都通甲优博科技有限责任公司 For scaling board and its scaling method and calibration system that binocular camera is demarcated
CN107656637A (en) * 2017-08-28 2018-02-02 哈尔滨拓博科技有限公司 A kind of automation scaling method using the projected keyboard for choosing manually at 4 points
CN108335350A (en) * 2018-02-06 2018-07-27 聊城大学 The three-dimensional rebuilding method of binocular stereo vision
CN109829948A (en) * 2018-12-13 2019-05-31 昂纳自动化技术(深圳)有限公司 Camera calibration plate, calibration method and camera
CN110148174A (en) * 2019-05-23 2019-08-20 北京阿丘机器人科技有限公司 Scaling board, scaling board recognition methods and device
CN110378969A (en) * 2019-06-24 2019-10-25 浙江大学 A kind of convergence type binocular camera scaling method based on 3D geometrical constraint
CN110599548A (en) * 2019-09-02 2019-12-20 Oppo广东移动通信有限公司 Camera calibration method and device, camera and computer readable storage medium


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205558A (en) * 2021-07-02 2021-08-03 杭州灵西机器人智能科技有限公司 Camera calibration feature sorting method, calibration board and equipment
CN113822950A (en) * 2021-11-22 2021-12-21 天远三维(天津)科技有限公司 Calibration point distribution determination method, device, equipment and storage medium of calibration plate
CN113822950B (en) * 2021-11-22 2022-02-25 天远三维(天津)科技有限公司 Calibration point distribution determination method, device, equipment and storage medium of calibration plate
CN115457144A (en) * 2022-09-07 2022-12-09 梅卡曼德(北京)机器人科技有限公司 Calibration pattern recognition method, calibration device and electronic equipment
CN115457144B (en) * 2022-09-07 2023-08-15 梅卡曼德(北京)机器人科技有限公司 Calibration pattern recognition method, calibration device and electronic equipment
CN115222825A (en) * 2022-09-15 2022-10-21 湖南视比特机器人有限公司 Calibration method, computer storage medium and calibration system
CN115830148A (en) * 2023-02-23 2023-03-21 深圳佑驾创新科技有限公司 Calibration plate and calibration method
CN116883515A (en) * 2023-09-06 2023-10-13 菲特(天津)检测技术有限公司 Optical environment adjusting method and optical calibration device
CN116883515B (en) * 2023-09-06 2024-01-16 菲特(天津)检测技术有限公司 Optical environment adjusting method and optical calibration device

Similar Documents

Publication Publication Date Title
CN113052911A (en) Calibration board, camera calibration method and device
CN110136182B (en) Registration method, device, equipment and medium for laser point cloud and 2D image
CN110956660B (en) Positioning method, robot, and computer storage medium
US7894661B2 (en) Calibration apparatus, calibration method, program for calibration, and calibration jig
CN113052910A (en) Calibration guiding method and camera device
CN107977996B (en) Space target positioning method based on target calibration positioning model
CN112132907B (en) Camera calibration method and device, electronic equipment and storage medium
CN112614188B (en) Dot-matrix calibration board based on cross ratio invariance and identification method thereof
CN111179360B (en) High-precision automatic calibration plate and calibration method
US20220284630A1 (en) Calibration board and calibration method and system
CN110827357B (en) Combined pattern calibration plate and structured light camera parameter calibration method
CN111028284A (en) Binocular vision stereo matching method and device based on homonymous mark points
CN109325381B (en) QR code positioning and correcting method
CN116342718B (en) Calibration method, device, storage medium and equipment of line laser 3D camera
CN114018932B (en) Pavement disease index measurement method based on rectangular calibration object
CN110770741B (en) Lane line identification method and device and vehicle
CN107850419A (en) Feature point matching method for a four-camera-unit planar array and measuring method based thereon
KR20180098945A (en) Method and apparatus for measuring speed of vehicle by using fixed single camera
CN111368573A (en) Positioning method based on geometric feature constraint
CN114241061A (en) Calibration method, calibration system and calibration target for line structured light imaging and measurement system using calibration target
CN113465573A (en) Monocular distance measuring method and device and intelligent device
CN113505626A (en) Rapid three-dimensional fingerprint acquisition method and system
CN116452852A (en) Automatic generation method of high-precision vector map
CN111598956A (en) Calibration method, device and system
CN110736426A (en) Object size acquisition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co., Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co., Ltd.