CN109255818B - Novel target and extraction method of sub-pixel level angular points thereof - Google Patents


Info

Publication number
CN109255818B
CN109255818B (granted publication of application CN201810918877.7A)
Authority
CN
China
Prior art keywords
corner, camera, target, sub-pixel
Prior art date
Legal status
Active
Application number
CN201810918877.7A
Other languages
Chinese (zh)
Other versions
CN109255818A (en)
Inventor
周素茵 (Zhou Suyin)
徐爱俊 (Xu Aijun)
武新梅 (Wu Xinmei)
Current Assignee
Zhejiang A&F University ZAFU
Original Assignee
Zhejiang A&F University ZAFU
Priority date
Filing date
Publication date
Application filed by Zhejiang A&F University ZAFU filed Critical Zhejiang A&F University ZAFU
Priority to CN201810918877.7A priority Critical patent/CN109255818B/en
Publication of CN109255818A publication Critical patent/CN109255818A/en
Application granted granted Critical
Publication of CN109255818B publication Critical patent/CN109255818B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20164 Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a novel target and a method for extracting its sub-pixel level corner points. The novel target is a checkerboard of alternately arranged rectangular black blocks and rectangular white blocks of equal width; the target is placed horizontally, and each row is longer than the row in front of it by Δd_i = d·f′(x) mm. The extraction method defines two different corner templates with a growth-based checkerboard corner detection method, performs sub-pixel corner localization with the cornerSubPix() function in OpenCV, grows and reconstructs the checkerboard according to an energy function, marks the corners, and outputs the sub-pixel level corner coordinates. The method can accurately extract corner points in the optical axis direction and reasonably avoids the influence of perspective transformation on corner extraction accuracy; the sub-pixel corner extraction method also works well on pictures with a large degree of distortion.

Description

Novel target and extraction method of sub-pixel level angular points thereof
Technical Field
The invention relates to the field of ground close-range photogrammetry, and in particular to a novel target for a pinhole camera under a monocular vision system and a method for extracting its sub-pixel level corner points.
Background
Image-based object ranging is currently a research hotspot and is widely applied in many industrial fields, such as forestry measurement and automatic driving. Image-based target ranging mainly comprises two methods, active ranging and passive ranging [1]. Active ranging is carried out with laser radar [2-4]. Passive ranging calculates the depth information of the target object in a two-dimensional digital image through machine vision, and then computes the distance of the target object from the image pixel information and the camera imaging principle [5-6]. Machine vision can replace much manual work and raise the level of production automation and detection accuracy, especially where many conventional measurement methods cannot achieve an effective solution. Machine vision ranging mainly comprises two types, monocular ranging and binocular ranging [7-9]. Early depth information acquisition methods relied mainly on binocular stereo vision and camera motion information, and required multiple images to obtain the image depth information [10-16]. Compared with binocular ranging, monocular ranging does not need strict hardware conditions for image acquisition and is therefore more competitive. The geometric positions of image points in an image acquired by a camera are closely related to the geometric positions of the corresponding object points in the real world, and these positions and their interrelations are determined by the geometric model of camera imaging; once the parameters of this geometric model are determined, the correspondence between two-dimensional image points and three-dimensional object points can be fully expressed, and the distance of the target object in the image can be calculated. In the prior art there are various methods for acquiring the depth information of a target object in a monocular vision system, for example the corresponding-point calibration method [17-19]. Document [17] studied a robot target positioning and ranging method based on monocular vision; such methods generally obtain the internal and external camera parameters through camera calibration and, combined with the projection model, solve the transformation between the image coordinate system and the world coordinate system in order to compute the depth information of the target object. Their drawbacks are that target images in different orientations must be acquired, the coordinates of each point in the world and image coordinate systems must be recorded accurately, and the calibration accuracy strongly affects the measurement accuracy. Alternatively, a depth extraction model can be built by studying the relation between the actual imaging of the target object and the image pixels in the world coordinate system, from which the distance between the target object in the image and the camera in the actual scene is calculated. Document [20] placed reference objects on the road surface, measured their distances, selected a suitable mathematical model to fit the correspondence between reference distance and pixel, and used this relation to extract depth information in real time.
In short, the accuracy of the method of document [20] is affected by the ranging error and the fitting error. Document [21] designed a vertical target image and, by detecting the corner data of the image, established a mapping between image ordinate pixel values and actually measured angles; combined with the known height of the vehicle-mounted monocular camera, this relation yields the depth information in the image. For different camera devices, however, the method must re-acquire the target image information and build a separate camera depth extraction model for each device model, and because of lens manufacturing, lens assembly and the like, different vehicle-mounted cameras have different pitch angles, so the method of document [21] generalizes poorly. Furthermore, document [21] used a vertical target to study the relation between the imaging angle of an image point in the vertical plane and its ordinate pixel value, and applied that relation to measuring distances of objects in the horizontal plane; the ranging accuracy is therefore relatively low, since the distortion laws of a camera in the horizontal and vertical directions are not exactly the same. The choice of target is crucial to the construction of a depth extraction model; yet when the traditional checkerboard target, with its equal length and width, is used to study the relation between the actual imaging angle of an object point and the ordinate pixel of the corresponding image point, the perspective transformation phenomenon leads to a poor model fit and low measurement accuracy.
In addition, when the depth extraction model is built by a camera calibration method so that the model has device universality, the model requires the internal camera parameters obtained by calibration, and the calibration accuracy of the camera plays a crucial role in the image-based ranging accuracy of the target object. The Zhang Zhengyou calibration method is currently the most common camera calibration method; it uses a traditional checkerboard target of alternating black and white squares. Such a target can only calibrate cameras in the same plane, and if the target is placed horizontally the calibration accuracy drops accordingly. The invention application No. 201710849961.3 discloses an improved camera calibration model and distortion correction model suitable for intelligent mobile terminal cameras (hereinafter: the improved calibration model with a nonlinear distortion term), which helps to correct calibration board pictures and obtain the internal and external camera parameters with higher accuracy; its distortion correction model, however, can only correct the nonlinear distortion of the camera and does not address the perspective phenomenon in imaging.
Reference documents:
[1] Unmanned aerial vehicle target positioning method based on Monte Carlo Kalman filtering[J]. Journal of Northwest University, 2017, 35(3): 435-441.
[2] Lin F, Dong X, Chen B M, et al. A robust real-time embedded vision system on an unmanned rotorcraft for ground target following[J]. IEEE Trans on Industrial Electronics, 2012, 59(2): 1038-1049.
[3] Zhang Lin, Hu Zhengliang, Zhu Jiangjun, et al. A target position resolving method in an individual-soldier integrated sighting instrument[J]. Electronic Measurement Technology, 2014, 37(11): 1-3.
[4] Laser target positioning based on symmetric wavelet denoising and asymmetric Gaussian fitting[J]. Chinese Journal of Lasers, 2017, 44(6): 178-185.
[5] Passive tracking algorithm based on machine vision under incomplete measurement[J]. Journal of Huazhong University of Science and Technology, 2017, 45(6): 33-37.
[6] Xu Cheng, Huang Daqing, et al. Passive target location and accuracy analysis for a small unmanned aerial vehicle[J]. Chinese Journal of Scientific Instrument, 2015, 36(5): 1115-1122.
[7] Gong Yongyi, et al. A review of image depth extraction methods for 2D-to-3D image/video conversion[J]. Journal of Image and Graphics, 2014, 19(10): 1393-1406.
[8] Wan Hao, Xu Xieweng, Xi Kun, et al. OpenCV-based binocular ranging system[J]. University Journal, 2014, 32(2): 188-194.
[9] Sun W, Chen L, Hu B, et al. Binocular vision-based position determination algorithm and system[C]//Proceedings of the 2012 International Conference on Computer Distributed Control and Intelligent Environmental Monitoring. Piscataway: IEEE Computer Society, 2012: 170-173.
[10] Ikeuchi K. Determining a depth map using a dual photometric stereo[J]. The International Journal of Robotics Research, 1987, 6(1): 15-31.
[11] Shao M, Simchony T, Chellappa R. New algorithms for reconstruction of a 3-D depth map from one or more images[C]//Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 530-535.
[12] Matthies L, Kanade T, Szeliski R. Kalman filter-based algorithms for estimating depth from image sequences[J]. International Journal of Computer Vision, 1989, 3(3): 209-238.
[13] Matthies L, Szeliski R, Kanade T. Incremental estimation of dense depth maps from image sequences[C]//Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 366-374.
[14] Mori T, Yamamoto M. A dynamic depth extraction method[C]//Proceedings of the Third International Conference on Computer Vision. Osaka: IEEE, 1990: 672-676.
[15] Inoue H, Tachikawa T, Inaba M. Robot vision system with a correlation chip for real-time tracking, optical flow and depth map generation[C]//Proceedings of Robotics and Automation. Nice: IEEE, 1992: 1621-1626.
[16] Tree image ranging method based on binocular vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2010, 41(11): 158-162.
[17] Research on a monocular-vision-based robot target positioning and ranging method[J]. Computer Measurement and Control, 2012, 20(10): 2654-2660.
[18] Wu Gang, Shao Chang. Ranging study in monocular autonomous robot visual navigation: 828-832.
[19] Study of front vehicle detection and ranging methods based on monocular vision[J]. Video Applications and Engineering, 2011, 35(1): 125-128.
[20] Wu C F, Lin C J, Lee C Y, et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement[J]. IEEE Transactions on Systems, Man and Cybernetics-Part C: Applications and Reviews, 2012, 42(4): 577-589.
[21] Monocular depth information extraction based on a single vertical target image[J]. Journal of Beijing University of Aeronautics and Astronautics, 2015, 41(4): 649-655.
Disclosure of Invention
The invention aims to provide a novel target and a method for extracting its sub-pixel level corner points. The novel target not only allows accurate extraction of corner points in the optical axis direction but also reasonably avoids the influence of perspective transformation on corner extraction accuracy; the extraction method of the sub-pixel level corner points does not require the number of checkerboard squares to be specified in advance, is highly robust as an algorithm, and extracts corners well from pictures with a large degree of distortion.
In order to achieve the purpose, the invention adopts the following technical scheme:
the utility model provides a novel mark target, includes the crisscross superimposed chess board check mark target of rectangle black color piece and rectangle white piece, and the width of each rectangle black color piece and rectangle white color piece is the fixed length and equals its characterized in that: the target is horizontally placed, the size of a row of color blocks of the target closest to the camera is set to be d mm x d mm, and the length of the next row is increased by the length of the previous row
Figure BDA0001762578370000051
Let xiIs the actual distance from the ith corner point to the camera, yiThe length of each color block is the difference delta d between the lengths of the adjacent color blocksiComprises the following steps:
Figure BDA0001762578370000052
and (3) setting the relation between the calculated length of each color block and the actual distance as f (x), and obtaining the result according to the formula (8):
Figure BDA0001762578370000053
thus, when the target is dmm x d mm from the closest row of tiles to the camera, each row is then of fixed width and increased length
Figure BDA0001762578370000054
D x f (x) mm.
A method for extracting the sub-pixel level corner points of the above novel target comprises the following steps:
using the growth-based checkerboard corner detection method proposed by Andreas Geiger et al., two different corner templates are defined, one for corners parallel to the coordinate axes and one for corners rotated by 45°; corners are searched on the image according to the similarity between each pixel point and the templates, giving the initial corner detection; the corner positions and directions are then refined to sub-pixel level, applying the cornerSubPix() function in OpenCV for sub-pixel corner localization and refining the edge direction vectors by minimizing their deviation from the gradient image; finally the checkerboard is grown and reconstructed according to the energy function, the corners are marked, and the sub-pixel level corner coordinates are output.
Compared with the prior art, the invention has the following beneficial effects:
(1) Starting from the traditional checkerboard target and taking into account the perspective transformation phenomenon that occurs when a camera photographs the horizontal ground, a novel checkerboard target of specific specification, with equal width and increasing length, is adopted as the experimental material; this target not only allows accurate extraction of corner points in the optical axis direction but also reasonably avoids the influence of perspective transformation on corner extraction accuracy;
(2) The growth-based checkerboard corner detection method proposed by Andreas Geiger et al. is combined with the cornerSubPix() function provided by OpenCV; the number of checkerboard squares need not be specified in advance, the algorithm is highly robust, and the extraction works well on pictures with a large degree of distortion.
Drawings
FIG. 1 is a diagram of a novel target;
FIG. 2 is a flow chart of an implementation of a corner detection algorithm;
FIG. 3 is a geometric model diagram of a camera with an upward viewing angle higher than the horizon;
FIG. 4 is a geometric model diagram of a camera with an upward view below the horizon;
FIG. 5 is a camera shot projection geometry model;
FIG. 6 is a schematic diagram of coordinate systems in a pinhole model;
FIG. 7 is a camera stereo imaging system principle;
FIG. 8 is a graph of object ordinate pixel values versus imaging angle;
Detailed Description
In order to make the technical solution of the present invention clearer, the present invention will be described in detail below with reference to fig. 1 to 8. It should be understood that the detailed description and specific examples, while indicating the scope of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The invention relates to a novel target, which comprises a checkerboard target of alternately arranged rectangular black color blocks and rectangular white color blocks, the widths of the rectangular black and white blocks being fixed and equal. The target is placed horizontally; the row of color blocks of the target closest to the camera has size d mm × d mm, and each subsequent row is longer than the row in front of it by Δd_i.

Let x_i be the actual distance from the i-th corner point to the camera and y_i the length of each color block; the difference Δd_i between the lengths of adjacent color blocks is:

Δd_i = y_{i+1} - y_i (8)

Let the relation between the calculated length of each color block and the actual distance be f(x); from formula (8):

Δd_i = f(x_{i+1}) - f(x_i) ≈ d·f′(x)

Thus, when the row of color blocks of the target closest to the camera measures d mm × d mm, each row thereafter has fixed width and its length increases by d·f′(x) mm.
With the novel target of the invention, verification shows that the calculated length of each color block and the actual distance are in a highly significant linear correlation, so f′(x) can preferably be taken as a constant.
A method for extracting the sub-pixel level corner points of the novel target comprises the following steps: using the growth-based checkerboard corner detection method proposed by Andreas Geiger et al., two different corner templates are defined, one for corners parallel to the coordinate axes and one for corners rotated by 45°; corners are searched on the image according to the similarity between each pixel point and the templates, giving the initial corner detection; the corner positions and directions are then refined to sub-pixel level, applying the cornerSubPix() function in OpenCV for sub-pixel corner localization and refining the edge direction vectors by minimizing their deviation from the gradient image; finally the checkerboard is grown and reconstructed according to the energy function, the corners are marked, and the sub-pixel level corner coordinates are output.
Example 1
A monocular vision-based optimized depth extraction and passive ranging method comprises the following steps:
firstly, calibrating a camera to obtain internal parameters and image resolution of the camera. The calibration adopts a Zhangyingyou calibration method, and an improved calibration model with a nonlinear distortion term is introduced to correct the internal parameters of the camera.
First, let the physical size of each pixel on the image plane be dx × dy (unit: mm), let (u0, v0) be the coordinates of the origin of the image coordinate system (x, y) in the pixel coordinate system (u, v), and let (x, y) be the normalized coordinates of an image point in the actual image. Any pixel of the image then satisfies the following relation between the two coordinate systems:

u = x/dx + u0, v = y/dy + v0 (1)
fx and fy are the normalized focal lengths on the x axis and the y axis. Any point Pc(Xc, Yc, Zc) in the camera coordinate system is projected onto the image coordinate system at (xc, yc, f); the plane of the image coordinate system is perpendicular to the optical axis (the z axis) at distance f from the origin. According to the similar-triangle principle:

x = f·Xc/Zc, y = f·Yc/Zc (2)
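Formulas (1) and (2) amount to a few lines of code. The sketch below is illustrative only; the focal length, pixel pitch and test point are invented values, while u0 and v0 match the calibration quoted later:

```python
def project_to_pixel(Xc, Yc, Zc, f, dx, dy, u0, v0):
    """Project a camera-frame point (Xc, Yc, Zc) to pixel coordinates (u, v)."""
    x = f * Xc / Zc        # formula (2): perspective projection onto the image plane (mm)
    y = f * Yc / Zc
    u = x / dx + u0        # formula (1): image-plane mm -> pixel coordinates,
    v = y / dy + v0        # shifted to the pixel-coordinate origin (u0, v0)
    return u, v

# Example: a point 1 m in front of the camera; f = 4.8 mm, 1.4 um pixel pitch (assumed).
print(project_to_pixel(100.0, 50.0, 1000.0, 4.8, 0.0014, 0.0014, 1569.0383, 2107.9899))
```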
the improved calibration model with the nonlinear distortion term is introduced, the calibration model comprises radial distortion caused by lens shape defects and tangential distortion caused by different degrees of eccentricity of an optical system, and a radial distortion mathematical model is as follows:
Figure BDA0001762578370000082
wherein r is2=x2+y2(x ', y') is a normalized coordinate value of the ideal linear camera coordinate system without distortion terms after correction, the radial distortion value is related to the position of the image point in the image, the radial distortion value at the edge of the image is larger,
the mathematical model of the tangential distortion model is as follows:
Figure BDA0001762578370000083
wherein contains k1、k2、k3、p1、p2The distortion correction function model obtained by the equations (3) and (4) is as follows:
Figure BDA0001762578370000084
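For concreteness, formula (5) can be sketched in a few lines of Python (a minimal sketch of the formula as stated, applied to normalized coordinates; the sample point is an invented value, and the distortion vector is the calibrated one quoted below, in the OpenCV order [k1, k2, p1, p2, k3]):

```python
def correct_normalized(x, y, k1, k2, k3, p1, p2):
    """Evaluate the distortion correction model of formula (5)
    on normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_c = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_c = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_c, y_c

# Calibrated distortion vector of this patent, OpenCV order [k1, k2, p1, p2, k3].
k1, k2, p1, p2, k3 = 0.0981, -0.1678, 0.0003, -0.0025, 0.0975
print(correct_normalized(0.2, 0.1, k1, k2, k3, p1, p2))
```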
the transformation from world coordinates to camera coordinates has the following relationship:
Pc=R·(PW-C)=R·PW+T (6)
combining equations (1) - (6), expressed in terms of homogeneous coordinates and matrix form, can be:
Figure BDA0001762578370000085
Mint、Mextrespectively calibrating an internal parameter matrix and an external parameter matrix of the camera, wherein the internal parameter of the camera comprises a pixel value u of a central point of an image0、v0,fx、fyThe calibration of the mobile phone camera is realized by combining Java with OpenCV for normalized focal lengths on an x axis and a y axis, and the internal parameters, the camera lens distortion parameters and the image resolution v of the mobile phone camera are obtainedmax、umax(ii) a The internal parameters of the camera obtained by calibration are as follows: f. ofx=3486.5637,u0=1569.0383,fy=3497.4652,v0When 2107.9899, the image resolution is 3120 × 4208, and the camera lens distortion parameters are: [0.0981, -0.1678,0.0003, -0.0025,0.0975],
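For reference, the calibrated intrinsic parameters assemble into the internal parameter matrix Mint of equation (7) as follows (a short sketch assuming the standard 3 × 3 pinhole intrinsic layout):

```python
import numpy as np

# Internal parameter matrix Mint built from the calibrated values above.
fx, fy = 3486.5637, 3497.4652
u0, v0 = 1569.0383, 2107.9899
M_int = np.array([[fx, 0.0, u0],
                  [0.0, fy, v0],
                  [0.0, 0.0, 1.0]])
print(M_int)
```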
In the second step, the camera depth extraction model is established through acquisition of images of the novel target. The existing target is a black-and-white checkerboard of equal length and width. The novel target of the invention differs in that the row of squares closest to the camera has size d × d mm; the width of every row of squares is then fixed, while each row is longer than the previous row by Δd_i,

where x_i is the actual distance from the i-th corner point to the camera and y_i is the length of each square; the difference Δd_i between the lengths of adjacent squares is:

Δd_i = y_{i+1} - y_i (8)

Assuming the relation between the calculated length of each square and the actual distance is f(x), formula (8) gives:

Δd_i = f(x_{i+1}) - f(x_i) ≈ d·f′(x)

Pearson correlation analysis shows a highly significant linear correlation between the length and the actual distance (p < 0.01), with correlation coefficient r = 0.975; the derivative f′(x) of f(x) can then be calculated by the least-squares method.

Therefore, when the row of squares of the target closest to the camera has size d × d mm (measurement accuracy is highest for d between 30 mm and 60 mm), every row has fixed width and its length increases by Δd = d·f′(x) mm. The novel target is shown in FIG. 1.
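To make the least-squares step concrete, a short sketch follows (the (x_i, y_i) samples below are invented placeholders standing in for the measured data of Table 1, not the patent's values):

```python
import numpy as np

# Hypothetical (distance, block length) samples standing in for the measured data.
x = np.array([500.0, 700.0, 900.0, 1100.0, 1300.0, 1500.0])   # mm from the camera
y = np.array([45.0, 97.0, 150.0, 204.0, 256.0, 308.0])        # computed block lengths, mm

# Fit y = f(x) as a line; since f is linear, its slope is the constant f'(x).
slope, intercept = np.polyfit(x, y, 1)
d = 45.0                              # side of the row closest to the camera, mm
delta_d = d * slope                   # length increment per row, Δd = d·f'(x)
print(f"f'(x) ≈ {slope:.3f}, Δd ≈ {delta_d:.2f} mm")
```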
Because the perspective transformation phenomenon exists when objects on the horizontal ground are photographed, common corner detection algorithms such as Harris and Shi-Tomasi are not robust here and fail when the camera rotates anticlockwise about the x axis of the camera coordinate system. The invention therefore combines the growth-based checkerboard corner detection method proposed by Andreas Geiger et al. with the cornerSubPix() function provided by OpenCV for sub-pixel level corner position detection; the algorithm is highly robust and extracts corners well from pictures with a large degree of distortion.
The implementation flow of the corner detection algorithm is shown in FIG. 2. The sub-pixel level corner extraction of the novel target of the invention comprises the following steps:

1) Search the image for corner points according to the similarity between each pixel point and the templates, and locate the corner positions of the target.

First, two different corner templates are defined, one for corners parallel to the coordinate axes and one for corners rotated by 45°; each template consists of four filtering kernels {A, B, C, E} used to convolve the image. The similarity of each pixel to a corner is then computed with the two templates from the convolution responses f_X^i of kernel X (X = A, B, C, E) of template i (i = 1, 2) at that pixel, where s_i^1 and s_i^2 denote the similarities of the two possible corner polarities of template i; evaluating this at every pixel of the image yields a corner similarity map. The corner similarity map is processed with a non-maximum suppression algorithm to obtain candidate points. The candidate points are then verified by gradient statistics in an n × n neighborhood of the local region: the local gray-scale image is first Sobel-filtered, then a weighted orientation histogram (32 bins) is computed, and its two dominant modes γ1 and γ2 are found with the mean-shift algorithm. According to the edge directions, a template T of the expected gradient strength is constructed; the cross-correlation of T with the gradient magnitude, multiplied by the corner similarity, gives the corner score, and thresholding yields the initial corners.
2) Refine the corner positions and directions to sub-pixel level.

Sub-pixel corner localization is performed with the cornerSubPix() function in OpenCV, locating the corners to sub-pixel accuracy and thereby obtaining a sub-pixel level corner detection result. To refine the edge direction vectors, their deviation is minimized according to the image gradient values: the gradient values of the neighboring pixel set are matched with the mode vectors mi = [cos(γi) sin(γi)]^T. (The computation follows Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot[C]//Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943.)
3) Mark the corner points and output their sub-pixel level coordinates: grow and reconstruct the checkerboard according to the energy function, mark the corners, and output the sub-pixel level corner coordinates.

Following the document Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot[C]//Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943, the checkerboard is grown and reconstructed according to the energy function:

E(x, y) = Ecorners(y) + Estruct(x, y) (16)

where Ecorners is the negative of the total number of corner points of the current board and Estruct is the degree of match between two adjacent corners and the predicted corner; the corner pixel values are output via OpenCV.
Linear correlation analysis between the object imaging angle and the ordinate pixel value was performed with SPSS 22, and the Pearson correlation coefficients r were output as shown in Table 2. Verification shows that under the various device models and camera rotation angles the object ordinate pixel value and the actual imaging angle are in a highly significant negative correlation (p < 0.01). In addition, the invention also performed a significance test on the differences between the slopes of the linear functions relating the object ordinate pixel value and the imaging angle under different device models and camera rotation angles; the results show these slope differences to be highly significant (p < 0.01), indicating that different device models and camera rotation angles have different depth extraction models.
Table 2 Pearson correlation coefficients of image ordinate pixel values and actual imaging angles
[table content given as an image in the original]
Note: ** indicates a highly significant correlation (p < 0.01).
Verification shows that, for a given device model and camera rotation angle, the object ordinate pixel value and the actual imaging angle are in a highly significant negative correlation (p < 0.01), with correlation coefficient r > 0.99. In addition, the significance test on the slope differences of the linear functions between the object ordinate pixel value and the imaging angle under different device models and camera rotation angles shows the differences to be highly significant (p < 0.01), indicating that each device model and camera rotation angle has its own depth extraction model.
Based on the linear relation between the target imaging angle α and the ordinate pixel value v, an abstract function is set, establishing a spatial relation model of the three parameters imaging angle α, ordinate pixel value v and camera rotation angle β, namely α = F(v, β).

Under different device models and camera rotation angles, the ordinate pixel value of the photographed object and the imaging angle are in a highly significant negative linear correlation, with differing slopes and intercepts; therefore let:

α = F(v, β) = a·v + b (17)

where the parameters a and b depend on the device model and the camera rotation angle.

When α takes its minimum value α = αmin = 90° - θ - β, where θ is half the vertical field angle of the camera, the object is projected to the bottom of the picture and v = vmax (vmax is the effective pixel count of the column coordinate of the camera's CMOS or CCD image sensor). Substituting into formula (17) gives:

90 - β - θ = a·vmax + b (18)

When αmin + 2θ > 90°, i.e. θ > β, the upward angle of view of the camera is above the horizon, and the shooting projection geometric model of the camera is shown in FIG. 3. For a ground plane point at infinity, α approaches 90° while v approaches v0 - tanβ·fy, where fy is the focal length of the camera in pixel units; the same holds for negative β, i.e. anticlockwise camera rotation. Substituting into formula (17) gives:

90 = a·(v0 - tanβ·fy) + b (19)

When αmin + 2θ < 90°, i.e. θ < β, the upward angle of view of the camera is below the horizon, and the shooting projection geometric model of the camera is shown in FIG. 4. The imaging angle of a target at infinity on the ground plane takes its maximum value αmax = αmin + 2θ = 90° - β + θ, i.e. the object is projected to the top of the picture and v = 0. Substituting into formula (17) gives:

90 - β + θ = b (20)

According to the pinhole camera construction principle, the tangent of half the vertical field angle θ of the camera equals half the side length of the camera's CMOS or CCD image sensor divided by the focal length of the camera, so that:

tan θ = lCMOS/(2f) (21)

In formula (21), lCMOS is the side length of the camera's CMOS or CCD image sensor. Combining equations (18) to (21) and solving for a and b, F(v, β) is:

α = F(v, β) = 90 - (β + θ)·(v - v0 + tanβ·fy)/(vmax - v0 + tanβ·fy) + δ, when θ > β
α = F(v, β) = 90 - β + θ - 2θ·v/vmax + δ, when θ < β (10)

In formula (10), δ is the error of the nonlinear distortion term of the camera. Combining the photographing height h of the mobile phone camera, the depth extraction model of the mobile phone camera is established according to the trigonometric function principle:

D = h·tan α = h·tan F(v, β) (11)

Substituting the internal parameters of the Xiaomi Mi 3 phone camera into formula (10) yields the device-specific angle model [given as an image in the original], and the specific depth extraction model of the device, obtained according to the trigonometric function principle, follows as D = h·tan α with these parameters.
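Expressed in code, the model reads as follows; this is an illustrative reconstruction of formulas (10)-(11), obtained by solving equations (18)-(20) for a and b, with the distortion-term error δ omitted (an assumption, not the patent's code):

```python
import math

def imaging_angle(v, beta_deg, theta_deg, v0, vmax, fy):
    """Imaging angle α in degrees from the ordinate pixel v, per formula (10), δ omitted."""
    beta = math.radians(beta_deg)
    if theta_deg > beta_deg:
        # Upward view above the horizon (FIG. 3 case), from equations (18)-(19).
        denom = vmax - v0 + math.tan(beta) * fy
        return 90 - (beta_deg + theta_deg) * (v - v0 + math.tan(beta) * fy) / denom
    # Upward view below the horizon (FIG. 4 case), from equations (18) and (20).
    return 90 - beta_deg + theta_deg - 2 * theta_deg * v / vmax

def depth(v, beta_deg, theta_deg, v0, vmax, fy, h):
    """Target depth D = h·tan α, formula (11); h and D in mm."""
    alpha = imaging_angle(v, beta_deg, theta_deg, v0, vmax, fy)
    return h * math.tan(math.radians(alpha))
```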
and thirdly, acquiring pixel values u and v of the target points through image acquisition of the target object to be detected. Acquiring images through a mobile phone camera, and establishing a projection geometric model as shown in fig. 5, wherein f is a camera focal length, theta is a half of a vertical field angle of the camera, h is a camera photographing height, beta is a rotation angle of the camera along an ox axis of a camera coordinate system, a clockwise rotation beta value of the camera is positive, an anticlockwise rotation beta value is negative, the beta value is acquired through a gravity sensor in the camera, and alpha is an imaging angle of a target object; combining the camera lens distortion parameters obtained by the first step of camera calibration, and performing nonlinear distortion correction on radial distortion and tangential distortion errors existing in the image; substituting the corrected ideal linear normalized coordinate values (x, y) into a formula (1), calculating pixel coordinate values of each point of the corrected image, and performing interpolation processing on the corrected pixel values by a bilinear interpolation method to obtain the corrected image; and (3) preprocessing the corrected image by adopting computer vision and image processing technologies, wherein the preprocessing comprises image binarization, image morphological operation and target object contour edge detection to obtain the edge of the target object, and further calculating the pixel value (u, v) of the geometric center point of the edge of the target object, which is in contact with the ground.
A Xiaomi Mi 3 (MI 3) phone camera is used as the picture-taking device, photographing from a camera tripod; the measured height from the camera to the ground is h = 305 mm, and the camera rotation angle is β = 0°.

Nonlinear distortion correction is performed on the radial and tangential distortion errors present in the image.

With the camera lens distortion parameters obtained from the camera calibration in the first step, [0.0981, -0.1678, 0.0003, -0.0025, 0.0975], the corrected ideal linear normalized coordinate values are calculated according to formula (5).

The pixel coordinate values of each point of the corrected image are calculated by combining formulas (1) and (2), and the corrected image is obtained through bilinear interpolation.
the depth and the distance of a cuboid box placed on a horizontal ground are measured by taking the cuboid box as an example, firstly, binarization processing is carried out on an acquired image, then, edge detection is carried out on the cuboid box by using a Canny operator, and the outline of a target object is extracted. The pixel value of the central point of the bottom edge of the rectangular box is extracted to be (1851.23, 3490).
In the fourth step, the distance L between any point of the target object image and the phone is calculated from the camera internal parameters and target pixel values obtained above, combined with the camera depth extraction model. The appropriate depth model is selected according to the relation between the camera rotation angle β and half the vertical field angle θ of the camera; the calibrated image center ordinate v0, the normalized focal length fy on the y axis, the image resolution vmax, the ordinate pixel value v of the target object computed above, the camera rotation angle β and the photographing height h of the phone camera are substituted into the depth extraction model to compute the depth value D of the target point.
FIG. 6 shows the principle of the camera stereo imaging system: point P is the camera position, and the straight line on which points A and B lie is parallel to the image plane; A has coordinates (X, Y, Z) in the camera coordinate system and B has coordinates (X + Tx, Y, Z), and they project onto the image plane at A′(xl, yl) and B′(xr, yr). From equation (2):

xl = f·X/Z, xr = f·(X + Tx)/Z (22)

Combining equation (1) and equation (22), the horizontal parallax d of the two points A′ and B′, which have the same Y value and the same depth Z, can be derived:

d = ur - ul = fx·Tx/Z (23)

Thus, with the camera focal length f, the image center point coordinates (u0, v0) and the physical size dx of each pixel in the x-axis direction of the image plane known, and combining the depth extraction model, the perpendicular distance Tx from the target point to the optical axis direction is calculated:

Tx = (u - u0)·D/fx (12)

In the pinhole model, the transformation relationship between the coordinate systems of the camera is shown in FIG. 7. With the target depth D and the perpendicular distance Tx to the optical axis calculated, the distance L between any point on the image and the camera follows from equations (11)-(12):

L = √(D² + Tx²)
the actual imaging angle of the target object can be calculated to be 69.58 degrees by substituting the internal parameters of the camera, the photographing height h of the camera, the rotating angle beta and the vertical coordinate pixel value v of the central point of the bottom edge of the cuboid box into a formula (24). The target point depth value D (in mm) is calculated according to the trigonometric function principle:
D=305*tan 69.58°=819.21 (27)
will be the parameter fx,u0D and substituting the horizontal coordinate pixel value u of the bottom edge center point of the cuboid box into a formula (12) can calculate the vertical distance T from the geometric center point of the target object to the optical axis directionx
Figure BDA0001762578370000161
Therefore, the distance L from the rectangular box to the ground projection point of the shooting camera is as follows:
Figure BDA0001762578370000162
the distance between the rectangular box and the ground projection point of the camera is 827mm through measuring by a tape, so that the relative error of the method for measuring the distance is 0.62 percent.
Example 2
The novel target of the invention and the method for extracting its sub-pixel level corner points are described in detail below, taking a Xiaomi Mi 3 (MI 3) phone as the example.
Firstly, calibrating a camera to acquire internal parameters and image resolution of the camera
An 8 × 9 checkerboard with 20 × 20 mm squares is used as the experimental material for camera calibration. Twenty calibration board pictures at different angles are acquired with the Xiaomi Mi 3 phone camera, and the Xiaomi Mi 3 (MI 3) phone camera is calibrated with OpenCV according to the improved camera calibration model with the nonlinear distortion term.

First the calibration board pictures are read with the imread() function, and the image resolution is obtained from the cols and rows of the first picture; the sub-pixel level corner points in the calibration board pictures are then extracted with the find4QuadCornerSubpix() function, and the corners are marked with the drawChessboardCorners() function; the calibrateCamera() function is called to calibrate the camera, the obtained internal and external camera parameters are used to re-project the three-dimensional space points into new projection points, and the error between the new and old projection points is calculated; finally the camera internal parameter matrix and the distortion parameters are output and saved.

The internal parameters obtained by calibration are: fx = 3486.5637, u0 = 1569.0383, fy = 3497.4652, v0 = 2107.9899; the image resolution is 3120 × 4208; the camera lens distortion parameters are [0.0981, -0.1678, 0.0003, -0.0025, 0.0975].
Secondly, establishing a camera depth extraction model through acquisition of novel target images
In the invention a traditional checkerboard calibration board with 45 × 45 mm squares is used as the initial experimental material for the target design. To calculate the length difference between adjacent squares, six groups of experiments were designed: the corner values of the traditional checkerboard with 45 × 45 mm squares were extracted, and the actual physical distance represented by a unit pixel between adjacent corners in the world coordinate system was calculated. To keep the ordinate pixel differences between corners approximately equal, the length y_i of each square takes the values shown in Table 1.

Table 1 Calculated length of each square
[table content given as an image in the original]
Pearson correlation analysis shows a highly significant linear correlation between the length and the actual distance (p < 0.01), with correlation coefficient r = 0.975, and the derivative of f(x) calculated by the least-squares method is f′(x) = 0.262. Therefore, when the row of squares of the target closest to the camera measures 45 mm, every row has fixed width and the length increment is Δd = 45 × 0.262 = 11.79 mm.

The corner points of the novel target are then extracted according to the corner extraction algorithm of the implementation steps described above.
the invention uses millet 3 to respectively select three different types of smart phones including millet, Huashi and iPhone as image acquisition equipment, and the rotation angle beta of the camera is 0 degree. The corner detection algorithm is used for acquiring data and performing function fitting on the relationship, fig. 8 shows the relationship between the fitted ordinate pixel value and the object imaging angle, the data can be extracted according to the corner and the relationship shown in fig. 8, and the depth extraction model of the millet 3 mobile phone when the camera rotation angle beta is 0 degrees is as follows:
α=-0.015v+112.6 (30)
and (8) extracting the pixel value of the bottom edge center point of the cuboid box to be (1762.05, 2360), and substituting the vertical coordinate pixel value v of the bottom edge center point of the cuboid box into a formula (29) to calculate that the actual imaging angle of the target is equal to 74.2 degrees. The target point depth value D (in mm) is calculated according to the trigonometric function principle:
D=305*tan 74.2°=1077.85 (31)。
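The computation can be replayed directly (using the fitted model (30); note that with the rounded coefficients -0.015 and 112.6 the angle for v = 2360 evaluates to 77.2°, while the patent quotes 74.2°, which suggests more precise fitted coefficients were used):

```python
import math

a, b = -0.015, 112.6          # fitted model (30): α = a·v + b
v = 2360                      # ordinate of the box's bottom-edge center point
alpha = a * v + b             # 77.2° with the rounded coefficients; the patent quotes 74.2°
D = 305 * math.tan(math.radians(74.2))
print(alpha, D)               # D ≈ 1077.85 mm, matching (31)
```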

Claims (2)

1. A novel target, comprising a checkerboard target of alternately arranged rectangular black blocks and rectangular white blocks, the width of each rectangular black block and rectangular white block being fixed and equal, characterized in that: the target is placed horizontally, the row of color blocks of the target closest to the camera has size d mm × d mm, and each subsequent row is longer than the row in front of it by Δd_i;

let x_i be the actual distance from the i-th corner point to the camera and y_i the length of each color block; the difference Δd_i between the lengths of adjacent color blocks is:

Δd_i = y_{i+1} - y_i (8)

let the relation between the calculated length of each color block and the actual distance be f(x); from formula (8):

Δd_i = f(x_{i+1}) - f(x_i) ≈ d·f′(x)

thus, when the row of color blocks of the target closest to the camera measures d mm × d mm, each row thereafter has fixed width and its length increases by d·f′(x) mm.
2. A method for extracting the sub-pixel level corner points of the novel target of claim 1, comprising the following steps:
defining two different corner templates with a growth-based checkerboard corner detection method, one for corners parallel to the coordinate axes and one for corners rotated by 45°; searching the image for corners according to the similarity between each pixel point and the templates to perform the initial corner detection; refining the corner positions and directions to sub-pixel level, applying the cornerSubPix() function in OpenCV for sub-pixel corner localization and refining the edge direction vectors by minimizing their deviation from the gradient image; and finally growing and reconstructing the checkerboard according to the energy function, marking the corners, and outputting the sub-pixel level corner coordinates.
CN201810918877.7A 2018-08-12 2018-08-12 Novel target and extraction method of sub-pixel level angular points thereof Active CN109255818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810918877.7A CN109255818B (en) 2018-08-12 2018-08-12 Novel target and extraction method of sub-pixel level angular points thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810918877.7A CN109255818B (en) 2018-08-12 2018-08-12 Novel target and extraction method of sub-pixel level angular points thereof

Publications (2)

Publication Number Publication Date
CN109255818A CN109255818A (en) 2019-01-22
CN109255818B true CN109255818B (en) 2021-05-28

Family

ID=65049244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810918877.7A Active CN109255818B (en) 2018-08-12 2018-08-12 Novel target and extraction method of sub-pixel level angular points thereof

Country Status (1)

Country Link
CN (1) CN109255818B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859137B (en) * 2019-02-14 2023-02-17 重庆邮电大学 Wide-angle camera irregular distortion global correction method
CN110443856A (en) * 2019-08-12 2019-11-12 广州图语信息科技有限公司 A kind of 3D structure optical mode group scaling method, storage medium, electronic equipment
CN111798422B (en) * 2020-06-29 2023-10-20 福建汇川物联网技术科技股份有限公司 Checkerboard corner recognition method, device, equipment and storage medium
CN113256735B (en) * 2021-06-02 2021-10-08 杭州灵西机器人智能科技有限公司 Camera calibration method and system based on binocular calibration
CN113658272B (en) * 2021-08-19 2023-11-17 亿咖通(湖北)技术有限公司 Vehicle-mounted camera calibration method, device, equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202066514U (en) * 2011-03-21 2011-12-07 李为民 Compound target for measuring large-scale coordinate
CN202622812U (en) * 2012-05-25 2012-12-26 山东天泽软控技术股份有限公司 Calibrating plate for visual system of robot
CN103019643A (en) * 2012-12-30 2013-04-03 中国海洋大学 Method for automatic correction and tiled display of plug-and-play large screen projections
CN203149664U (en) * 2013-03-27 2013-08-21 黑龙江科技学院 Calibration plate for binocular vision camera
CN103292710A (en) * 2013-05-27 2013-09-11 华南理工大学 Distance measuring method applying binocular visual parallax error distance-measuring principle
CN203217624U (en) * 2013-05-04 2013-09-25 长春工业大学 Novel checkerboarded calibration target
CN103927750A (en) * 2014-04-18 2014-07-16 上海理工大学 Detection method of checkboard grid image angular point sub pixel
CN204388802U (en) * 2015-01-19 2015-06-10 长春师范大学 Line-structured light vision system calibration plate
CN105105779A (en) * 2014-04-18 2015-12-02 Fei公司 High aspect ratio X-ray targets and uses of same
CN106803273A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 A kind of panoramic camera scaling method
CN206574134U (en) * 2017-03-09 2017-10-20 昆山鹰之眼软件技术有限公司 Automate scaling board


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A practical sub-pixel corner extraction method for X-shaped targets; Bai Ruilin; Optical Technique; 2010-07-31; Vol. 36, No. 4; full text *
Automatic marking of feature points on circular-array planar targets; Zhang Shengnan; Computer Engineering and Applications; 2016-12-31; Vol. 52, No. 2; full text *
Accurate measurement of the spatial attitude of objects based on planar targets; Zhou Chuande; Journal of Chongqing University; 2011-08-31; Vol. 34, No. 8; full text *

Also Published As

Publication number Publication date
CN109255818A (en) 2019-01-22

Similar Documents

Publication Publication Date Title
CN109146980B (en) Monocular vision based optimized depth extraction and passive distance measurement method
CN109035320B (en) Monocular vision-based depth extraction method
CN109255818B (en) Novel target and extraction method of sub-pixel level angular points thereof
CN109269430B (en) Multi-standing-tree breast height diameter passive measurement method based on deep extraction model
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN107977997B (en) Camera self-calibration method combined with laser radar three-dimensional point cloud data
CN110068270B (en) Monocular vision box volume measuring method based on multi-line structured light image recognition
Alismail et al. Automatic calibration of a range sensor and camera system
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN110956660B (en) Positioning method, robot, and computer storage medium
CN107977996B (en) Space target positioning method based on target calibration positioning model
CN107886547B (en) Fisheye camera calibration method and system
CN101286235A (en) Video camera calibration method based on flexible stereo target
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
CN108362205B (en) Space distance measuring method based on fringe projection
CN113592957A (en) Multi-laser radar and multi-camera combined calibration method and system
CN112465912A (en) Three-dimensional camera calibration method and device
CN112365545B (en) Calibration method of laser radar and visible light camera based on large-plane composite target
CN112489137A (en) RGBD camera calibration method and system
CN111383264A (en) Positioning method, positioning device, terminal and computer storage medium
JP2016218815A (en) Calibration device and method for line sensor camera
CN113012234A (en) High-precision camera calibration method based on plane transformation
CN113963067B (en) Calibration method for calibrating large-view-field visual sensor by using small target
CN113963065A (en) Lens internal reference calibration method and device based on external reference known and electronic equipment
CN112288821B (en) Method and device for calibrating external parameters of camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant