CN109255818A - A novel target and a method for extracting its sub-pixel corner points - Google Patents

A novel target and a method for extracting its sub-pixel corner points

Info

Publication number
CN109255818A
Authority
CN
China
Prior art keywords
camera
target
corner
pixel
image
Prior art date
Legal status
Granted
Application number
CN201810918877.7A
Other languages
Chinese (zh)
Other versions
CN109255818B (en)
Inventor
周素茵 (Zhou Suyin)
徐爱俊 (Xu Aijun)
武新梅 (Wu Xinmei)
Current Assignee
Jiyang College of Zhejiang A&F University
Original Assignee
Jiyang College of Zhejiang A&F University
Priority date
Filing date
Publication date
Application filed by Jiyang College of Zhejiang A&F University filed Critical Jiyang College of Zhejiang A&F University
Priority to CN201810918877.7A
Publication of CN109255818A
Application granted
Publication of CN109255818B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/80 - Geometric correction
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20112 - Image segmentation details
    • G06T 2207/20164 - Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a novel target and a method for extracting its sub-pixel corner points. The novel target is a checkerboard target in which rectangular black blocks and rectangular white blocks are alternately arranged; all black and white blocks have equal width, the target is placed horizontally, and each row is longer than the row in front of it by an increment Δd_i = d·f′(x) mm. The extraction method uses the growth-based checkerboard corner detection approach to define two different corner templates, locates corners to sub-pixel accuracy with the cornerSubPix() function in OpenCV, grows and reconstructs the checkerboard according to an energy function, marks the corners, and outputs their sub-pixel coordinates. The invention extracts corner points along the optical-axis direction accurately and reasonably avoids the loss of corner-extraction accuracy caused by perspective transformation; the sub-pixel corner extraction method performs well on images with severe distortion.

Description

A novel target and a method for extracting its sub-pixel corner points
Technical field
The present invention relates to the field of terrestrial close-range photogrammetry, and in particular to a novel target for a pinhole camera in a monocular vision system and a method for extracting its sub-pixel corner points.
Background art
Image-based object ranging is a current research hotspot and has been widely applied in many industrial fields, such as forestry surveying and autonomous driving. Image-based ranging methods fall broadly into two categories: active ranging and passive ranging [1]. Active ranging measures distance with devices such as laser radar [2-4]. Passive ranging uses machine vision to compute object depth information from two-dimensional digital images and then calculates the object distance from the image pixel information and the camera imaging model [5-6]. Machine vision can replace much manual work, raise the level of production automation, and improve measurement accuracy; it is also an effective solution where many conventional measurement methods fail. Machine-vision ranging is mainly divided into monocular and binocular approaches [1-9]. Early depth-acquisition methods relied chiefly on binocular stereo vision or on camera motion information and required multiple images to recover depth [10-16]. Compared with binocular ranging, monocular ranging does not demand stringent hardware conditions and is therefore more competitive. Because the geometric position of an image point is closely related to the geometric position of the corresponding object point in the real world, and this relationship is determined by the geometric model of camera imaging, once the parameters of this model are determined the correspondence between two-dimensional image points and three-dimensional object points can be expressed completely, and the distance to an object in the image can be computed.
In the prior art there are many ways for a monocular vision system to acquire object depth information. One is to obtain the depth of the target with a corresponding-points calibration method [17-19]. Document [17] studied a robot target-positioning and ranging method based on monocular vision: the intrinsic and extrinsic camera parameters are obtained by camera calibration, and the transformation between the image coordinate system and the world coordinate system is solved with a projection model so as to compute the object depth. Unfortunately, this method must capture target images in several orientations and accurately record the coordinates of each point in both the world and image coordinate systems, so the calibration accuracy strongly affects the measurement accuracy. Alternatively, a depth-extraction model can be built from the relationship between the actual imaging of objects in the world coordinate system and the image pixels, from which the distance between an object in the image and the camera is computed. Document [20] placed reference objects on the road surface and measured their distances, selected a suitable mathematical model to fit the relationship between reference distance and pixel position, and then used this relationship to extract depth in real time. Unfortunately, the accuracy of [20] suffers from long-distance measurement error and fitting error. Document [21] designed a vertical target image and, from its detected corner data, established the mapping between the image ordinate pixel value and the measured angle, then combined this relationship with the known height of a vehicle-mounted monocular camera to obtain depth information from the image. Because internal parameters differ between camera devices, this method must re-capture the target image and re-build the depth-extraction model for every camera model; moreover, differences in lens manufacture and assembly mean the camera pitch angle also varies, so the method of [21] generalises poorly. In addition, [21] used a vertical target to study the relationship between the imaging angle of points in a vertical plane and the ordinate pixel value, yet applied this relationship to measuring the distance of objects on the horizontal plane, which lowers the ranging accuracy because the distortion laws of the camera are not identical in the horizontal and vertical directions. The choice of target is crucial to building the depth-extraction model; however, when a traditional checkerboard target, whose squares are as long as they are wide, is used to study the relationship between the actual imaging angle of an object point and the ordinate pixel value of the corresponding image point, the perspective transformation makes the model fit poorly and the measurement accuracy low.
In addition, building the depth-extraction model through camera calibration makes the model interoperable across devices. Such a model needs the camera intrinsic parameters obtained by calibration, and the calibration accuracy is vital to the accuracy of image-based object ranging. Zhang Zhengyou's method is the most common camera-calibration method; it uses a traditional checkerboard target of alternating black and white squares. Such a target can only be used to calibrate the camera within a single plane, and if the target is laid horizontally the calibration accuracy drops accordingly. Patent application No. 201710849961.3 discloses an improved camera-calibration model and distortion-correction model suitable for smart-phone cameras (hereinafter: the improved pinhole model with a nonlinear distortion term). This method helps correct calibration-board images and obtains higher-precision intrinsic and extrinsic camera parameters, but the distortion-correction model only corrects the nonlinear distortion of the camera and does not address the perspective phenomenon in imaging.
Bibliography:
[1] He Ruofei, Tian Xuetao, Liu Hongjuan, et al. UAV target localization method based on Monte Carlo Kalman filtering [J]. Journal of Northwestern Polytechnical University, 2017, 35(3): 435-441.
[2] Lin F, Dong X, Chen B M, et al. A robust real-time embedded vision system on an unmanned rotorcraft for ground target following [J]. IEEE Transactions on Industrial Electronics, 2012, 59(2): 1038-1049.
[3] Zhang Wanlin, Hu Zhengliang, Zhu Jianjun, et al. A target-position calculation method for an individual-soldier integrated observation instrument [J]. Electronic Measurement Technology, 2014, 37(11): 1-3.
[4] Sun Junling, Sun Guangmin, Ma Pengge, et al. Laser spot location based on symmetric wavelet denoising and asymmetric Gaussian fitting [J]. Chinese Journal of Lasers, 2017, 44(6): 178-185.
[5] Shi Jie, Li Yinya, Qi Guoqing, et al. Passive tracking algorithm based on machine vision under incomplete measurements [J]. Journal of Huazhong University of Science and Technology, 2017, 45(6): 33-37.
[6] Xu Cheng, Huang Daqing. Passive target localization and precision analysis for small UAVs [J]. Chinese Journal of Scientific Instrument, 2015, 36(5): 1115-1122.
[7] Li Kehong, Jiang Lingmin, Gong Yongyi. A survey of depth-extraction methods for 2D-to-3D image/video conversion [J]. Journal of Image and Graphics, 2014, 19(10): 1393-1406.
[8] Wang Hao, Xu Zhiwen, Xie Kun, et al. Binocular ranging system based on OpenCV [J]. Journal of Jilin University, 2014, 32(2): 188-194.
[9] Sun W, Chen L, Hu B, et al. Binocular vision-based position determination algorithm and system [C]// Proceedings of the 2012 International Conference on Computer Distributed Control and Intelligent Environmental Monitoring. Piscataway: IEEE Computer Society, 2012: 170-173.
[10] Ikeuchi K. Determining a depth map using a dual photometric stereo [J]. The International Journal of Robotics Research, 1987, 6(1): 15-31.
[11] Shao M, Simchony T, Chellappa R. New algorithms for reconstruction of a 3-D depth map from one or more images [C]// Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 530-535.
[12] Matthies L, Kanade T, Szeliski R. Kalman filter-based algorithms for estimating depth from image sequences [J]. International Journal of Computer Vision, 1989, 3(3): 209-238.
[13] Matthies L, Szeliski R, Kanade T. Incremental estimation of dense depth maps from image sequences [C]// Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 366-374.
[14] Mori T, Yamamoto M. A dynamic depth extraction method [C]// Proceedings of the Third International Conference on Computer Vision. Osaka: IEEE, 1990: 672-676.
[15] Inoue H, Tachikawa T, Inaba M. Robot vision system with a correlation chip for real-time tracking, optical flow and depth map generation [C]// Proceedings of Robotics and Automation. Nice: IEEE, 1992: 1621-1626.
[16] Hu Tianxiang, Zheng Jiaqiang, Zhou Hongping. Tree image ranging method based on binocular vision [J]. Transactions of the Chinese Society for Agricultural Machinery, 2010, 41(11): 158-162.
[17] Yu Naigong, Huang Can, Lin Jia. Research on robot target positioning and ranging technology based on monocular vision [J]. Computer Measurement & Control, 2012, 20(10): 2654-2660.
[18] Wu Gang, Tang Zhenmin. Ranging research in the vision navigation of a monocular autonomous robot [J]. Robot, 2010, 32(6): 828-832.
[19] Lu Weiwei, Xiao Zhitao, Lei Meilin. Research on front-vehicle detection and ranging based on monocular vision [J]. Video Engineering, 2011, 35(1): 125-128.
[20] Wu C F, Lin C J, Lee C Y, et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement [J]. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2012, 42(4): 577-589.
[21] Huang Yunling, et al. Monocular depth information extraction based on a single vertical target image [J]. Journal of Beijing University of Aeronautics and Astronautics, 2015, 41(4): 649-655.
Summary of the invention
The object of the present invention is to provide a novel target and a method for extracting its sub-pixel corner points. The novel target not only allows corner points in the optical-axis direction to be extracted accurately but also reasonably avoids the loss of corner-extraction accuracy caused by perspective transformation. The extraction method does not require the number of checkerboard squares to be specified in advance, is highly robust, and performs well on severely distorted images.
To achieve the above object, the present invention adopts the following technical scheme:
A novel target, comprising a checkerboard target in which rectangular black blocks and rectangular white blocks are alternately arranged, the width of each rectangular black block and rectangular white block being fixed and equal, characterised in that: the target is placed horizontally; the row of blocks nearest the camera is set to d mm * d mm; and each subsequent row is longer than the previous row by an increment Δd_i. Let x_i be the actual distance from the i-th corner point to the camera and y_i the length of each block; then the difference between adjacent block lengths is
Δd_i = y_(i+1) - y_i  (8)
If the relationship between the computed length of each block and the actual distance is y = f(x), then from formula (8):
Δd_i = d·f′(x)  (9)
Therefore, when the row of blocks nearest the camera is d mm * d mm, every subsequent row keeps the same width and its length grows by the increment d·f′(x) mm.
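The row-length rule above can be sketched as follows, assuming (as the embodiment later verifies) that f(x) is linear so that f′(x) is a constant; the value of `fprime` below is a hypothetical illustration, not a figure from the patent:

```python
def target_row_lengths(d_mm, fprime, n_rows):
    """Block lengths per row of the novel target: every block keeps the
    fixed width d_mm, while each row farther from the camera is longer
    than the previous one by the constant increment d_mm * fprime."""
    delta = d_mm * fprime
    return [d_mm + i * delta for i in range(n_rows)]

# e.g. nearest row 30 mm x 30 mm, assumed slope f'(x) = 0.1
print(target_row_lengths(30.0, 0.1, 4))
```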
A method for extracting the sub-pixel corner points of the novel target of claim 1, characterised by comprising the following steps:
Using the growth-based checkerboard corner detection approach proposed by Andreas Geiger et al., define two different corner templates, one for corners parallel to the coordinate axes and one for corners rotated by 45°; find corners in the image according to the similarity between each pixel and the templates, giving an initial corner detection. Refine the position and orientation of each corner to sub-pixel accuracy, locating the corners with the cornerSubPix() function in OpenCV and refining the edge-orientation vectors by minimising the standard deviation of the gradient image. Finally, grow and reconstruct the checkerboard according to an energy function, mark the corners, and output their sub-pixel coordinates.
Compared with the prior art, owing to the adoption of the above technical scheme, the beneficial effects of the present invention are:
(1) On the basis of the traditional checkerboard target, and in view of the perspective transformation that occurs when the camera photographs the horizontal ground, a novel checkerboard target of a specific specification, with equal widths and incrementally increasing lengths, is used as the experimental material. This target not only allows corner points in the optical-axis direction to be extracted accurately, but also reasonably avoids the loss of corner-extraction accuracy caused by perspective transformation.
(2) The growth-based checkerboard corner detection approach proposed by Andreas Geiger et al. is combined with the cornerSubPix() function provided by OpenCV; the algorithm does not require the number of checkerboard squares to be specified in advance, is highly robust, and performs well on severely distorted images.
Brief description of the drawings
Fig. 1 is a diagram of the novel target;
Fig. 2 is a flow chart of the corner detection algorithm;
Fig. 3 is the shooting geometry model when the camera's upward viewing angle is above the horizontal;
Fig. 4 is the shooting geometry model when the camera's upward viewing angle is below the horizontal;
Fig. 5 is the perspective geometry model of camera shooting;
Fig. 6 is a schematic diagram of the coordinate systems in the pinhole model;
Fig. 7 shows the principle of the camera stereo imaging system;
Fig. 8 is the relationship between the object ordinate pixel value and the imaging angle;
Specific embodiments
In order to make the technical solution of the present invention clearer, the present invention is described in detail below with reference to Figs. 1 to 8. It should be understood that the specific embodiments described in this specification are only intended to explain the present invention and do not limit its scope of protection.
The present invention is a novel target, comprising a checkerboard target in which rectangular black blocks and rectangular white blocks are alternately arranged. The width of every rectangular black block and rectangular white block is fixed and equal; the target is placed horizontally; the row of blocks nearest the camera is set to d mm * d mm; and each subsequent row is longer than the previous row by an increment Δd_i. Let x_i be the actual distance from the i-th corner point to the camera and y_i the length of each block; then the difference between adjacent block lengths is
Δd_i = y_(i+1) - y_i  (8)
If the relationship between the computed length of each block and the actual distance is y = f(x), then from formula (8):
Δd_i = d·f′(x)  (9)
Therefore, when the row of blocks nearest the camera is d mm * d mm, every subsequent row keeps the same width and its length grows by the increment d·f′(x) mm.
It has been verified with the novel target of the invention that the computed length of each block and the actual distance are in an extremely significant linear correlation, so f(x) can be taken as linear and f′(x) as a constant.
The method for extracting the sub-pixel corner points of the above novel target comprises the following steps: using the growth-based checkerboard corner detection approach proposed by Andreas Geiger et al., define two different corner templates, one for corners parallel to the coordinate axes and one for corners rotated by 45°; find corners in the image according to the similarity between each pixel and the templates, giving an initial corner detection. Refine the position and orientation of each corner to sub-pixel accuracy, locating the corners with the cornerSubPix() function in OpenCV and refining the edge-orientation vectors by minimising the standard deviation of the gradient image. Finally, grow and reconstruct the checkerboard according to an energy function, mark the corners, and output their sub-pixel coordinates.
Embodiment 1
An optimised depth-extraction and passive ranging method based on monocular vision comprises the following steps:
One, calibrate the mobile-phone camera to obtain the camera intrinsic parameters and the image resolution. The calibration uses Zhang Zhengyou's method and introduces the improved pinhole model with a nonlinear distortion term to correct the camera intrinsic parameters.
First, let the physical size of each pixel in the image plane be dx * dy (unit: mm), and let the coordinates of the origin of the image coordinate system (x, y) in the pixel coordinate system (u, v) be (u0, v0), where (x, y) are the normalised coordinates of an image point in the real image. Any pixel in the image satisfies the following relationship between the two coordinate systems:
u = x/dx + u0,  v = y/dy + v0  (1)
fx and fy are the normalised focal lengths along the x-axis and y-axis. Any point Pc(Xc, Yc, Zc) in the camera coordinate system projects onto the image coordinate system at (xc, yc, f); the image plane is perpendicular to the optical z-axis at distance f from the origin. By the principle of similar triangles:
xc = f·Xc/Zc,  yc = f·Yc/Zc  (2)
The improved pinhole model with a nonlinear distortion term is introduced. It covers the radial distortion caused by lens-shape defects and the tangential distortion caused by varying degrees of decentring in the optical system. The mathematical model of radial distortion is:
x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶)
y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶)  (3)
where r² = x² + y², and (x', y') are the normalised coordinates in the ideal, distortion-free linear camera coordinate system after correction. The radial distortion value depends on the position of the image point in the image and is largest at the image edges.
The mathematical model of tangential distortion is:
x' = x + 2·p1·x·y + p2·(r² + 2x²)
y' = y + p1·(r² + 2y²) + 2·p2·x·y  (4)
The model contains five distortion coefficients in total: k1, k2, k3, p1 and p2. Combining formulas (3) and (4) gives the distortion-correction function model:
x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x·y + p2·(r² + 2x²)
y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2y²) + 2·p2·x·y  (5)
The conversion from world coordinates to camera coordinates satisfies:
Pc = R·(PW - C) = R·PW + T  (6)
Combining formulas (1) to (6), the projection may be expressed in homogeneous coordinates and matrix form as:
s·[u, v, 1]ᵀ = Mint·Mext·[XW, YW, ZW, 1]ᵀ  (7)
Mint and Mext are the intrinsic and extrinsic parameter matrices of the camera calibration, respectively. The camera intrinsic parameters include the image-centre pixel values u0 and v0 and the normalised focal lengths fx and fy along the x-axis and y-axis. The mobile-phone camera calibration is implemented in Java with OpenCV, obtaining the intrinsic parameters, the lens distortion parameters and the image resolution vmax, umax of the mobile-phone camera. The calibrated intrinsic parameters are: fx = 3486.5637, u0 = 1569.0383, fy = 3497.4652, v0 = 2107.9899; the image resolution is 3120 × 4208; the lens distortion parameters are [0.0981, -0.1678, 0.0003, -0.0025, 0.0975].
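Formulas (1) to (5) and the calibrated values above can be checked with a short sketch; reading the quoted list as [k1, k2, p1, p2, k3] follows the OpenCV convention and is an assumption about how the patent stores the coefficients:

```python
def distort_project(x, y, fx, fy, u0, v0, dist):
    """Apply the radial + tangential distortion model (5) to normalised
    coordinates (x, y), then project to pixel coordinates (u, v) as in (1)."""
    k1, k2, p1, p2, k3 = dist
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + u0, fy * yd + v0

# Calibrated values quoted in the embodiment
FX, FY, U0, V0 = 3486.5637, 3497.4652, 1569.0383, 2107.9899
DIST = [0.0981, -0.1678, 0.0003, -0.0025, 0.0975]
# A point on the optical axis (x = y = 0) is distortion-free and
# lands exactly on the principal point (u0, v0).
print(distort_project(0.0, 0.0, FX, FY, U0, V0, DIST))
```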
Two, acquire images of the novel target and establish the camera depth-extraction model. Existing targets are black-and-white checkerboard targets whose squares have equal length and width. The target of the invention differs from existing targets in that the row of squares nearest the camera is set to d * d mm, the width of every subsequent row is fixed, and each row's length exceeds the previous row's by an increment Δd_i.
In the following, x_i is the actual distance from the i-th corner point to the camera and y_i is the length of each square; the difference between adjacent square lengths is then
Δd_i = y_(i+1) - y_i  (8)
If the relationship between the computed length of each square and the actual distance is y = f(x), then from formula (8):
Δd_i = d·f′(x)  (9)
Pearson correlation analysis shows an extremely significant linear relationship between the computed length and the actual distance (p < 0.01), with correlation coefficient r equal to 0.975; the derivative f′(x) of f(x) can then be obtained by the least-squares method.
Therefore, when the row of squares nearest the camera is d * d mm (measurement accuracy is highest when d lies in the range 30-60 mm), every subsequent row keeps the same width and its length grows by the increment d·f′(x) mm. The novel target is shown in Fig. 1.
When objects on the horizontal ground are photographed, the perspective transformation makes common corner detection algorithms such as Harris and Shi-Tomasi poorly robust, and detection can also fail when the camera is rotated counterclockwise through a large angle about the ox axis of the camera coordinate system. The present invention therefore combines the growth-based checkerboard corner detection approach proposed by Andreas Geiger et al. with the cornerSubPix() function provided by OpenCV to detect corner locations at sub-pixel level; the algorithm is highly robust and performs well on severely distorted images.
The implementation of the corner detection algorithm is shown in Fig. 2. The sub-pixel corner extraction of the above novel target proceeds in the following steps:
1) Find corners in the image according to the similarity between each pixel and the templates, and locate the target corner positions;
First, two different corner templates are defined, one for corners parallel to the coordinate axes and one for corners rotated by 45°. Each template consists of four filter kernels {A, B, C, E}, which are convolved with the image. The similarity of each pixel to a corner is then computed from the two templates:
μ = (f_A + f_B + f_C + f_E)/4
s_1^i = min(min(f_A, f_B) - μ, μ - max(f_C, f_E))
s_2^i = min(μ - max(f_A, f_B), min(f_C, f_E) - μ)
where f_X denotes the convolution response of kernel X (X = A, B, C, E) of template i (i = 1, 2) at the pixel, and s_1^i and s_2^i denote the similarities for the two possible polarities of template i. Computing this similarity for every pixel in the image yields a corner-similarity map. A non-maximum-suppression algorithm is applied to the similarity map to obtain candidate points. These candidates are then verified with gradient statistics in a local n×n neighbourhood: the local grey image is first filtered with a Sobel operator, a weighted orientation histogram (32 bins) is computed, and its two dominant modes γ1 and γ2 are found with the mean-shift algorithm. From the edge orientations, a template T for the expected gradient strength is constructed; the cross-correlation of T with the gradient strength, multiplied by the corner similarity, gives the corner score, and thresholding this score yields the initial corners.
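The template similarity of step 1) can be illustrated with a simplified sketch: here the four kernel responses f_A, f_B, f_C, f_E are approximated by plain quadrant means rather than the detector's Gaussian-weighted convolutions, so this is an assumption-laden illustration, not the published implementation:

```python
import numpy as np

def corner_similarity(patch):
    """Score how 'checkerboard-corner-like' a patch is: take the mean
    intensity of the four quadrants as the responses f_A, f_B, f_C, f_E,
    then combine them as in the detector's similarity rule, trying both
    polarities (black/white swapped)."""
    h, w = patch.shape
    fA = patch[:h // 2, :w // 2].mean()   # top-left
    fB = patch[h // 2:, w // 2:].mean()   # bottom-right (same diagonal as A)
    fC = patch[:h // 2, w // 2:].mean()   # top-right
    fE = patch[h // 2:, :w // 2].mean()   # bottom-left
    mu = 0.25 * (fA + fB + fC + fE)
    s1 = min(min(fA, fB) - mu, mu - max(fC, fE))  # A/B bright, C/E dark
    s2 = min(min(fC, fE) - mu, mu - max(fA, fB))  # opposite polarity
    return max(s1, s2, 0.0)

corner = np.zeros((10, 10))
corner[:5, :5] = 1.0
corner[5:, 5:] = 1.0          # a perfect saddle corner
flat = np.ones((10, 10))      # no corner at all
print(corner_similarity(corner), corner_similarity(flat))  # high vs. zero
```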
2) Refine the position and orientation of each corner to sub-pixel accuracy;
Sub-pixel corner location is carried out with the cornerSubPix() function in OpenCV, which refines the corners to sub-pixel level and so yields sub-pixel-level corner detection. To refine the edge-orientation vectors, the standard deviation of the image gradient is minimised:
γ_i* = argmin over γ_i of Σ_(p ∈ M_i) (g_pᵀ·m_i)²,  m_i = [cos(γ_i) sin(γ_i)]ᵀ
where M_i is the set of neighbouring pixels whose gradient values match the mode m_i = [cos(γ_i) sin(γ_i)]ᵀ. (For the computation scheme see: Geiger A, Moosmann F, Car O, et al. Automatic camera and range sensor calibration using a single shot [C]// Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943.)
3) Finally, mark the corners and output their sub-pixel coordinates: the checkerboard is grown and reconstructed according to an energy function, the corners are marked, and the sub-pixel corner coordinates are output;
Following the method given in "Geiger A, Moosmann F, Car O, et al. Automatic camera and range sensor calibration using a single shot [C]// Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943", the energy function is optimised to rebuild the checkerboard and mark the corners. The growth energy function is:
E(x, y) = Ecorners(y) + Estruct(x, y)  (16)
where Ecorners is the negative of the current total number of checkerboard corners and Estruct is the degree of match between two adjacent corners and the predicted corner; the corner pixel values are output through OpenCV.
Linear correlation analysis between the object imaging angle and the ordinate pixel value was carried out with SPSS 22, and the Pearson correlation coefficients r were output as shown in Table 2.
Table 2. Pearson correlation coefficients between the image ordinate pixel value and the actual imaging angle
Note: ** indicates an extremely significant correlation (p < 0.01).
It is verified that, for a device of a given model and camera rotation angle, the object ordinate pixel value and the actual imaging angle are in an extremely significant negative correlation (p < 0.01), with correlation coefficients r greater than 0.99. In addition, a significance test was carried out on the differences between the slopes of the linear functions relating the object ordinate pixel value to the imaging angle under different device models and camera rotation angles. The results show that these slope differences are extremely significant (p < 0.01), indicating that devices of different models and different camera rotation angles require different depth-extraction models.
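The correlation analysis (run in SPSS 22 in the text) can be reproduced in outline with numpy; the data here are synthetic stand-ins generated from an assumed linear model with negative slope, not the patent's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(0, 4208, 50)                       # ordinate pixel values
alpha = -0.0147 * v + 121.0                        # assumed linear model, slope < 0
alpha_noisy = alpha + rng.normal(0, 0.1, v.size)   # small measurement noise

# Pearson correlation coefficient between pixel ordinate and imaging angle
r = np.corrcoef(v, alpha_noisy)[0, 1]
print(round(r, 4))  # strongly negative, close to -1
```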
Abstract function is set according to the linear relationship between object imaging angle α and ordinate pixel value v, establishes and contains mesh Mark tri- object imaging angle α, ordinate pixel value v and camera rotation angle β parameter space relational models, i.e. α=F (v, β),
Under different device models and camera rotation angles, the subject ordinate pixel value and the imaging angle are in a highly significant negative linear correlation, with the slope and intercept of the linear relationship differing from case to case, so we set:
α = F(v, β) = av + b (17)
where the parameters a and b depend on the camera model and the camera rotation angle β.
α takes its minimum value α = αmin = 90° - θ - β, where θ is half the camera's vertical field-of-view angle, when the subject projects to the lowermost end of the picture, i.e. v = vmax (vmax being the number of valid pixels along the column coordinate of the camera's CMOS or CCD image sensor). Substituting into formula (17) gives:
90 - β - θ = a·vmax + b (18)
When αmin + 2θ > 90°, i.e. θ > β, the camera's upper viewing angle is above the horizon and the shooting perspective geometry is as in Fig. 3. At infinite distance on the ground plane, α approaches 90° and v is then essentially equal to v0 - tanβ·fy, where fy is the camera focal length in pixel units (the case of negative β, i.e. counterclockwise camera rotation, is analogous). Substituting into formula (17) gives:
90 = a(v0 - tanβ·fy) + b (19)
When αmin + 2θ < 90°, i.e. θ < β, the camera's upper viewing angle is below the horizon and the shooting perspective geometry is as in Fig. 4. The object imaging angle α reaches its maximum αmax = αmin + 2θ = 90° - β + θ when the subject projects to the highest point of the picture, i.e. v = 0. Substituting into formula (17) gives:
90 - β + θ = b (20)
According to the pinhole camera construction principle, the tangent of the half vertical field-of-view angle θ equals half the side length of the camera's CMOS or CCD image sensor divided by the camera focal length f, so θ can be calculated:

tan θ = LCMOS/(2f) (21)

where LCMOS is the side length of the camera's CMOS or CCD image sensor. Combining formulas (18)-(21), F(v, β) is:
where δ in formula (10) is the camera's nonlinear distortion error. Combined with the shooting height h of the mobile phone camera, the depth extraction model of the mobile phone camera is established according to the trigonometric principle:
Substituting the internal parameters of the Xiaomi Mi 3 phone camera into formula (10) gives:
The device-specific depth extraction model is then obtained according to the trigonometric principle:
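The derivation of formulas (17)-(21) can be sketched numerically. The snippet below is an illustration of the derivation, not the patented implementation: it solves for the parameters a and b of α = av + b in both the θ > β and θ < β cases, taking as assumed inputs the Mi 3 intrinsics reported in the calibration step of Embodiment 2 (fy = 3497.4652, v0 = 2107.9899, vmax = 4208).

```python
import math

def depth_model_params(fy, v0, vmax, beta_deg):
    """Solve for a, b in alpha = a*v + b (formula (17)).

    theta is half the vertical field of view; in pixel units
    tan(theta) = (vmax / 2) / fy, the pixel form of formula (21).
    """
    theta = math.degrees(math.atan((vmax / 2.0) / fy))
    if theta > beta_deg:
        # Case theta > beta: combine formulas (18) and (19).
        #   90 - beta - theta = a*vmax + b
        #   90 = a*(v0 - tan(beta)*fy) + b
        v_inf = v0 - math.tan(math.radians(beta_deg)) * fy
        a = ((90.0 - beta_deg - theta) - 90.0) / (vmax - v_inf)
        b = 90.0 - a * v_inf
    else:
        # Case theta < beta: combine formulas (18) and (20).
        b = 90.0 - beta_deg + theta              # formula (20)
        a = (90.0 - beta_deg - theta - b) / vmax  # equals -2*theta/vmax
    return theta, a, b

def imaging_angle(v, a, b):
    """Formula (17): object imaging angle for ordinate pixel value v."""
    return a * v + b

theta, a, b = depth_model_params(fy=3497.4652, v0=2107.9899,
                                 vmax=4208, beta_deg=0.0)
alpha = imaging_angle(3490, a, b)  # ordinate of the box bottom edge, step three
```

With β = 0° this gives θ ≈ 31.03°, a ≈ -0.01478, b ≈ 121.15, and α ≈ 69.58° at v = 3490, consistent with the imaging angle used in step four.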
Three, the target point pixel values (u, v) are obtained by image acquisition of the object to be measured. Images are acquired with the mobile phone camera and the perspective geometry model of Fig. 5 is established, where f is the camera focal length, θ is half the vertical field-of-view angle, h is the shooting height, and β is the rotation angle of the camera about the ox axis of the camera coordinate system (β is positive for clockwise rotation and negative for counterclockwise rotation, and is obtained from the camera's built-in gravity sensor); α is the object imaging angle. Using the lens distortion parameters obtained from the camera calibration in step one, the radial and tangential distortion errors present in the image are corrected by nonlinear distortion correction. The corrected ideal linear normalized coordinate values (x, y) are substituted into formula (1) to compute the pixel coordinates of each image point after correction, and the corrected image is obtained by bilinear interpolation of the pixel values. The corrected image is then preprocessed with computer vision and image processing techniques, including image binarization, morphological operations, and object contour edge detection, to obtain the object's edge, from which the pixel value (u, v) of the geometric center point of the edge in contact with the ground is calculated.
A Xiaomi Mi 3 (MI 3) phone camera is used as the image acquisition device; images are collected with the camera on a tripod, the measured height h of the camera above the ground is 305 mm, and the camera rotation angle β is 0°.
Radial distortion and tangential distortion errors present in the image are corrected by nonlinear distortion correction;
Using the camera lens distortion parameters obtained from the camera calibration in step one, [0.0981, -0.1678, 0.0003, -0.0025, 0.0975], the corrected ideal linear normalized coordinate values are calculated according to formula (5):
The pixel coordinate values of each image point after correction are calculated by combining formulas (1) and (2), and the corrected image is obtained by bilinear interpolation;
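The bilinear interpolation step can be sketched in plain Python. This is the generic textbook resampling kernel (the patent does not give its implementation), shown for a grayscale image stored as a list of rows:

```python
def bilinear_sample(img, x, y):
    """Sample image `img` (list of rows of gray values) at the
    non-integer position (x, y) by bilinear interpolation."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    # Weighted average of the four surrounding pixels.
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bottom = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bottom * dy

img = [[0, 100],
       [100, 200]]
center = bilinear_sample(img, 0.5, 0.5)  # midpoint of the four pixels -> 100.0
```

The corrected image is built by evaluating this kernel at each corrected pixel's source position.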
Taking a cuboid box placed on level ground as an example, the present invention measures its depth and distance. The acquired image is first binarized, edge detection is then performed on the box with the Canny operator, and the object contour is extracted. The extracted pixel value of the center point of the bottom edge of the box is (1851.23, 3490).
Four, using the camera internal parameters and target point pixel values obtained in the above steps, together with the camera depth extraction model, the distance L from any point on the image of the object to be measured to the mobile phone camera is calculated. According to the relation between the camera rotation angle β and the half vertical field-of-view angle θ, the corresponding depth model is selected; the camera internal parameters obtained above (image center pixel value v0, normalized focal length fy along the y axis, and image resolution vmax), together with the ordinate pixel value v of the object to be measured, the camera rotation angle β, and the shooting height h, are substituted into the depth extraction model to calculate the target point depth value D.
Fig. 6 is a schematic diagram of the camera stereo imaging system, where point P is the camera position; the straight line through points A and B is parallel to the image plane; the coordinates of A in the camera coordinate system are (X, Y, Z) and those of B are (X + Tx, Y, Z); they project onto the image plane at A'(xl, yl) and B'(xr, yr). From formula (2):
Combining formulas (1) and (22), the horizontal parallax of the two points A' and B', which have the same Y value and equal depth Z, can be derived:
Thus, given the camera focal length f, the image center coordinates (u0, v0), and the physical size dx of each pixel in the image plane along the x axis, combined with the depth extraction model, the vertical distance Tx from the target point to the optical axis direction is calculated:
In the pinhole model, the transformation relations between the camera coordinate systems are shown in Fig. 7. On the basis of the calculated target point depth value D and its vertical distance Tx to the optical axis direction, the distance L from any point on the image to the shooting camera can be calculated according to formulas (11)-(12):
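The similar-triangles relation used above, parallax = f·Tx/Z with the focal length in pixel units, can be checked numerically. The values below (Tx ≈ 66.3 mm, Z ≈ 819.2 mm) are illustrative numbers consistent with the worked example that follows, not figures quoted from the patent:

```python
def horizontal_parallax(fx, tx, z):
    """Parallax (in pixels) of two points separated by tx at the same
    depth z, for a focal length fx expressed in pixel units."""
    return fx * tx / z

# Illustrative values: calibrated fx, and an offset/depth pair
# consistent with the worked example in step four.
d = horizontal_parallax(fx=3486.5637, tx=66.3, z=819.2)
```

The resulting parallax of roughly 282 pixels matches the horizontal offset u - u0 of the box's bottom-edge center from the image center.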
Substituting the camera internal parameters, the shooting height h, the rotation angle β, and the ordinate pixel value v of the center point of the bottom edge of the box into formula (24), the actual imaging angle of the object is calculated to be 69.58°. According to the trigonometric principle, the target point depth value D (unit: mm) is calculated:
D = 305 × tan 69.58° = 819.21 (27)
Substituting the parameters fx, u0, D, and the abscissa pixel value u of the center point of the bottom edge of the box into formula (12), the vertical distance Tx from the object's geometric center point to the optical axis direction can be calculated:
Therefore, the distance L from the object to the camera's ground projection point is:
Measured with a tape, the distance from the cuboid box to the camera's ground projection point is 827 mm; ranging with the present invention therefore has a relative error of 0.62%.
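The whole of step four reduces to a few lines of arithmetic. The sketch below follows formulas (27) and (12) using the values reported above; the pixel-unit form Tx = (u - u0)·D/fx is this sketch's reading of formula (12), not a quotation of it:

```python
import math

# Values reported in the embodiment above.
fx, u0 = 3486.5637, 1569.0383
h = 305.0             # shooting height, mm
alpha = 69.58         # imaging angle from the depth model, degrees
u, v = 1851.23, 3490  # box bottom-edge center pixel

D = h * math.tan(math.radians(alpha))  # depth, formula (27)
Tx = (u - u0) * D / fx                 # offset from the optical axis, mm
L = math.hypot(D, Tx)                  # ground distance to the camera

rel_error = abs(827.0 - L) / 827.0     # tape-measured distance: 827 mm
```

With these values the sketch gives D ≈ 819.2 mm, Tx ≈ 66.3 mm, L ≈ 821.9 mm, and a relative error of about 0.6%, consistent with the 0.62% reported above.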
Embodiment 2
Taking the Xiaomi Mi 3 (MI 3) phone as an example, the novel target of the present invention and the method of extracting its sub-pixel corners are illustrated below.
One, the mobile phone camera is calibrated to obtain the camera internal parameters and image resolution
A checkerboard calibration board with 8*9 rows and columns of 20*20 squares is used as the experimental material for camera calibration. Twenty calibration board pictures at different angles are acquired with the Mi 3 phone camera, and the Xiaomi Mi 3 (MI 3) phone camera is calibrated with OpenCV according to the above improved camera calibration model with nonlinear distortion terms.
The calibration board pictures are first read with the fin() function, and the image resolution of the first picture is obtained via .cols and .rows; then the sub-pixel corners in the calibration board pictures are extracted with the find4QuadCornerSubpix() function and marked with the drawChessboardCorners() function; calibrateCamera() is called to calibrate the camera, and the obtained internal and external camera parameters are used to reproject the three-dimensional points in space, producing new projected points; the error between the new and old projected points is calculated; finally the camera internal parameter matrix and distortion parameters are output and saved.
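The reprojection-error check described above (reproject the 3D points with the estimated parameters and compare against the detected corners) reduces to an RMS distance between two point sets. In OpenCV the reprojection itself would typically use cv2.projectPoints; the sketch below shows only the error metric, on made-up corner pairs:

```python
import math

def rms_reprojection_error(detected, reprojected):
    """RMS Euclidean distance between detected corners and the corners
    reprojected with the estimated camera parameters."""
    assert len(detected) == len(reprojected)
    total = sum((u1 - u2) ** 2 + (v1 - v2) ** 2
                for (u1, v1), (u2, v2) in zip(detected, reprojected))
    return math.sqrt(total / len(detected))

# Hypothetical corner pairs, a fraction of a pixel apart.
detected    = [(100.0, 200.0), (140.0, 200.0), (100.0, 240.0)]
reprojected = [(100.3, 200.0), (140.0, 199.6), (100.0, 240.5)]
err = rms_reprojection_error(detected, reprojected)
```

A sub-pixel RMS error (here about 0.41 px) is the usual indication that the calibration is acceptable.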
The camera internal parameters obtained by calibration are: fx = 3486.5637, u0 = 1569.0383, fy = 3497.4652, v0 = 2107.9899; the image resolution is 3120 × 4208; the camera lens distortion parameters are: [0.0981, -0.1678, 0.0003, -0.0025, 0.0975].
Two, the camera depth extraction model is established by acquiring images of the novel target
A traditional checkerboard calibration board with 45*45mm squares is used as the initial experimental material for designing the target. To calculate the difference between adjacent square lengths, six groups of experiments were designed: the corner values of the traditional checkerboard with 45*45mm squares were extracted, and the actual physical distance represented by a unit pixel between adjacent corners in the world coordinate system was calculated. To keep the differences between corner ordinate pixel values roughly equal, the length yi of each grid square is as shown in Table 1.
Table 1 Computed width of each grid square
Pearson correlation analysis shows a highly significant linear relationship between the computed length and the actual distance (p < 0.01), with a correlation coefficient r of 0.975; by least squares, the derivative of f(x) is found to be f'(x) = 0.262. Therefore, when the row of squares of the target nearest the camera is 45*45mm, the width of every row is fixed and the length increment Δd is 11.79 mm.
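The resulting target geometry can be generated directly: with the nearest row at d = 45 mm and f'(x) = 0.262, each subsequent row keeps the 45 mm width and grows in length by Δd = d·f'(x) = 11.79 mm. The function name and row count below are illustrative choices, not from the patent:

```python
def row_lengths(d=45.0, slope=0.262, rows=6):
    """Length of each target row: the nearest row is d, and each
    following row is longer by the fixed increment d * slope."""
    delta = d * slope  # 45 * 0.262 = 11.79 mm
    return [d + i * delta for i in range(rows)]

lengths = row_lengths()  # [45.0, 56.79, 68.58, ...]
```

This reproduces the fixed-width, linearly growing rows described in claim 1.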
The corners of the novel target are extracted by the corner extraction algorithm described in the specific implementation steps.
Smartphones of three different models, Xiaomi, Huawei, and iPhone, were chosen as image acquisition devices, with camera rotation angle β = 0°. Data were acquired with the above corner detection algorithm and the relationship was fitted; Fig. 8 shows the fitted relationship between the ordinate pixel value and the object imaging angle. From the corner extraction data and Fig. 8, the depth extraction model of the Mi 3 phone at camera rotation angle β = 0° is:
α = -0.015v + 112.6 (30)
The extracted pixel value of the center point of the bottom edge of the cuboid box is (1762.05, 2360). Substituting its ordinate pixel value v into formula (29), the actual imaging angle of the object is calculated to be 74.2°. According to the trigonometric principle, the target point depth value D (unit: mm) is calculated:
D = 305 × tan 74.2° = 1077.85 (31).
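As in Embodiment 1, the final step is plain trigonometry; formula (31) can be checked with one line of arithmetic, using the imaging angle of 74.2° reported above and the 305 mm shooting height:

```python
import math

h = 305.0     # shooting height, mm
alpha = 74.2  # imaging angle reported for v = 2360, degrees

D = h * math.tan(math.radians(alpha))  # depth, formula (31)
```

The computed depth agrees with the reported 1077.85 mm.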

Claims (2)

1. A novel target, comprising a checkerboard target of alternately arranged rectangular black blocks and rectangular white blocks, the widths of all the black and white blocks being fixed and equal, characterized in that: the target is placed horizontally; the size of the row of blocks nearest the camera is set to d mm * d mm, and each subsequent row is longer than the previous one by a fixed increment. Let xi be the actual distance from the i-th corner point to the camera and yi the length of each block; the difference Δdi between the lengths of adjacent blocks is then: Let f(x) denote the relationship between the computed block length and the actual distance; from formula (8) it follows that: Therefore, when the row of squares of the target nearest the camera is d mm * d mm, each subsequent row has a fixed width and a length increment of d·f'(x) mm.
2. A method for extracting the sub-pixel corners of the novel target of claim 1, characterized by comprising the following steps: using the growth-based checkerboard corner detection method proposed by Andreas Geiger et al., two different corner templates are defined, one for corners parallel to the coordinate axes and one for corners rotated by 45°; initial corners are detected by searching the image using the similarity between each pixel and the templates; the position and orientation of the corners are refined at sub-pixel level, using the cornerSubPix() function in OpenCV for sub-pixel corner localization and refining the edge orientation vectors by minimizing the standard deviation ratio of the gradient image; finally, the checkerboard is grown and reconstructed according to an energy function, the corners are marked, and their sub-pixel coordinates are output.
CN201810918877.7A 2018-08-12 2018-08-12 A novel target and its sub-pixel corner extraction method Active CN109255818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810918877.7A CN109255818B (en) 2018-08-12 2018-08-12 A novel target and its sub-pixel corner extraction method


Publications (2)

Publication Number Publication Date
CN109255818A true CN109255818A (en) 2019-01-22
CN109255818B CN109255818B (en) 2021-05-28

Family

ID=65049244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810918877.7A Active CN109255818B (en) 2018-08-12 2018-08-12 A novel target and its sub-pixel corner extraction method

Country Status (1)

Country Link
CN (1) CN109255818B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859137A (en) * 2019-02-14 2019-06-07 重庆邮电大学 A kind of irregular distortion universe bearing calibration of wide angle camera
CN110443856A (en) * 2019-08-12 2019-11-12 广州图语信息科技有限公司 A kind of 3D structure optical mode group scaling method, storage medium, electronic equipment
CN111798422A (en) * 2020-06-29 2020-10-20 福建汇川物联网技术科技股份有限公司 Checkerboard angular point identification method, device, equipment and storage medium
CN113256735A (en) * 2021-06-02 2021-08-13 杭州灵西机器人智能科技有限公司 Camera calibration method and system based on binocular calibration
CN113658272A (en) * 2021-08-19 2021-11-16 湖北亿咖通科技有限公司 Vehicle-mounted camera calibration method, device, equipment and storage medium
CN114187363A (en) * 2021-11-24 2022-03-15 北京极豪科技有限公司 Method and device for obtaining radial distortion parameter value and mobile terminal
CN114255272A (en) * 2020-09-25 2022-03-29 广东博智林机器人有限公司 Positioning method and device based on target image

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202066514U (en) * 2011-03-21 2011-12-07 李为民 Compound target for measuring large-scale coordinate
CN202622812U (en) * 2012-05-25 2012-12-26 山东天泽软控技术股份有限公司 Calibrating plate for visual system of robot
CN103019643A (en) * 2012-12-30 2013-04-03 中国海洋大学 Method for automatic correction and tiled display of plug-and-play large screen projections
CN203149664U (en) * 2013-03-27 2013-08-21 黑龙江科技学院 Calibration plate for binocular vision camera
CN103292710A (en) * 2013-05-27 2013-09-11 华南理工大学 Distance measuring method applying binocular visual parallax error distance-measuring principle
CN203217624U (en) * 2013-05-04 2013-09-25 长春工业大学 A new checkerboard calibration board
CN103927750A (en) * 2014-04-18 2014-07-16 上海理工大学 Detection method of checkboard grid image angular point sub pixel
CN204388802U (en) * 2015-01-19 2015-06-10 长春师范大学 Line-structured light vision system calibration plate
CN105105779A (en) * 2014-04-18 2015-12-02 Fei公司 High aspect ratio X-ray targets and uses of same
CN106803273A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 A kind of panoramic camera scaling method
CN206574134U (en) * 2017-03-09 2017-10-20 昆山鹰之眼软件技术有限公司 Automate scaling board


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
周传德: "Accurate measurement of object spatial pose based on a planar target", Journal of Chongqing University *
张胜男: "Automatic marking of feature points of circular-array planar targets", Computer Engineering and Applications *
白瑞林: "A practical method for extracting sub-pixel corners of X-type targets", Optical Technique *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859137A (en) * 2019-02-14 2019-06-07 重庆邮电大学 A kind of irregular distortion universe bearing calibration of wide angle camera
CN109859137B (en) * 2019-02-14 2023-02-17 重庆邮电大学 Wide-angle camera irregular distortion global correction method
CN110443856A (en) * 2019-08-12 2019-11-12 广州图语信息科技有限公司 A kind of 3D structure optical mode group scaling method, storage medium, electronic equipment
CN111798422A (en) * 2020-06-29 2020-10-20 福建汇川物联网技术科技股份有限公司 Checkerboard angular point identification method, device, equipment and storage medium
CN111798422B (en) * 2020-06-29 2023-10-20 福建汇川物联网技术科技股份有限公司 Checkerboard corner recognition method, device, equipment and storage medium
CN114255272A (en) * 2020-09-25 2022-03-29 广东博智林机器人有限公司 Positioning method and device based on target image
CN113256735A (en) * 2021-06-02 2021-08-13 杭州灵西机器人智能科技有限公司 Camera calibration method and system based on binocular calibration
CN113658272A (en) * 2021-08-19 2021-11-16 湖北亿咖通科技有限公司 Vehicle-mounted camera calibration method, device, equipment and storage medium
CN113658272B (en) * 2021-08-19 2023-11-17 亿咖通(湖北)技术有限公司 Vehicle-mounted camera calibration method, device, equipment and storage medium
CN114187363A (en) * 2021-11-24 2022-03-15 北京极豪科技有限公司 Method and device for obtaining radial distortion parameter value and mobile terminal

Also Published As

Publication number Publication date
CN109255818B (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN109146980A (en) The depth extraction and passive ranging method of optimization based on monocular vision
CN109035320B (en) Monocular vision-based depth extraction method
CN109255818A (en) A kind of extracting method of novel target and its sub-pixel angle point
CN109269430B (en) Passive measurement method of diameter at breast height of multiple standing trees based on depth extraction model
CN102376089B (en) Target correction method and system
CN112396664B (en) Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN111563921B (en) An underwater point cloud acquisition method based on binocular camera
CN103411553B (en) The quick calibrating method of multi-linear structured light vision sensors
CN109242915A (en) Multicamera system scaling method based on multi-face solid target
CN101299270A (en) Multiple video cameras synchronous quick calibration method in three-dimensional scanning system
CN105654547B (en) Three-dimensional rebuilding method
CN103971378A (en) Three-dimensional reconstruction method of panoramic image in mixed vision system
CN112200203B (en) Matching method of weak correlation speckle images in oblique field of view
CN109859272A (en) A kind of auto-focusing binocular camera scaling method and device
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN109974618B (en) Global calibration method of multi-sensor vision measurement system
CN108010086A (en) Camera marking method, device and medium based on tennis court markings intersection point
CN102693543B (en) Method for automatically calibrating Pan-Tilt-Zoom in outdoor environments
CN109448043A (en) Standing tree height extracting method under plane restriction
CN107977996A (en) Space target positioning method based on target calibrating and positioning model
CN106871900A (en) Image matching positioning method in ship magnetic field dynamic detection
CN113706635B (en) Long-focus camera calibration method based on point feature and line feature fusion
CN113554708A (en) Complete calibration method of linear structured light vision sensor based on single cylindrical target
CN105374067A (en) Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof
CN104167001B (en) Large-visual-field camera calibration method based on orthogonal compensation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant