CN109146980A - Optimized depth extraction and passive ranging method based on monocular vision - Google Patents

Optimized depth extraction and passive ranging method based on monocular vision

Info

Publication number
CN109146980A
CN109146980A (application CN201810918876.2A); granted publication CN109146980B
Authority
CN
China
Prior art keywords
camera
image
angle
value
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810918876.2A
Other languages
Chinese (zh)
Other versions
CN109146980B (en)
Inventor
徐爱俊
武新梅
周素茵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang A&F University ZAFU
Original Assignee
Zhejiang A&F University ZAFU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang A&F University ZAFU filed Critical Zhejiang A&F University ZAFU
Priority to CN201810918876.2A priority Critical patent/CN109146980B/en
Publication of CN109146980A publication Critical patent/CN109146980A/en
Application granted granted Critical
Publication of CN109146980B publication Critical patent/CN109146980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an optimized depth extraction and passive ranging method based on monocular vision, characterized by comprising the following steps. Step 1: calibrate the mobile phone camera and obtain the camera intrinsic parameters and the image resolution. Step 2: establish the depth extraction model. Step 3: acquire an image of the target to be measured and obtain the target point pixel values u, v. Step 4: using the camera intrinsic parameters and target point pixel values obtained in the preceding steps, together with the camera depth extraction model, calculate the distance L from any point on the image of the target to the mobile phone camera. The optimized depth extraction and passive ranging method based on monocular vision of the invention is applicable to cameras with different field-of-view angles, focal lengths, image resolutions and other parameters, improves ranging accuracy, and provides support for object measurement and real-scene three-dimensional reconstruction in machine vision.

Description

Optimized depth extraction and passive ranging method based on monocular vision
Technical field
The present invention relates to the field of terrestrial close-range photogrammetry, and in particular to a passive ranging method for a pinhole camera in a monocular vision system.
Background technique
Image-based object ranging and positioning is broadly divided into active ranging and passive ranging [1]. Active ranging installs a laser ranging device on the machine (such as the camera) to measure distance directly [2-4]. Passive ranging uses machine vision to compute object depth information from a two-dimensional digital image and then calculates the object distance from the image pixel information and the camera imaging principle [5-6]. Machine vision ranging is broadly divided into monocular ranging and binocular ranging [7-9]. The key step in the ranging process is the acquisition of object depth information. Early depth acquisition methods relied mainly on binocular stereo vision and camera motion information, which need multiple images to recover image depth [10-16]. Compared with binocular ranging, image acquisition for monocular ranging does not require strict hardware conditions and is therefore more competitive.
In the prior art there are many methods for acquiring object depth information in a monocular vision system, for example using corresponding-point calibration to obtain the depth of the target to be measured [17-19]. Document [17] studies a robot target positioning and ranging method based on monocular vision. This kind of method usually obtains the intrinsic and extrinsic parameters of the camera through camera calibration and, combined with the projection model, solves the transformation between the image coordinate system and the world coordinate system so as to calculate the object depth. Unfortunately, the method needs to acquire target images from different orientations and to record accurately the coordinates of each point in both the world coordinate system and the image coordinate system, so the calibration accuracy strongly affects the measurement accuracy.
Document [20] places references on the road surface and measures their distances, selects a suitable mathematical model to fit the correspondence between reference distance and pixel value, and then uses this relationship to extract depth information in real time. Unfortunately, the accuracy of the method of document [20] is affected by the distance measurement error and the fitting error.
Document [21] designs a vertical target image and, from the corner data detected in that image, establishes a mapping between the image ordinate pixel value and the actually measured angle; using this relationship together with the known height of the vehicle-mounted monocular camera, the depth information in the vehicle-mounted image is obtained. Since the intrinsic parameters of different camera devices differ, for a camera of a different model the method has to re-acquire the target image information and re-establish the camera depth extraction model; moreover, owing to lens manufacturing and assembly differences, the pitch angles of different vehicle-mounted cameras also differ, so the method of document [21] has poor generality.
In addition, the method for document [21] is using vertical target research perpendicular picture point imaging angle and ordinate pixel value Between relationship, and by this be applied to horizontal plane on object distance measurement so that range accuracy is relatively low, because of camera water It is flat not exactly the same with vertical direction Distortion Law.Application No. is 201710849961.3 patent applications, disclose one kind and change Into the camera calibration model and distortion correction model suitable for intelligent sliding moved end camera (hereinafter referred to as: improved with non-thread The peg model of sex distortion item), this method can help to correct scaling board picture, the inside and outside parameter of camera of higher precision is obtained, Unfortunately, this method does not expand in the nonlinear distortion correction and the measurement of object to testing image.
Bibliography:
[1] He Ruofei, Tian Xuetao, Liu Hongjuan, et al. UAV target localization method based on Monte Carlo Kalman filtering [J]. Journal of Northwestern Polytechnical University, 2017, 35(3): 435-441.
[2] Lin F, Dong X, Chen B M, et al. A robust real-time embedded vision system on an unmanned rotorcraft for ground target following [J]. IEEE Trans on Industrial Electronics, 2012, 59(2): 1038-1049.
[3] Zhang Wanlin, Hu Zhengliang, Zhu Jianjun, et al. Target position calculation method of an individual-soldier panoramic observation instrument [J]. Electronic Measurement Technology, 2014, 37(11): 1-3.
[4] Sun Junling, Sun Guangmin, Ma Pengge, et al. Laser spot positioning based on symmetric wavelet denoising and asymmetric Gaussian fitting [J]. Chinese Journal of Lasers, 2017, 44(6): 178-185.
[5] Shi Jie, Li Yinzi, Qi Guoqing, et al. Passive tracking algorithm based on machine vision under incomplete measurements [J]. Journal of Huazhong University of Science and Technology, 2017, 45(6): 33-37.
[6] Xu Cheng, Huang Daqing. Passive target localization and accuracy analysis for small UAVs [J]. Chinese Journal of Scientific Instrument, 2015, 36(5): 1115-1122.
[7] Li Kehong, Jiang Lingmin, Gong Yongyi. Overview of image depth extraction methods for 2D-to-3D image/video conversion [J]. Journal of Image and Graphics, 2014, 19(10): 1393-1406.
[8] Wang Hao, Xu Zhiwen, Xie Kun, et al. Binocular ranging system based on OpenCV [J]. Journal of Jilin University, 2014, 32(2): 188-194.
[9] Sun W, Chen L, Hu B, et al. Binocular vision-based position determination algorithm and system [C]// Proceedings of the 2012 International Conference on Computer Distributed Control and Intelligent Environmental Monitoring. Piscataway: IEEE Computer Society, 2012: 170-173.
[10] Ikeuchi K. Determining a depth map using a dual photometric stereo [J]. The International Journal of Robotics Research, 1987, 6(1): 15-31.
[11] Shao M, Simchony T, Chellappa R. New algorithms for reconstruction of a 3-D depth map from one or more images [C]// Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 530-535.
[12] Matthies L, Kanade T, Szeliski R. Kalman filter-based algorithms for estimating depth from image sequences [J]. International Journal of Computer Vision, 1989, 3(3): 209-238.
[13] Matthies L, Szeliski R, Kanade T. Incremental estimation of dense depth maps from image sequences [C]// Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 366-374.
[14] Mori T, Yamamoto M. A dynamic depth extraction method [C]// Proceedings of Third International Conference on Computer Vision. Osaka: IEEE, 1990: 672-676.
[15] Inoue H, Tachikawa T, Inaba M. Robot vision system with a correlation chip for real-time tracking, optical flow and depth map generation [C]// Proceedings of Robotics and Automation. Nice: IEEE, 1992: 1621-1626.
[16] Hu Tianxiang, Zheng Jiaqiang, Zhou Hongping. Tree image ranging method based on binocular vision [J]. Transactions of the Chinese Society for Agricultural Machinery, 2010, 41(11): 158-162.
[17] Yu Naigong, Huang Can, Lin Jia. Research on robot target positioning and ranging technology based on monocular vision [J]. Computer Measurement & Control, 2012, 20(10): 2654-2660.
[18] Wu Gang, Tang Zhenmin. Ranging research in the visual navigation of a monocular autonomous robot [J]. Robot, 2010, 32(6): 828-832.
[19] Lu Weiwei, Xiao Zhitao, Lei Meilin. Research on front vehicle detection and distance measurement method based on monocular vision [J]. Video Applications and Engineering, 2011, 35(1): 125-128.
[20] Wu C F, Lin C J, Lee C Y, et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement [J]. IEEE Transactions on Systems, Man and Cybernetics - Part C: Applications and Reviews, 2012, 42(4): 577-589.
[21] Huang Yunling, Feng, Xu Guoyan, et al. Monocular depth information extraction based on a single vertical target image [J]. Journal of Beijing University of Aeronautics and Astronautics, 2015, 41(4): 649-655.
Summary of the invention
The object of the present invention is to provide an optimized depth extraction and passive ranging method based on monocular vision, which is applicable to cameras with different field-of-view angles, focal lengths, image resolutions and other parameters, improves ranging accuracy, and provides support for object measurement and real-scene three-dimensional reconstruction in machine vision.
To achieve the above object, the present invention adopts the following technical scheme:
An optimized depth extraction and passive ranging method based on monocular vision, characterized by comprising the following steps:
Step 1: calibrate the mobile phone camera and obtain the camera intrinsic parameters and the image resolution
Zhang Zhengyou's calibration method is used, and the improved calibration model with nonlinear distortion terms is introduced to correct the camera intrinsic parameters
First, let the physical size of each pixel on the image plane be dx*dy, and let the coordinates of the origin of the image coordinate system (x, y) in the pixel coordinate system (u, v) be (u0, v0); (x, y) are the normalized coordinates of an image point in the real image. Any pixel in the image satisfies the following relationship between the two coordinate systems:
u = fx·x + u0, v = fy·y + v0 (1)
fx and fy are the normalized focal lengths along the x-axis and y-axis. Any point Pc(Xc, Yc, Zc) in the camera coordinate system projects onto the image coordinate system at (xc, yc, f); the image coordinate plane is perpendicular to the optical axis (z-axis) at a distance f from the origin. From similar triangles it follows that:
xc = f·Xc/Zc, yc = f·Yc/Zc (2)
The improved calibration model with nonlinear distortion terms is introduced, covering the radial distortion caused by lens shape defects and the tangential distortion caused by different degrees of decentring in the optical system. The radial distortion mathematical model is formula (3), in which r² = x² + y² and (x', y') are the normalized coordinates of the ideal, distortion-free linear camera coordinate system after correction; the radial distortion value depends on the position of the image point in the image and is larger at the image edges.
The tangential distortion mathematical model is formula (4). The two models contain five distortion coefficients k1, k2, k3, p1, p2 in total; combining formulas (3) and (4) gives the distortion correction function model, formula (5).
The transformation from world coordinates to camera coordinates satisfies:
Pc = R(PW - C) = R·PW + T (6)
Combining formulas (1)~(6) and writing the result in homogeneous coordinates and matrix form:
Zc·[u, v, 1]^T = Mint·Mext·[XW, YW, ZW, 1]^T (7)
Mint and Mext are respectively the intrinsic and extrinsic parameter matrices of the camera calibration, where the camera intrinsic parameters include the image centre pixel values u0, v0 and the normalized focal lengths fx, fy along the x-axis and y-axis. The mobile phone camera calibration is implemented in Java with OpenCV, and the intrinsic parameters of the mobile phone camera, the lens distortion parameters and the image resolution vmax, umax are obtained.
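To make the projection model above concrete, the following Python sketch maps a camera-frame point to pixel coordinates using formulas (1)-(2) and a distortion model; the exact algebraic form of formulas (3)-(5) is not reproduced in this text, so the standard Brown-Conrady form and the OpenCV coefficient ordering are assumed here.

```python
def project_point(Pc, fx, fy, u0, v0, dist):
    """Project a camera-frame point Pc = (Xc, Yc, Zc) to pixel coordinates (u, v).

    dist is assumed to follow the OpenCV ordering (k1, k2, p1, p2, k3); the
    Brown-Conrady form is assumed for the radial/tangential terms of formulas (3)-(5).
    """
    Xc, Yc, Zc = Pc
    x, y = Xc / Zc, Yc / Zc                       # normalized coordinates (similar triangles, formula (2))
    k1, k2, p1, p2, k3 = dist
    r2 = x * x + y * y                            # r^2 = x^2 + y^2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = fx * xd + u0                              # formula (1): u = fx*x + u0, v = fy*y + v0
    v = fy * yd + v0
    return u, v

# Example with the intrinsics and distortion reported later in embodiment 1 (Xiaomi 3)
print(project_point((0.1, 0.2, 1.0), 3486.5637, 3497.4652, 1569.0383, 2107.9899,
                    (0.0981, -0.1678, 0.0003, -0.0025, 0.0975)))
```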
Step 2: establish the depth extraction model
An abstract function is set up according to the linear relationship between the object imaging angle α and the ordinate pixel value v, and a spatial relational model containing the three parameters object imaging angle α, ordinate pixel value v and camera rotation angle β is established, i.e. α = F(v, β).
For devices of different models and different camera rotation angles, the ordinate pixel value of the photographed object and its imaging angle show an extremely significant negative linear correlation, while the slope and intercept of the linear relationship differ; therefore let:
α = F(v, β) = a·v + b (17)
where the parameters a and b are related to the camera model and the camera rotation angle.
α takes its minimum value α = αmin = 90° - θ - β, where θ is half the camera vertical field-of-view angle, when the photographed object projects to the lowermost end of the picture, i.e. v = vmax (vmax is the number of valid pixels of the column coordinate of the camera CMOS or CCD image sensor); substituting into formula (17) gives:
90° - β - θ = a·vmax + b (18)
When αmin + 2θ > 90°, i.e. θ > β, the camera's upper viewing angle is above the horizon. At infinite distance on the ground plane α approaches 90°, and v is then approximately equal to v0 - tanβ·fy, where fy is the camera focal length in pixel units (the same holds when β is negative, i.e. when the camera is rotated counter-clockwise); substituting into formula (17) gives:
90° = a·(v0 - tanβ·fy) + b (19)
When αmin + 2θ < 90°, i.e. θ < β, the camera's upper viewing angle is below the horizon, and the imaging angle α of an object at infinite distance on the ground plane reaches its maximum αmax = αmin + 2θ = 90° - β + θ, i.e. the photographed object projects to the highest point of the picture and v = 0; substituting into formula (17) gives:
90° - β + θ = b (20)
According to the construction principle of the pinhole camera, the tangent of the half vertical field-of-view angle θ equals half the side length of the camera CMOS or CCD image sensor divided by the camera focal length, so θ can be calculated:
θ = arctan(LCMOS/(2f)) (21)
where LCMOS in formula (21) is the side length of the camera CMOS or CCD image sensor. Combining formulas (18)~(21) yields F(v, β) as formula (10), in which δ is the camera nonlinear distortion error term. Combining the shooting height h of the mobile phone camera, the mobile phone camera depth extraction model is established according to trigonometric relations.
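As a concrete illustration of formulas (17)~(21), the following Python sketch solves for the slope a and intercept b of α = a·v + b from the two boundary conditions of the relevant case (θ > β or θ < β) and converts the angle into a depth with D = h·tanα, the relation used numerically in embodiment 1. The δ distortion-error term of formula (10) is omitted, the pixel-unit form θ = arctan((vmax/2)/fy) is an assumption consistent with the embodiment numbers, and the function names are illustrative, not from the patent.

```python
import math

def imaging_angle(v, beta_deg, fy, v0, v_max):
    """Imaging angle alpha (deg) for ordinate pixel v, per alpha = a*v + b (formula (17))."""
    theta = math.degrees(math.atan((v_max / 2.0) / fy))    # half vertical FOV, pixel-unit form of formula (21)
    if theta > beta_deg:                                   # horizon visible: boundary conditions (18) and (19)
        v_horizon = v0 - math.tan(math.radians(beta_deg)) * fy
        a = (theta + beta_deg) / (v_horizon - v_max)
        b = 90.0 - a * v_horizon
    else:                                                  # horizon not visible: boundary conditions (18) and (20)
        a = -2.0 * theta / v_max
        b = 90.0 - beta_deg + theta
    return a * v + b

def depth(v, beta_deg, fy, v0, v_max, h):
    """Ground-point depth D = h * tan(alpha), as used in embodiment 1."""
    alpha = imaging_angle(v, beta_deg, fy, v0, v_max)
    return h * math.tan(math.radians(alpha))

# Check against embodiment 1 (Xiaomi 3, h = 305 mm, beta = 0, v = 3490): alpha ~ 69.58 deg, D ~ 819 mm
print(depth(3490, 0.0, 3497.4652, 2107.9899, 4208, 305.0))
```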
Step 3: acquire an image of the target to be measured and obtain the target point pixel values u, v;
The image acquisition step for the target to be measured also includes nonlinear distortion correction and preprocessing of the image of the target, namely:
The image is acquired with the mobile phone camera and the shooting perspective geometry model is established, where f is the camera focal length, θ is half the camera vertical field-of-view angle, h is the camera shooting height, β is the rotation angle of the camera about the ox axis of the camera coordinate system (β is positive when the camera rotates clockwise and negative when it rotates counter-clockwise, and its value is obtained from the camera's internal gravity sensor), and α is the object imaging angle;
Combining the lens distortion parameters obtained by the camera calibration of step 1, nonlinear distortion correction is applied to the radial and tangential distortion errors of the image; the corrected ideal linear normalized coordinate values (x, y) are substituted into formula (1) to compute the pixel coordinates of every image point after correction, and the corrected pixel values are interpolated by bilinear interpolation to obtain the corrected image; the corrected image is then preprocessed with computer vision and image processing techniques, including image binarization, morphological operations and object contour edge detection, to obtain the object edge, from which the pixel value (u, v) of the geometric centre point of the edge where the object touches the ground is calculated;
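A minimal OpenCV (Python) sketch of this preprocessing chain is given below: undistortion with the calibrated intrinsics, Otsu binarization, a morphological closing, Canny edge detection, and extraction of the centre of the bottom edge of the largest contour as the ground-contact point. The thresholds, the structuring element and the choice of the largest contour are illustrative assumptions rather than values from the patent; OpenCV's undistort also performs the bilinear resampling step internally.

```python
import cv2
import numpy as np

def ground_contact_pixel(img_bgr, K, dist_coeffs):
    """Return (u, v) of the centre of the bottom edge of the detected object."""
    undist = cv2.undistort(img_bgr, K, dist_coeffs)           # nonlinear distortion correction
    gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))  # morphological operation
    edges = cv2.Canny(binary, 50, 150)                         # contour edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    target = max(contours, key=cv2.contourArea)                # assume the object is the largest contour
    x, y, w, h = cv2.boundingRect(target)
    return (x + w / 2.0, y + h)                                # geometric centre of the bottom edge

# K = np.array([[3486.56, 0, 1569.04], [0, 3497.47, 2107.99], [0, 0, 1]])
# dist = np.array([0.0981, -0.1678, 0.0003, -0.0025, 0.0975])
```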
Step 4: using the camera intrinsic parameters and target point pixel values obtained in the preceding steps, together with the camera depth extraction model, calculate the distance L from any point on the image of the target to the mobile phone camera
According to the magnitude relationship between the camera rotation angle β and the half vertical field-of-view angle θ, the corresponding depth extraction model is selected; the camera intrinsic parameters computed in step 1 (the image centre pixel value v0, the normalized focal length fy along the y-axis and the image resolution vmax), the ordinate pixel value v of the target computed in step 3, the camera rotation angle β and the mobile phone camera shooting height h are substituted into the depth extraction model to calculate the target point depth value D, and the perpendicular distance Tx from the target point to the optical axis direction is calculated:
According to formulas (11)~(12), the distance L from any point on the image to the shooting camera can then be calculated:
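The patent's formulas (11)~(12) are not reproduced in this text; the short sketch below computes the lateral offset and the total distance under the assumed forms Tx = (u - u0)·D/fx and L = sqrt(D² + Tx²), which are consistent with the numbers worked out in embodiment 1.

```python
import math

def lateral_offset(u, u0, fx, D):
    """Perpendicular distance Tx from the target point to the optical axis direction (assumed form)."""
    return (u - u0) * D / fx

def distance_to_camera(D, Tx):
    """Distance L from the target point to the camera's ground projection point (assumed form)."""
    return math.hypot(D, Tx)
```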
Compared with the prior art, owing to the adoption of the above technical scheme the beneficial effects of the present invention are:
(1) compared with other monocular vision passive ranging methods, this method does not need a large calibration site and avoids the error introduced by data fitting;
(2) the established depth extraction model is transferable between devices: the camera rotation angle is introduced into the model, so for a camera of a different model only one initial calibration is needed to obtain the camera intrinsic parameters, after which the depth of any image point on a single picture can be calculated;
(3) it is verified that, when ranging at close distances of 0.5~2.6 m with this method, the average relative error of the depth measurements is 0.937%, and at distances of 3~10 m the relative measurement error is 1.71%; ranging with this method therefore has high measurement accuracy.
Detailed description of the invention
Fig. 1 is a flow diagram of the ranging method of the invention;
Fig. 2 is a schematic diagram of the novel target;
Fig. 3 is a schematic diagram of the implementation process of the corner detection algorithm;
Fig. 4 is a schematic diagram of the shooting geometry model when the camera's upper viewing angle is above the horizon;
Fig. 5 is a schematic diagram of the shooting geometry model when the camera's upper viewing angle is below the horizon;
Fig. 6 is a schematic diagram of the camera shooting perspective geometry model;
Fig. 7 is a schematic diagram of the coordinate systems in the pinhole model;
Fig. 8 is a schematic diagram of the camera stereo imaging system principle;
Fig. 9 is a schematic diagram of the relationship between object ordinate pixel value and imaging angle for the three device models;
Fig. 10 is a schematic diagram of the relationship between object ordinate pixel value and actual imaging angle for different camera rotation angles.
Specific embodiment
To make the technical solution of the present invention clearer, the invention is described in detail below with reference to Figs. 1 to 10. It should be understood that the specific embodiments described in this specification are only intended to explain the present invention and do not limit its scope of protection.
The present invention is an optimized depth extraction and passive ranging method based on monocular vision, comprising the following steps:
One, calibrate the mobile phone camera and obtain the camera intrinsic parameters and the image resolution. The calibration uses Zhang Zhengyou's calibration method, and the improved calibration model with nonlinear distortion terms is introduced to correct the camera intrinsic parameters.
First, let the physical size of each pixel on the image plane be dx*dy (unit: mm), and let the coordinates of the origin of the image coordinate system (x, y) in the pixel coordinate system (u, v) be (u0, v0); (x, y) are the normalized coordinates of an image point in the real image. Any pixel in the image satisfies the following relationship between the two coordinate systems:
u = fx·x + u0, v = fy·y + v0 (1)
fx and fy are the normalized focal lengths along the x-axis and y-axis. Any point Pc(Xc, Yc, Zc) in the camera coordinate system projects onto the image coordinate system at (xc, yc, f); the image coordinate plane is perpendicular to the optical axis (z-axis) at a distance f from the origin. From similar triangles it follows that:
xc = f·Xc/Zc, yc = f·Yc/Zc (2)
The improved calibration model with nonlinear distortion terms is introduced, covering the radial distortion caused by lens shape defects and the tangential distortion caused by different degrees of decentring in the optical system. The radial distortion mathematical model is formula (3), in which r² = x² + y² and (x', y') are the normalized coordinates of the ideal, distortion-free linear camera coordinate system after correction; the radial distortion value depends on the position of the image point in the image and is larger at the image edges.
The tangential distortion mathematical model is formula (4). The two models contain five distortion coefficients k1, k2, k3, p1, p2 in total; combining formulas (3) and (4) gives the distortion correction function model, formula (5).
The transformation from world coordinates to camera coordinates satisfies:
Pc = R(PW - C) = R·PW + T (6)
Combining formulas (1)~(6) and writing the result in homogeneous coordinates and matrix form:
Zc·[u, v, 1]^T = Mint·Mext·[XW, YW, ZW, 1]^T (7)
Mint and Mext are respectively the intrinsic and extrinsic parameter matrices of the camera calibration, where the camera intrinsic parameters include the image centre pixel values u0, v0 and the normalized focal lengths fx, fy along the x-axis and y-axis. The mobile phone camera calibration is implemented in Java with OpenCV, and the intrinsic parameters of the mobile phone camera, the lens distortion parameters and the image resolution vmax, umax are obtained.
Two, establish the camera depth extraction model through the acquisition of images of the novel target. Existing targets are black-and-white checkerboard targets whose squares have equal length and width. The target of the invention differs from the existing target in that the row of grid squares nearest the camera is set to d*d mm, the width of every subsequent row is fixed, and the length of each row exceeds that of the previous row by the increment given below.
In the following formula xi is the actual distance from the i-th corner point to the camera and yi is the length of each grid square; the difference Δdi between adjacent square lengths is then:
Let the relationship between the computed length of each grid square and the actual distance be f(x); then from formula (8) one obtains:
Pearson correlation analysis shows an extremely significant linear relationship between length and actual distance (p < 0.01), with correlation coefficient r equal to 0.975; the derivative f'(x) of f(x) can be obtained by the least squares method.
Therefore, when the row of target grid squares nearest the camera has size d*d mm (measurement accuracy is highest when d is in the range 30~60 mm), the width of every subsequent row is fixed and the length increment is d·f'(x) mm; the novel target is shown in Fig. 2.
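As an illustration of this layout rule (not code from the patent), the per-row square lengths can be generated as follows, where fprime stands for the fitted slope f'(x), which is 0.262 in embodiment 1.

```python
def target_row_lengths(d=45.0, fprime=0.262, rows=8):
    """Square length of each row of the novel target, nearest row first (mm)."""
    delta = d * fprime                       # length increment per row, d * f'(x)
    return [d + i * delta for i in range(rows)]

print(target_row_lengths())                  # 45.0, 56.79, 68.58, ...
```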
When an object on level ground is photographed, the perspective transformation makes common corner detection algorithms such as Harris and Shi-Tomasi poorly robust, and detection can also fail when the camera is rotated counter-clockwise by a large angle about the ox axis of the camera coordinate system. The present invention therefore combines the growth-based checkerboard corner detection method proposed by Andreas Geiger et al. with the cornerSubPix() function provided by OpenCV to locate corners with sub-pixel accuracy; the algorithm is highly robust and extracts corners well even from strongly distorted pictures.
The implementation process of the corner detection algorithm is shown in Fig. 3. The sub-pixel corner extraction steps for the novel target of the invention are:
1) Find corners on the image according to the similarity between each image pixel and the templates, and locate the target corner positions;
Two different corner templates are defined first, one for corners parallel to the coordinate axes and the other for corners rotated by 45°; each template consists of four filtering kernels {A, B, C, E}, which are subsequently convolved with the image. The similarity of each candidate point to a corner is then computed with the two corner templates:
where the convolution response of kernel X (X = A, B, C, E) of template i (i = 1, 2) at a given pixel and the similarities of the two possible corner polarities of template i are used; computing the similarity of every pixel in the image yields a corner similarity map. The corner similarity map is processed with a non-maxima suppression algorithm to obtain candidate points. These candidate points are then verified with gradient statistics in a local n×n neighbourhood: the local grey-scale image is first filtered with a Sobel filter, a weighted orientation histogram (32 bins) is computed, and its two dominant modes γ1 and γ2 are found with the meanshift algorithm. From the edge orientations a template T is constructed for the expected gradient strength (* denotes the cross-correlation operator); the product of this score with the corner similarity is taken as the corner score and compared with a threshold to obtain the initial corners.
2) The corner positions and directions are refined to sub-pixel accuracy;
Sub-pixel corner localization is carried out with the cornerSubPix() function in OpenCV, which refines the corners to sub-pixel level and thus gives sub-pixel corner detection. To refine the edge orientation vectors, the deviation from the image gradient values is minimized, with the neighbouring pixel set of mode i matched to the gradient value mi = [cos(γi) sin(γi)]^T. (The computation follows the document: Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot [C]// Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943.)
3) Finally the corners are marked and their sub-pixel coordinates are output; the checkerboard is grown and reconstructed according to the energy function, the corners are marked, and the sub-pixel corner coordinates are output;
Following the method provided in the document "Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot [C]// Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943", the energy function is optimized, the checkerboard is reconstructed and the corners are marked. The energy growth function is:
E(x, y) = Ecorners(y) + Estruct(x, y) (16)
where Ecorners is the negative of the current total number of checkerboard corners and Estruct is the matching degree between two adjacent corners and the predicted corner; the corner pixel values are output through OpenCV.
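The growth-based detector of Geiger et al. is not part of OpenCV's core API; as a simplified stand-in for steps 1)-3), the sketch below uses OpenCV's own chessboard detection followed by the cornerSubPix() refinement named in the text, returning sub-pixel corner coordinates. The pattern size, window size and termination criteria are illustrative assumptions.

```python
import cv2

def detect_subpixel_corners(gray, pattern_size=(7, 7)):
    """Detect checkerboard corners in a grayscale image and refine them to sub-pixel accuracy."""
    found, corners = cv2.findChessboardCorners(
        gray, pattern_size,
        flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)   # sub-pixel refinement
    return corners.reshape(-1, 2)    # one (u, v) pair per corner
```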
Linear correlation analysis between object imaging angle and ordinate pixel value is carried out with SPSS 22 and the Pearson correlation coefficients are output. It is verified that, for devices of different models and different camera rotation angles, the object ordinate pixel value has an extremely significant negative correlation with the actual imaging angle (p < 0.01). In addition, the invention also performs a significance test on the differences between the slopes of the linear functions relating object ordinate pixel value and imaging angle for different device models and camera rotation angles. The results show that these slope differences are extremely significant (p < 0.01), illustrating that devices of different models and different camera rotation angles have different depth extraction models.
An abstract function is set up according to the linear relationship between the object imaging angle α and the ordinate pixel value v, and a spatial relational model containing the three parameters object imaging angle α, ordinate pixel value v and camera rotation angle β is established, i.e. α = F(v, β).
For devices of different models and different camera rotation angles, the ordinate pixel value of the photographed object and its imaging angle show an extremely significant negative linear correlation, while the slope and intercept of the linear relationship differ; therefore let:
α = F(v, β) = a·v + b (17)
where the parameters a and b are related to the camera model and the camera rotation angle.
α takes its minimum value α = αmin = 90° - θ - β, where θ is half the camera vertical field-of-view angle, when the photographed object projects to the lowermost end of the picture, i.e. v = vmax (vmax is the number of valid pixels of the column coordinate of the camera CMOS or CCD image sensor); substituting into formula (17) gives:
90° - β - θ = a·vmax + b (18)
When αmin + 2θ > 90°, i.e. θ > β, the camera's upper viewing angle is above the horizon, and the camera shooting perspective geometry model is as in Fig. 4. At infinite distance on the ground plane α approaches 90°, and v is then approximately equal to v0 - tanβ·fy, where fy is the camera focal length in pixel units (the same holds when β is negative, i.e. when the camera is rotated counter-clockwise); substituting into formula (17) gives:
90° = a·(v0 - tanβ·fy) + b (19)
When αmin + 2θ < 90°, i.e. θ < β, the camera's upper viewing angle is below the horizon, and the camera shooting perspective geometry model is as in Fig. 5. The imaging angle α of an object at infinite distance on the ground plane reaches its maximum αmax = αmin + 2θ = 90° - β + θ, i.e. the photographed object projects to the highest point of the picture and v = 0; substituting into formula (17) gives:
90° - β + θ = b (20)
According to the construction principle of the pinhole camera, the tangent of the half vertical field-of-view angle θ equals half the side length of the camera CMOS or CCD image sensor divided by the camera focal length, so θ can be calculated:
θ = arctan(LCMOS/(2f)) (21)
where LCMOS in formula (21) is the side length of the camera CMOS or CCD image sensor. Combining formulas (18)~(21) yields F(v, β) as formula (10), in which δ is the camera nonlinear distortion error term. Combining the shooting height h of the mobile phone camera, the mobile phone camera depth extraction model is established according to trigonometric relations.
Three, acquire an image of the target to be measured and obtain the target point pixel values u, v. The image is acquired with the mobile phone camera and the perspective geometry model of Fig. 6 is established, where f is the camera focal length, θ is half the camera vertical field-of-view angle, h is the camera shooting height, β is the rotation angle of the camera about the ox axis of the camera coordinate system (β is positive when the camera rotates clockwise and negative when it rotates counter-clockwise, and its value is obtained from the camera's internal gravity sensor), and α is the object imaging angle. Combining the lens distortion parameters obtained by the camera calibration of the first step, nonlinear distortion correction is applied to the radial and tangential distortion errors of the image; the corrected ideal linear normalized coordinate values (x, y) are substituted into formula (1) to compute the pixel coordinates of every image point after correction, and the corrected pixel values are interpolated by bilinear interpolation to obtain the corrected image. The corrected image is then preprocessed with computer vision and image processing techniques, including image binarization, morphological operations and object contour edge detection, to obtain the object edge, from which the pixel value (u, v) of the geometric centre point of the edge where the object touches the ground is calculated.
Four, using the camera intrinsic parameters and target point pixel values obtained in the preceding steps, together with the camera depth extraction model, calculate the distance L from any point on the image of the target to the mobile phone camera. According to the magnitude relationship between the camera rotation angle β and the half vertical field-of-view angle θ, the corresponding depth model is selected; the camera intrinsic parameters computed in the preceding steps (the image centre pixel value v0, the normalized focal length fy along the y-axis and the image resolution vmax), the ordinate pixel value v of the target, the camera rotation angle β and the mobile phone camera shooting height h are substituted into the depth extraction model to calculate the target point depth value D.
Fig. 7 is a schematic diagram of the camera stereo imaging system, in which point P is the camera position. The straight line through points A and B is parallel to the image plane; the coordinates of A in the camera coordinate system are (X, Y, Z), the coordinates of point B are (X+Tx, Y, Z), and they project onto the image plane at A'(xl, yl) and B'(xr, yr). From formula (2):
Combining formula (1) and formula (22), the horizontal disparity d of the two points A' and B', which have the same Y value and equal depth Z, can be derived:
Thus, knowing the camera focal length f, the image centre coordinates (u0, v0) and the physical size dx of each pixel on the image plane in the x-axis direction, and combining the depth extraction model, the perpendicular distance Tx from the target point to the optical axis direction is calculated:
In the pinhole model the transformation relationships between the camera coordinate systems are as shown in Fig. 8. On the basis of the calculated target point depth value D and its perpendicular distance Tx to the optical axis direction, the distance L from any point on the image to the shooting camera can be calculated according to formulas (11)~(12):
Embodiment 1
The optimized depth extraction and passive ranging method based on monocular vision of the invention is illustrated below taking a Xiaomi 3 (MI 3) mobile phone as an example.
One, calibrate the mobile phone camera and obtain the camera intrinsic parameters and the image resolution
A checkerboard calibration board with 8*9 squares of size 20*20 mm is used as the experimental material for camera calibration. Twenty calibration board pictures at different angles are acquired with the Xiaomi 3 mobile phone camera, and the Xiaomi 3 (MI 3) mobile phone camera is calibrated with OpenCV according to the above improved camera calibration model with nonlinear distortion terms.
The calibration board pictures are first read with the fin() function, and the image resolution of the first picture is obtained from .cols and .rows; then the sub-pixel corners in the calibration board pictures are extracted with the find4QuadCornerSubpix() function and marked with the drawChessboardCorners() function; the calibrateCamera() function is called to calibrate the camera, the obtained intrinsic and extrinsic camera parameters are used to re-project the three-dimensional points in space to obtain new projected points, and the error between the new and old projected points is calculated; finally the camera intrinsic matrix and distortion parameters are output and saved.
The camera intrinsic parameters obtained by calibration are: fx = 3486.5637, u0 = 1569.0383, fy = 3497.4652, v0 = 2107.9899; the image resolution is 3120 × 4208; the lens distortion parameters are: [0.0981, -0.1678, 0.0003, -0.0025, 0.0975].
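The embodiment describes this calibration workflow with OpenCV (Java bindings in the text); a Python sketch of the same pipeline (read the boards, extract sub-pixel corners, call calibrateCamera(), report the reprojection error) is shown below. The inner-corner count assumed for the 8*9 board and the file path are placeholders.

```python
import glob
import cv2
import numpy as np

pattern = (8, 9)          # inner-corner grid assumed from the 8*9 board description
square = 20.0             # square size in mm
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("boards/*.jpg"):                 # placeholder path to the 20 board pictures
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_pts.append(objp)
        img_pts.append(corners)

assert img_pts, "no usable board images found"
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("fx, fy, u0, v0:", K[0, 0], K[1, 1], K[0, 2], K[1, 2])
print("distortion:", dist.ravel(), "reprojection RMS:", rms)
```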
Two, establish the camera depth extraction model through the acquisition of images of the novel target
The invention uses a traditional checkerboard calibration board with 45*45 mm squares as the initial experimental material for the target design. To calculate the difference between adjacent square lengths, six groups of experiments were designed: the traditional checkerboard corner values with a grid size of 45*45 mm were extracted, and the actual physical distance represented by a unit pixel in the world coordinate system between adjacent corners was computed. To ensure that the ordinate pixel value differences between corners are roughly equal, the length yi of each grid square takes the values shown in Table 1.
Table 1  Calculated width of each grid square
Pearson correlation analysis shows an extremely significant linear relationship between length and actual distance (p < 0.01), with correlation coefficient r equal to 0.975; the derivative f'(x) = 0.262 of f(x) is obtained by the least squares method. Therefore, when the row of target grid squares nearest the camera is 45*45 mm, the width of every row is fixed and the length increment Δd is 11.79 mm.
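The least-squares step can be reproduced with numpy as sketched below; the (distance, length) arrays are placeholders standing in for the Table 1 data, which is not reproduced in this text, and the fitted slope plays the role of f'(x).

```python
import numpy as np

# Placeholder (distance, length) pairs standing in for the Table 1 data
x = np.array([500.0, 800.0, 1100.0, 1400.0, 1700.0, 2000.0])   # actual distance to the camera (mm)
y = np.array([140.0, 215.0, 295.0, 375.0, 455.0, 535.0])        # computed grid length (mm)

slope, intercept = np.polyfit(x, y, 1)      # least-squares line; the slope plays the role of f'(x)
delta_d = 45.0 * 0.262                      # with the reported f'(x) = 0.262 this gives 11.79 mm
print(slope, delta_d)
```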
The corners of the novel target are extracted with the corner extraction algorithm of the specific implementation steps.
The invention selects smartphones of three different models, Xiaomi, Huawei and iPhone, as image acquisition devices, with camera rotation angles β = {-10°, 0°, 10°, 20°, 30°}. Data are acquired with the corner detection algorithm and the relationship is fitted with functions; Fig. 9 shows the relationship between ordinate pixel value and object imaging angle for the three smartphone models at β = 10°, and Fig. 10 shows the relationship between ordinate pixel value and object imaging angle at different camera rotation angles.
For camera devices of different models and different camera rotation angles, the object imaging angle shows a decreasing trend as the ordinate pixel value increases, and differences in device model and camera rotation angle produce different linear functional relationships between pixel value and imaging angle. Linear correlation analysis between object imaging angle and ordinate pixel value is carried out with SPSS 22, and the Pearson correlation coefficients r that are output are shown in Table 2.
Table 2  Pearson correlation coefficients between object ordinate pixel value and actual imaging angle
Note: ** indicates an extremely significant correlation (p < 0.01).
It is verified that, for the tested device models and camera rotation angles, the object ordinate pixel value has an extremely significant negative correlation with the actual imaging angle (p < 0.01), with correlation coefficient r greater than 0.99. In addition, the invention also performs a significance test on the differences between the slopes of the linear functions relating object ordinate pixel value and imaging angle for different device models and camera rotation angles. The results show that the slope differences are extremely significant (p < 0.01), illustrating that devices of different models and different camera rotation angles have different depth extraction models.
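The patent performs this correlation analysis in SPSS 22; an equivalent check can be sketched with scipy, using placeholder arrays in place of the measured (v, α) pairs.

```python
import numpy as np
from scipy.stats import pearsonr

v = np.array([900.0, 1400.0, 1900.0, 2400.0, 2900.0, 3400.0])   # placeholder ordinate pixel values
alpha = np.array([88.0, 80.5, 73.2, 65.8, 58.5, 51.1])           # placeholder imaging angles (deg)

r, p = pearsonr(v, alpha)    # a strongly negative r with p < 0.01 indicates the linear relationship used above
print(r, p)
```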
According to the depth extraction model of the specific embodiment, the intrinsic parameters of the Xiaomi 3 mobile phone camera are substituted into formula (10):
The device-specific depth extraction model is then obtained according to trigonometric relations as follows:
Three, acquire an image of the target to be measured and obtain the target point pixel values u, v.
A Xiaomi 3 (MI 3) mobile phone camera is used as the picture acquisition device; pictures are acquired with the camera mounted on a tripod, the measured height h of the camera above the ground is 305 mm, and the camera rotation angle β is 0°.
Nonlinear distortion correction is applied to the radial and tangential distortion errors of the image;
according to the lens distortion parameters obtained by the camera calibration of the first step, [0.0981, -0.1678, 0.0003, -0.0025, 0.0975], the corrected ideal linear normalized coordinate values are calculated from formula (5):
The pixel coordinates of every image point after correction are calculated by combining formulas (1) and (2), and the corrected image is obtained by bilinear interpolation;
The invention measures the depth and distance of a rectangular box placed on level ground as an example. The acquired image is first binarized, edge detection is then applied to the rectangular box with the Canny operator, and the object contour is extracted. The extracted pixel value of the centre point of the bottom edge of the rectangular box is (1851.23, 3490).
Four, using the camera intrinsic parameters and target point pixel values obtained in the preceding steps, together with the camera depth extraction model, calculate the distance L from any point on the image of the target to the mobile phone camera.
Substituting the camera intrinsic parameters, the camera shooting height h, the rotation angle β and the ordinate pixel value v of the centre point of the bottom edge of the rectangular box into formula (24) gives an actual object imaging angle equal to 69.58°. The target point depth value D (unit: mm) is calculated according to trigonometric relations:
D = 305 * tan 69.58° = 819.21 (27)
Substituting the parameters fx, u0, D and the abscissa pixel value u of the centre point of the bottom edge of the rectangular box into formula (12), the perpendicular distance Tx from the object geometric centre point to the optical axis direction is calculated:
Therefore, the distance L from the object to the ground projection point of the shooting camera is:
Measured with a tape, the distance of the rectangular box from the camera's ground projection point is 827 mm, so the relative error of ranging with the present invention is 0.62%.
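For reference, these numbers can be reproduced from the calibration results above together with the assumed forms Tx = (u - u0)·D/fx and L = sqrt(D² + Tx²): the sketch below yields α ≈ 69.58°, D ≈ 819.2 mm, Tx ≈ 66 mm and L ≈ 822 mm, about 0.6% below the tape-measured 827 mm.

```python
import math

fx, fy, u0, v0 = 3486.5637, 3497.4652, 1569.0383, 2107.9899
v_max, h, beta = 4208, 305.0, 0.0
u, v = 1851.23, 3490.0

theta = math.degrees(math.atan((v_max / 2.0) / fy))   # half vertical field of view, pixel-unit form of formula (21)
a = (theta + beta) / (v0 - v_max)                      # slope of alpha = a*v + b for theta > beta, beta = 0
b = 90.0 - a * v0
alpha = a * v + b                                      # ~ 69.58 deg
D = h * math.tan(math.radians(alpha))                  # ~ 819.2 mm
Tx = (u - u0) * D / fx                                 # ~ 66 mm (assumed form of formula (12))
L = math.hypot(D, Tx)                                  # ~ 822 mm vs. 827 mm measured
print(alpha, D, Tx, L)
```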

Claims (1)

1. An optimized depth extraction and passive ranging method based on monocular vision, characterized by comprising the following steps:
Step 1: calibrate the mobile phone camera and obtain the camera intrinsic parameters and the image resolution
Zhang Zhengyou's calibration method is used, and the improved calibration model with nonlinear distortion terms is introduced to correct the camera intrinsic parameters
First, let the physical size of each pixel on the image plane be dx*dy, and let the coordinates of the origin of the image coordinate system (x, y) in the pixel coordinate system (u, v) be (u0, v0); (x, y) are the normalized coordinates of an image point in the real image; any pixel in the image satisfies the following relationship between the two coordinate systems:
u = fx·x + u0, v = fy·y + v0 (1)
fx and fy are the normalized focal lengths along the x-axis and y-axis; any point Pc(Xc, Yc, Zc) in the camera coordinate system projects onto the image coordinate system at (xc, yc, f), the image coordinate plane being perpendicular to the optical axis (z-axis) at a distance f from the origin; from similar triangles it follows that:
xc = f·Xc/Zc, yc = f·Yc/Zc (2)
the improved calibration model with nonlinear distortion terms is introduced, covering the radial distortion caused by lens shape defects and the tangential distortion caused by different degrees of decentring in the optical system; the radial distortion mathematical model is formula (3), in which r² = x² + y² and (x', y') are the normalized coordinates of the ideal, distortion-free linear camera coordinate system after correction; the radial distortion value depends on the position of the image point in the image and is larger at the image edges;
the tangential distortion mathematical model is formula (4); the two models contain five distortion coefficients k1, k2, k3, p1, p2 in total, and combining formulas (3) and (4) gives the distortion correction function model, formula (5);
the transformation from world coordinates to camera coordinates satisfies:
Pc = R(PW - C) = R·PW + T (6)
combining formulas (1)~(6) and writing the result in homogeneous coordinates and matrix form:
Zc·[u, v, 1]^T = Mint·Mext·[XW, YW, ZW, 1]^T (7)
Mint and Mext are respectively the intrinsic and extrinsic parameter matrices of the camera calibration, where the camera intrinsic parameters include the image centre pixel values u0, v0 and the normalized focal lengths fx, fy along the x-axis and y-axis; the mobile phone camera calibration is implemented in Java with OpenCV, and the intrinsic parameters of the mobile phone camera, the lens distortion parameters and the image resolution vmax, umax are obtained
Step 2: establish the depth extraction model
an abstract function is set up according to the linear relationship between the object imaging angle α and the ordinate pixel value v, and a spatial relational model containing the three parameters object imaging angle α, ordinate pixel value v and camera rotation angle β is established, i.e. α = F(v, β);
for devices of different models and different camera rotation angles, the ordinate pixel value of the photographed object and its imaging angle show an extremely significant negative linear correlation, while the slope and intercept of the linear relationship differ; therefore let:
α = F(v, β) = a·v + b (17)
where the parameters a and b are related to the camera model and the camera rotation angle;
α takes its minimum value α = αmin = 90° - θ - β, where θ is half the camera vertical field-of-view angle, when the photographed object projects to the lowermost end of the picture, i.e. v = vmax (vmax is the number of valid pixels of the column coordinate of the camera CMOS or CCD image sensor); substituting into formula (17) gives:
90° - β - θ = a·vmax + b (18)
when αmin + 2θ > 90°, i.e. θ > β, the camera's upper viewing angle is above the horizon; at infinite distance on the ground plane α approaches 90°, and v is then approximately equal to v0 - tanβ·fy, where fy is the camera focal length in pixel units (the same holds when β is negative, i.e. when the camera is rotated counter-clockwise); substituting into formula (17) gives:
90° = a·(v0 - tanβ·fy) + b (19)
when αmin + 2θ < 90°, i.e. θ < β, the camera's upper viewing angle is below the horizon, and the imaging angle α of an object at infinite distance on the ground plane reaches its maximum αmax = αmin + 2θ = 90° - β + θ, i.e. the photographed object projects to the highest point of the picture and v = 0; substituting into formula (17) gives:
90° - β + θ = b (20)
according to the construction principle of the pinhole camera, the tangent of the half vertical field-of-view angle θ equals half the side length of the camera CMOS or CCD image sensor divided by the camera focal length, so θ can be calculated:
θ = arctan(LCMOS/(2f)) (21)
where LCMOS in formula (21) is the side length of the camera CMOS or CCD image sensor; combining formulas (18)~(21) yields F(v, β) as formula (10), in which δ is the camera nonlinear distortion error term; combining the shooting height h of the mobile phone camera, the mobile phone camera depth extraction model is established according to trigonometric relations
Step 3: acquire an image of the target to be measured and obtain the target point pixel values u, v;
the image acquisition step for the target to be measured also includes nonlinear distortion correction and preprocessing of the image of the target, namely:
the image is acquired with the mobile phone camera and the shooting perspective geometry model is established, where f is the camera focal length, θ is half the camera vertical field-of-view angle, h is the camera shooting height, β is the rotation angle of the camera about the ox axis of the camera coordinate system (β is positive when the camera rotates clockwise and negative when it rotates counter-clockwise, and its value is obtained from the camera's internal gravity sensor), and α is the object imaging angle;
combining the lens distortion parameters obtained by the camera calibration of step 1, nonlinear distortion correction is applied to the radial and tangential distortion errors of the image; the corrected ideal linear normalized coordinate values (x, y) are substituted into formula (1) to compute the pixel coordinates of every image point after correction, and the corrected pixel values are interpolated by bilinear interpolation to obtain the corrected image; the corrected image is then preprocessed with computer vision and image processing techniques, including image binarization, morphological operations and object contour edge detection, to obtain the object edge, from which the pixel value (u, v) of the geometric centre point of the edge where the object touches the ground is calculated;
Step 4: using the camera intrinsic parameters and target point pixel values obtained in the preceding steps, together with the camera depth extraction model, calculate the distance L from any point on the image of the target to the mobile phone camera
according to the magnitude relationship between the camera rotation angle β and the half vertical field-of-view angle θ, the corresponding depth extraction model is selected; the camera intrinsic parameters computed in step 1 (the image centre pixel value v0, the normalized focal length fy along the y-axis and the image resolution vmax), the ordinate pixel value v of the target computed in step 3, the camera rotation angle β and the mobile phone camera shooting height h are substituted into the depth extraction model to calculate the target point depth value D, and the perpendicular distance Tx from the target point to the optical axis direction is calculated:
according to formulas (11)~(12), the distance L from any point on the image to the shooting camera can then be calculated:
CN201810918876.2A 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method Active CN109146980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810918876.2A CN109146980B (en) 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810918876.2A CN109146980B (en) 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method

Publications (2)

Publication Number Publication Date
CN109146980A true CN109146980A (en) 2019-01-04
CN109146980B CN109146980B (en) 2021-08-10

Family

ID=64793074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810918876.2A Active CN109146980B (en) 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method

Country Status (1)

Country Link
CN (1) CN109146980B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9741118B2 (en) * 2013-03-13 2017-08-22 Fotonation Cayman Limited System and methods for calibration of an array camera
CN104034514A (en) * 2014-06-12 2014-09-10 中国科学院上海技术物理研究所 Large visual field camera nonlinear distortion correction device and method
CN104331896A (en) * 2014-11-21 2015-02-04 天津工业大学 System calibration method based on depth information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
J. CARDILLO ET AL.: "3-D position sensing using a passive monocular vision system", IEEE Transactions on Pattern Analysis and Machine Intelligence *
冯春: "Research on target recognition and localization based on monocular vision" *

Cited By (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109883654A (en) * 2019-01-25 2019-06-14 武汉精立电子技术有限公司 A kind of chessboard trrellis diagram, generation method and localization method for OLED sub-pixel positioning
CN111508027A (en) * 2019-01-31 2020-08-07 杭州海康威视数字技术股份有限公司 Method and device for calibrating external parameters of camera
CN111508027B (en) * 2019-01-31 2023-10-20 杭州海康威视数字技术股份有限公司 Method and device for calibrating external parameters of camera
CN111683193A (en) * 2019-03-11 2020-09-18 株式会社东芝 Image processing apparatus
CN109977853A (en) * 2019-03-25 2019-07-05 太原理工大学 A kind of mine group overall view monitoring method based on more identifiers
CN110065075A (en) * 2019-05-29 2019-07-30 哈尔滨工业大学 A kind of spatial cell robot external status cognitive method of view-based access control model
CN110672020A (en) * 2019-06-14 2020-01-10 浙江农林大学 Stand tree height measuring method based on monocular vision
CN110288656A (en) * 2019-07-01 2019-09-27 太原科技大学 A kind of object localization method based on monocular cam
CN110374045A (en) * 2019-07-29 2019-10-25 哈尔滨工业大学 A kind of intelligence de-icing method
CN110374045B (en) * 2019-07-29 2021-09-28 哈尔滨工业大学 Intelligent deicing method
CN110555888A (en) * 2019-08-22 2019-12-10 浙江大华技术股份有限公司 Master-slave camera calibration method, storage device, computer equipment and system thereof
CN110515884B (en) * 2019-09-04 2023-02-03 西安工业大学 Construction site reinforcing bar range unit based on image analysis
CN110515884A (en) * 2019-09-04 2019-11-29 西安工业大学 Construction site reinforcing bar range unit based on image analysis
CN110728638A (en) * 2019-09-25 2020-01-24 深圳疆程技术有限公司 Image distortion correction method, vehicle machine and vehicle
CN110737942B (en) * 2019-10-12 2023-05-02 清华四川能源互联网研究院 Underwater building model building method, device, equipment and storage medium
CN110737942A (en) * 2019-10-12 2020-01-31 清华四川能源互联网研究院 Underwater building model establishing method, device, equipment and storage medium
CN111192235A (en) * 2019-12-05 2020-05-22 中国地质大学(武汉) Image measuring method based on monocular vision model and perspective transformation
CN111046843A (en) * 2019-12-27 2020-04-21 华南理工大学 Monocular distance measurement method under intelligent driving environment
CN111046843B (en) * 2019-12-27 2023-06-20 华南理工大学 Monocular ranging method in intelligent driving environment
CN111250406B (en) * 2020-03-16 2023-11-14 科为升视觉技术(苏州)有限公司 Automatic placement method and system for PCB detection assembly line based on visual positioning
CN111250406A (en) * 2020-03-16 2020-06-09 科为升视觉技术(苏州)有限公司 PCB detection production line automatic placement method and system based on visual positioning
FR3111222A1 (en) 2020-06-06 2021-12-10 Olivier Querbes Generation of scale 3D models from 2D images produced by a monocular imaging device
WO2021245290A1 (en) 2020-06-06 2021-12-09 Querbes Olivier Generation of full-scale 3d models from 2d images produced by a single-eye imaging device
CN111623776A (en) * 2020-06-08 2020-09-04 昆山星际舟智能科技有限公司 Method for measuring distance of target by using near infrared vision sensor and gyroscope
CN112001880A (en) * 2020-06-29 2020-11-27 浙江大学 Characteristic parameter detection method and device for planar component
CN112001880B (en) * 2020-06-29 2024-01-05 浙江大学 Method and device for detecting characteristic parameters of planar member
CN111798444A (en) * 2020-07-17 2020-10-20 太原理工大学 Unmanned workshop steel pipe length measuring method based on image distortion correction color separation processing
CN111798444B (en) * 2020-07-17 2023-06-27 太原理工大学 Unmanned workshop steel pipe length measurement method based on image distortion correction color separation processing
CN112135125A (en) * 2020-10-28 2020-12-25 歌尔光学科技有限公司 Camera internal reference testing method, device, equipment and computer readable storage medium
CN112489116A (en) * 2020-12-07 2021-03-12 青岛科美创视智能科技有限公司 Method and system for estimating target distance by using single camera
CN112798812A (en) * 2020-12-30 2021-05-14 中山联合汽车技术有限公司 Target speed measuring method based on monocular vision
CN112798812B (en) * 2020-12-30 2023-09-26 中山联合汽车技术有限公司 Target speed measuring method based on monocular vision
CN112686961A (en) * 2020-12-31 2021-04-20 杭州海康机器人技术有限公司 Method and device for correcting calibration parameters of depth camera
CN112686961B (en) * 2020-12-31 2024-06-04 杭州海康机器人股份有限公司 Correction method and device for calibration parameters of depth camera
CN112907462A (en) * 2021-01-28 2021-06-04 黑芝麻智能科技(上海)有限公司 Distortion correction method and system for ultra-wide-angle camera device and shooting device comprising distortion correction system
CN112528974A (en) * 2021-02-08 2021-03-19 成都睿沿科技有限公司 Distance measuring method and device, electronic equipment and readable storage medium
CN113091607A (en) * 2021-03-19 2021-07-09 华南农业大学 Calibration-free space point coordinate measuring method for single smart phone
CN113034565A (en) * 2021-03-25 2021-06-25 奥比中光科技集团股份有限公司 Monocular structured light depth calculation method and system
CN113034618A (en) * 2021-04-20 2021-06-25 延锋伟世通汽车电子有限公司 Method and system for measuring imaging distance of automobile head-up display
TWI816166B (en) * 2021-04-22 2023-09-21 滿拓科技股份有限公司 Method and system for detecting object depth and horizontal distance
CN113137920A (en) * 2021-05-19 2021-07-20 重庆大学 Underwater measurement equipment and underwater measurement method
CN113137920B (en) * 2021-05-19 2022-09-23 重庆大学 Underwater measurement equipment and underwater measurement method
CN113344906B (en) * 2021-06-29 2024-04-23 阿波罗智联(北京)科技有限公司 Camera evaluation method and device in vehicle-road cooperation, road side equipment and cloud control platform
CN113344906A (en) * 2021-06-29 2021-09-03 阿波罗智联(北京)科技有限公司 Vehicle-road cooperative camera evaluation method and device, road side equipment and cloud control platform
CN113686314A (en) * 2021-07-28 2021-11-23 武汉科技大学 Monocular water surface target segmentation and monocular distance measurement method of shipborne camera
CN113686314B (en) * 2021-07-28 2024-02-27 武汉科技大学 Monocular water surface target segmentation and monocular distance measurement method for shipborne camera
CN114018212A (en) * 2021-08-03 2022-02-08 广东省国土资源测绘院 Monocular distance measurement-oriented pitch angle correction method and system for dome camera
CN114018212B (en) * 2021-08-03 2024-05-14 广东省国土资源测绘院 Spherical camera monocular ranging-oriented pitch angle correction method and system
CN113838150A (en) * 2021-08-30 2021-12-24 上海大学 Moving target three-dimensional trajectory tracking method based on electro-hydraulic adjustable-focus lens
CN113838150B (en) * 2021-08-30 2024-03-19 上海大学 Moving target three-dimensional track tracking method based on electrohydraulic adjustable focus lens
CN113888640A (en) * 2021-09-07 2022-01-04 浙江大学 Improved calibration method suitable for unmanned aerial vehicle pan-tilt camera
CN113888640B (en) * 2021-09-07 2024-02-02 浙江大学 Improved calibration method suitable for unmanned aerial vehicle pan-tilt camera
CN113720299B (en) * 2021-09-18 2023-07-14 兰州大学 Ranging method based on sliding scene of three-dimensional camera or monocular camera on guide rail
CN113720299A (en) * 2021-09-18 2021-11-30 兰州大学 Distance measurement method based on scene with three-dimensional camera or monocular camera sliding on guide rail
CN114219850A (en) * 2021-11-16 2022-03-22 英博超算(南京)科技有限公司 Vehicle ranging system applying 360-degree panoramic looking-around technology
CN114219850B (en) * 2021-11-16 2024-05-10 英博超算(南京)科技有限公司 Vehicle ranging system applying 360-degree panoramic looking-around technology
CN114200532A (en) * 2021-12-14 2022-03-18 中国航发南方工业有限公司 Device and method for detecting excess in casting case of aero-engine
CN114200532B (en) * 2021-12-14 2024-05-14 中国航发南方工业有限公司 Device and method for detecting residues in casting case of aero-engine
CN114509048B (en) * 2022-01-20 2023-11-07 中科视捷(南京)科技有限公司 Overhead transmission line space three-dimensional information acquisition method and system based on monocular camera
CN114509048A (en) * 2022-01-20 2022-05-17 中科视捷(南京)科技有限公司 Monocular camera-based overhead transmission line space three-dimensional information acquisition method and system
CN114684202A (en) * 2022-06-01 2022-07-01 浙江大旗新能源汽车有限公司 Intelligent system for automatically driving vehicle and integrated control method thereof
CN114684202B (en) * 2022-06-01 2023-03-10 浙江大旗新能源汽车有限公司 Intelligent system for automatically driving vehicle and integrated control method thereof
CN115507752B (en) * 2022-09-29 2023-07-07 苏州大学 Monocular vision ranging method and system based on parallel environment elements
CN115507752A (en) * 2022-09-29 2022-12-23 苏州大学 Monocular vision distance measurement method and system based on parallel environment elements

Also Published As

Publication number Publication date
CN109146980B (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN109146980A (en) The depth extraction and passive ranging method of optimization based on monocular vision
CN109035320A (en) Depth extraction method based on monocular vision
CN109269430B (en) Multi-standing-tree breast height diameter passive measurement method based on deep extraction model
Chen et al. High-accuracy multi-camera reconstruction enhanced by adaptive point cloud correction algorithm
CN109255818A (en) A kind of extracting method of novel target and its sub-pixel angle point
CN102376089B (en) Target correction method and system
CN107833181B (en) Three-dimensional panoramic image generation method based on zoom stereo vision
CN105716542B (en) A kind of three-dimensional data joining method based on flexible characteristic point
CN103971378A (en) Three-dimensional reconstruction method of panoramic image in mixed vision system
CN106595528A (en) Digital speckle-based telecentric microscopic binocular stereoscopic vision measurement method
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
CN107977996B (en) Space target positioning method based on target calibration positioning model
KR101759798B1 (en) Method, device and system for generating an indoor two dimensional plan view image
CN109974618B (en) Global calibration method of multi-sensor vision measurement system
CN109961485A (en) A method of target positioning is carried out based on monocular vision
CN110763204B (en) Planar coding target and pose measurement method thereof
CN103473771A (en) Method for calibrating camera
CN107084680A (en) Target depth measuring method based on machine monocular vision
CN104463791A (en) Fisheye image correction method based on spherical model
CN109448043A (en) Standing tree height extracting method under plane restriction
CN104537661A (en) Monocular camera area measuring method and system
Wang et al. Error analysis and improved calibration algorithm for LED chip localization system based on visual feedback
CN102914295A (en) Computer vision cube calibration based three-dimensional measurement method
Mei et al. Monocular vision for pose estimation in space based on cone projection
CN102881040A (en) Three-dimensional reconstruction method for mobile photographing of digital camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant