CN109146980B - Monocular vision based optimized depth extraction and passive distance measurement method - Google Patents

Monocular vision based optimized depth extraction and passive distance measurement method

Info

Publication number
CN109146980B
CN109146980B (application CN201810918876.2A)
Authority
CN
China
Prior art keywords
camera
image
value
target object
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810918876.2A
Other languages
Chinese (zh)
Other versions
CN109146980A (en)
Inventor
徐爱俊
武新梅
周素茵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang A&F University ZAFU
Original Assignee
Zhejiang A&F University ZAFU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang A&F University ZAFU filed Critical Zhejiang A&F University ZAFU
Priority to CN201810918876.2A priority Critical patent/CN109146980B/en
Publication of CN109146980A publication Critical patent/CN109146980A/en
Application granted granted Critical
Publication of CN109146980B publication Critical patent/CN109146980B/en
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/50: Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an optimized depth extraction and passive distance measurement method based on monocular vision, comprising the following steps. Step one: calibrate the mobile phone camera to obtain the camera internal parameters and image resolution. Step two: establish the depth extraction model α = F(v, β). Step three: acquire the pixel values u' and v' of the target point from an image of the target object to be measured. Step four: using the camera internal parameters and target point pixel values obtained in the preceding steps, combined with the camera depth extraction model, calculate the distance L between any point on the image of the target object to be measured and the mobile phone. The monocular vision-based optimized depth extraction and passive distance measurement method is applicable to cameras with different fields of view, focal lengths, image resolutions and other parameters, improves distance measurement precision, and provides support for target object measurement and real-scene three-dimensional reconstruction in machine vision.

Description

Monocular vision based optimized depth extraction and passive distance measurement method
Technical Field
The invention relates to the field of ground close-range photogrammetry, in particular to a passive distance measurement method for a pinhole camera under a monocular vision system.
Background
Image-based target distance measurement and positioning mainly comprises two methods: active distance measurement and passive distance measurement [1]. Active distance measurement mounts a laser ranging device on the machine (such as a camera) [2-4]. Passive distance measurement calculates the depth information of the target in a two-dimensional digital image through machine vision, and then calculates the target distance from the image pixel information and the camera imaging principle [5-6]. Machine vision distance measurement is divided into monocular vision distance measurement and binocular vision distance measurement [7-9]. The key step in the distance measurement process is acquiring the depth information of the target object; early depth acquisition methods relied mainly on binocular stereo vision or camera motion information and needed multiple images to recover the image depth [10-16]. Compared with binocular vision ranging, monocular ranging image acquisition does not require strict hardware conditions and is more competitive.
In the prior art there are various methods for acquiring the target depth information in a monocular vision system, for example corresponding-point calibration to obtain the depth of the target to be measured [17-19]. Document [17] studies a robot target positioning and ranging method based on monocular vision: the internal and external parameters of the camera are obtained through camera calibration, and the conversion between the image coordinate system and the world coordinate system is solved with a projection model to calculate the target depth. The drawbacks are that target images in different directions must be acquired, the corresponding coordinates of each point in the world and image coordinate systems must be recorded accurately, and the calibration precision strongly affects the measurement precision.
Document [20] places a reference object on the road surface, measures its distance, selects an appropriate mathematical model to fit the correspondence between the distance of the reference and its pixels, and extracts depth information in real time from that correspondence. In terms of accuracy, however, the method of document [20] suffers from both distance measurement error and fitting error.
Document [21] designs a vertical target image, establishes a mapping between the image ordinate pixel value and the actually measured angle by detecting corner data of the image, and obtains vehicle-mounted depth information from this relation combined with the known height of the vehicle-mounted monocular camera. Because internal parameters differ between camera devices, target image information must be collected again and a new camera depth information extraction model established for each camera model; moreover, manufacturing and assembly differences give different vehicle-mounted cameras different pitch angles, so the method of document [21] generalizes poorly.
In addition, the method of document [21] uses a vertical target to study the relation between the imaging angle of an image point on the vertical plane and its ordinate pixel value, and applies that relation to measuring object distance on the horizontal plane; since the distortion of a camera in the horizontal and vertical directions is not identical, the distance measurement accuracy is relatively low. The invention application No. 201710849961.3 discloses an improved camera calibration model and distortion correction model suitable for intelligent mobile terminal cameras (hereinafter, the improved calibration model with a nonlinear distortion term), which helps correct calibration board pictures and obtain the camera internal and external parameters with higher precision; its shortcoming is that the method is not extended to nonlinear distortion correction of the image to be measured or to measurement of the target object.
Reference documents:
[1] UAV target positioning method based on Monte Carlo Kalman filtering [J]. Journal of Northwestern Polytechnical University, 2017, 35(3): 435-.
[2] Lin F, Dong X, Chen B M, et al. A Robust Real-Time Embedded Vision System on an Unmanned Rotorcraft for Ground Target Following [J]. IEEE Trans on Industrial Electronics, 2012, 59(2): 1038-1049.
[3] Zhang [?], Lin, Hu Zhengliang, Zhu Jianjun, et al. Target position resolving method in an individual-soldier integrated sighting instrument [J]. Electronic Measurement Technology, 2014, 37(11): 1-3.
[4] Sun Junling, Sun Guangmen, Ma Pengge, et al. Laser target location based on symmetric wavelet denoising and asymmetric Gaussian fitting [J]. Chinese Journal of Lasers, 2017, 44(6): 178-.
[5] Shi Jie, Li Yinya, Qi Guoqing, et al. Passive tracking algorithm based on machine vision under incomplete measurement [J]. Journal of Huazhong University of Science and Technology, 2017, 45(6): 33-37.
[6] Xu Cheng, Huang Daqing, Kong [?]. Passive target positioning and precision analysis of a small unmanned aerial vehicle [J]. Chinese Journal of Scientific Instrument, 2015, 36(5): 1115-1122.
[7] Le Kehong, Jiang Min, Gong Yongying. A review of image depth extraction methods for 2-D to 3-D image/video conversion [J]. Journal of Image and Graphics, 2014, 19(10): 1393-.
[8] Wang Hao, Xu Zhiwen, Xi Kun, et al. Binocular ranging system based on OpenCV [J]. Journal of Jilin University, 2014, 32(2): 188-.
[9] Sun W, Chen L, Hu B, et al. Binocular vision-based position determination algorithm and system [C] // Proceedings of the 2012 International Conference on Computer Distributed Control and Intelligent Environmental Monitoring. Piscataway: IEEE Computer Society, 2012: 170-173.
[10] Ikeuchi K. Determining a depth map using a dual photometric stereo [J]. The International Journal of Robotics Research, 1987, 6(1): 15-31.
[11] Shao M, Simchony T, Chellappa R. New algorithms for reconstruction of a 3-D depth map from one or more images [C] // Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 530-535.
[12] Matthies L, Kanade T, Szeliski R. Kalman filter-based algorithms for estimating depth from image sequences [J]. International Journal of Computer Vision, 1989, 3(3): 209-238.
[13] Matthies L, Szeliski R, Kanade T. Incremental estimation of dense depth maps from image sequences [C] // Proceedings of CVPR'88. Ann Arbor: IEEE, 1988: 366-374.
[14] Mori T, Yamamoto M. A dynamic depth extraction method [C] // Proceedings of Third International Conference on Computer Vision. Osaka: IEEE, 1990: 672-676.
[15] Inoue H, Tachikawa T, Inaba M. Robot vision system with a correlation chip for real-time tracking, optical flow and depth map generation [C] // Proceedings of Robotics and Automation. Nice: IEEE, 1992: 1621-1626.
[16] Hu Tianxiang, Zheng [?], Zhou Douping. Tree image distance measurement method based on binocular vision [J]. Transactions of the Chinese Society for Agricultural Machinery, 2010, 41(11): 158-.
[17] Research on robot target positioning and ranging method based on monocular vision [J]. Computer Measurement & Control, 2012, 20(10): 2654-.
[18] Wu [?], Tang Zhenmin. Research on distance measurement in monocular autonomous robot visual navigation [J]. Robot, 2010, 32(6): 828-.
[19] Research on monocular vision-based front vehicle detection and distance measurement methods [J]. Video Engineering, 2011, 35(1): 125-.
[20] Wu C F, Lin C J, Lee C Y, et al. Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement [J]. IEEE Transactions on Systems, Man and Cybernetics - Part C: Applications and Reviews, 2012, 42(4): 577-589.
[21] Huang [?], Xu [?], et al. Monocular depth information extraction based on a single vertical target image [J]. Journal of Beijing University of Aeronautics and Astronautics, 2015, 41(4): 649-.
Disclosure of Invention
The invention aims to provide an optimized depth extraction and passive distance measurement method based on monocular vision that is applicable to cameras with different fields of view, focal lengths, image resolutions and other parameters, improves distance measurement precision, and provides support for target object measurement and real-scene three-dimensional reconstruction in machine vision.
In order to achieve the purpose, the invention adopts the following technical scheme:
a monocular vision-based optimized depth extraction and passive ranging method is characterized by comprising the following steps:
the method comprises the following steps: calibrating a mobile phone camera to obtain internal parameters and image resolution of the camera
The internal parameters of the camera are corrected by adopting the Zhang Zhengyou calibration method and introducing the improved calibration model with a nonlinear distortion term.
First, set the physical size of each pixel on the image plane to dx × dy, and let the origin of the image coordinate system (x, y) have coordinates (u0, v0) in the pixel coordinate system (u, v); (x, y) are the normalized coordinates of an image point in the actual image. Any pixel in the image satisfies the following relation between the two coordinate systems:

u = fx·x + u0, v = fy·y + v0   (1)

where fx and fy are the normalized focal lengths on the x-axis and y-axis (fx = f/dx, fy = f/dy). Any point Pc(Xc, Yc, Zc) in the camera coordinate system is projected onto the image coordinate system at (xc, yc, f); the plane of the image coordinate system is perpendicular to the optical axis (the z-axis), at a distance from the origin equal to the camera focal length f. According to the similar triangle principle:

xc = f·Xc/Zc, yc = f·Yc/Zc   (2)
The improved calibration model with the nonlinear distortion term is then introduced. The model includes radial distortion caused by lens shape defects and tangential distortion caused by eccentricity of the optical system. The radial distortion mathematical model is:

x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶), y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶)   (3)

where r² = x² + y², and (x', y') are the corrected normalized coordinate values of the ideal linear camera coordinate system without distortion terms. The radial distortion value depends on the position of the image point in the image and is larger at the image edges.
The tangential distortion mathematical model is:

x' = x + 2·p1·x·y + p2·(r² + 2x²), y' = y + p1·(r² + 2y²) + 2·p2·x·y   (4)

where k1, k2, k3 are the radial and p1, p2 the tangential distortion coefficients. The distortion correction function model obtained from equations (3) and (4) is:

x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x·y + p2·(r² + 2x²)
y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2y²) + 2·p2·x·y   (5)
the transformation from world coordinates to camera coordinates has the following relationship:
Pc=R·(PW-C)=R·PW+T (6)
Combining equations (1)-(6) and expressing them in homogeneous coordinates and matrix form gives:

Zc·[u, v, 1]ᵀ = Mint·Mext·[Xw, Yw, Zw, 1]ᵀ, Mint = [fx 0 u0; 0 fy v0; 0 0 1], Mext = [R T]   (7)

where Mint and Mext are the camera calibration intrinsic and extrinsic parameter matrices, respectively; the camera intrinsic parameters comprise the coordinates u0, v0 of the image coordinate system (x, y) origin in the pixel coordinate system (u, v) and the normalized focal lengths fx, fy on the x-axis and y-axis. Calibration of the mobile phone camera is implemented with Java combined with OpenCV, obtaining the internal parameters of the mobile phone camera, the camera lens distortion parameters, and the image resolution vmax, umax.
Step two: establishing a depth extraction model
According to the linear relation between the target object imaging angle α and the ordinate pixel value v, an abstract function is set and a spatial relation model α = F(v, β) is established, containing the three parameters target imaging angle α, ordinate pixel value v, and camera rotation angle β.
Under different device models and camera rotation angles, the ordinate pixel value of a photographed object and its imaging angle show a highly significant negative linear correlation, with differing slope and intercept; therefore set:
α=F(v,β)=a·v+b (17)
where the parameters a and b are related to the camera model and the camera rotation angle.
When α takes its minimum value α = αmin = 90° − θ − β, where θ is half of the vertical field angle of the camera, the object is projected to the bottom end of the picture and v = vmax (vmax is the number of effective pixels of the column coordinate of the camera CMOS or CCD image sensor); substituting into formula (17) gives:

90° − β − θ = a·vmax + b   (18)
When αmin + 2θ > 90°, i.e. θ > β, the upper edge of the camera's field of view is above the horizontal. As the ground plane recedes to infinity, α approaches 90° and v approaches v0 − fy·tanβ, where fy is the normalized focal length on the y-axis; the same holds when β is negative, i.e. when the camera rotates counterclockwise. Substituting into formula (17) gives:

90° = a·(v0 − fy·tanβ) + b   (19)
When αmin + 2θ < 90°, i.e. θ < β, the upper edge of the camera's field of view is below the horizontal, and the imaging angle of a target at infinity on the ground plane reaches the maximum value αmax = αmin + 2θ = 90° − β + θ; the object is then projected to the highest point of the picture and v = 0. Substituting into formula (17) gives:

90° − β + θ = b   (20)
According to the pinhole camera construction principle, the tangent of half of the vertical field angle θ of the camera equals half of the side length of the camera CMOS or CCD image sensor divided by the camera focal length, so the value of θ is:

θ = arctan(LCMOS/(2f)) = arctan(vmax/(2fy))   (21)

where LCMOS is the side length of the camera CMOS or CCD image sensor. Combining equations (18)-(21), F(v, β) is:

F(v, β) = 90° − (θ + β)·(v − v0 + fy·tanβ)/(vmax − v0 + fy·tanβ) + δ, when θ > β
F(v, β) = 90° − β + θ − 2θ·v/vmax + δ, when θ < β   (10)
in the formula (10), δ is a nonlinear distortion term error of the camera, and a mobile phone camera depth extraction model is established according to a trigonometric function by combining the shooting height h of the mobile phone camera:
Figure GDA0003050723000000072
step three: acquiring pixel values u' and v' of a target point by acquiring an image of the target object to be detected;
In the step of image acquisition of the target object to be measured, the method further comprises nonlinear distortion correction and preprocessing of the image of the target object to be measured, namely:
images are acquired through the mobile phone camera and a projection geometric model is established, where f is the camera focal length, θ is half of the vertical field angle of the camera, h is the camera shooting height, and β is the rotation angle of the camera about the ox axis of the camera coordinate system (the value of β is positive for clockwise rotation and negative for counterclockwise rotation, and is obtained from the gravity sensor in the camera); α is the target object imaging angle;
using the camera lens distortion parameters obtained by the camera calibration in step one, nonlinear distortion correction of the radial and tangential distortion errors of the image is carried out; the corrected ideal linear normalized coordinate values (x, y) are substituted into formula (1) to calculate the pixel coordinate values of each point of the corrected image, and the corrected pixel values are interpolated by the bilinear interpolation method to obtain the corrected image; the corrected image is preprocessed with computer vision and image processing techniques, including image binarization, image morphological operations and target object contour edge detection, to obtain the edge of the target object, from which the pixel value (u', v') of the geometric center point of the target object edge in contact with the ground is calculated;
step four: calculating the distance L between any point on the image of the target object to be detected and the mobile phone by using the camera internal parameters and the target point pixel values obtained in the steps and combining with a camera depth extraction model
Selecting the corresponding depth extraction model according to the relation between the camera rotation angle β and half of the camera vertical field angle θ, substituting the image center ordinate v0, the normalized focal length fy on the y-axis and the image resolution vmax obtained in step one, together with the ordinate pixel value v' of the target object to be measured, the camera rotation angle β and the shooting height h of the mobile phone camera, into the depth extraction model, calculating the target point depth value D, and calculating the vertical distance Tx from the target point to the optical axis direction:

D = h·tan F(v', β)   (11)
Tx = (u' − u0)·D/fx   (12)

From equations (11) and (12), the distance L between any point on the image and the shooting camera can be calculated:

L = √(D² + Tx²)   (13)
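For illustration, the computation in equations (10)-(13) can be written as a short Python sketch; the function names are ours, and the piecewise form of F and the additive δ term follow the reconstruction above rather than code from the patent:

    import math

    def imaging_angle(v, beta_deg, v0, fy, v_max, theta_deg, delta=0.0):
        # F(v, beta) from equation (10); delta is the residual nonlinear
        # distortion error term, assumed additive here.
        beta = math.radians(beta_deg)
        if theta_deg > beta_deg:  # field of view reaches above the horizon
            num = v - v0 + fy * math.tan(beta)
            den = v_max - v0 + fy * math.tan(beta)
            return 90.0 - (theta_deg + beta_deg) * num / den + delta
        # field of view entirely below the horizon
        return 90.0 - beta_deg + theta_deg - 2.0 * theta_deg * v / v_max + delta

    def point_distance(u, v, h, beta_deg, fx, fy, u0, v0, v_max, theta_deg):
        # Equations (11)-(13): depth D, lateral offset Tx, distance L.
        alpha = imaging_angle(v, beta_deg, v0, fy, v_max, theta_deg)
        D = h * math.tan(math.radians(alpha))   # eq. (11)
        Tx = (u - u0) * D / fx                  # eq. (12)
        return math.hypot(D, Tx)                # eq. (13)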
Compared with the prior art, the invention has the following beneficial effects:
(1) compared with other monocular vision passive ranging methods, the method does not need a large scene calibration field, and errors caused by data fitting are avoided;
(2) the established depth extraction model has equipment universality, camera rotation angles are introduced into the model, and for cameras of different models, the depth of any image point on a single picture can be calculated only after camera internal parameters are obtained through camera calibration for the first time;
(3) By verification, for short-range measurement within 0.5-2.6 m the average relative error of the depth measurement is 0.937%, and for 3-10 m the relative error is 1.71%; the method therefore has high accuracy in distance measurement.
Drawings
FIG. 1 is a flow chart of a ranging method according to the present invention;
FIG. 2 is a schematic view of a novel target;
FIG. 3 is a schematic diagram of an implementation flow of a corner detection algorithm;
FIG. 4 is a diagram of a geometric model taken with an upward camera view above horizontal;
FIG. 5 is a schematic diagram of a geometric model taken with an upward camera view below horizontal;
FIG. 6 is a schematic model of a camera shot projection geometry model;
FIG. 7 is a schematic diagram of coordinate systems in a pinhole model;
FIG. 8 is a schematic view of a camera stereo imaging system;
FIG. 9 is a diagram illustrating the relationship between the ordinate pixel value and the imaging angle of three types of equipment objects;
fig. 10 is a diagram illustrating the relationship between the object ordinate pixel value and the actual imaging angle at different camera rotation angles.
Detailed Description
In order to make the technical solution of the present invention clearer, the invention is described in detail below with reference to figs. 1 to 10. It should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The invention relates to an optimized depth extraction and passive distance measurement method based on monocular vision, which comprises the following steps:
firstly, calibrating a camera to obtain internal parameters and image resolution of the camera. The calibration adopts a Zhangyingyou calibration method, and an improved calibration model with a nonlinear distortion term is introduced to correct the internal parameters of the camera.
Firstly, the camera is calibrated to acquire its internal parameters and image resolution. The calibration adopts the Zhang Zhengyou calibration method, and the improved calibration model with a nonlinear distortion term is introduced to correct the camera internal parameters.
First, set the physical size of each pixel on the image plane to dx × dy (unit: mm), and let the origin of the image coordinate system (x, y) have coordinates (u0, v0) in the pixel coordinate system (u, v); (x, y) are the normalized coordinates of an image point in the actual image. Any pixel in the image satisfies the following relation between the two coordinate systems:

u = fx·x + u0, v = fy·y + v0   (1)

where fx and fy are the normalized focal lengths on the x-axis and y-axis (fx = f/dx, fy = f/dy). Any point Pc(Xc, Yc, Zc) in the camera coordinate system is projected onto the image coordinate system at (xc, yc, f); the plane of the image coordinate system is perpendicular to the optical axis (the z-axis), at a distance from the origin equal to the camera focal length f. According to the similar triangle principle:

xc = f·Xc/Zc, yc = f·Yc/Zc   (2)
The improved calibration model with the nonlinear distortion term is then introduced. The model includes radial distortion caused by lens shape defects and tangential distortion caused by eccentricity of the optical system. The radial distortion mathematical model is:

x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶), y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶)   (3)

where r² = x² + y², and (x', y') are the corrected normalized coordinate values of the ideal linear camera coordinate system without distortion terms. The radial distortion value depends on the position of the image point in the image and is larger at the image edges.
The tangential distortion mathematical model is:

x' = x + 2·p1·x·y + p2·(r² + 2x²), y' = y + p1·(r² + 2y²) + 2·p2·x·y   (4)

where k1, k2, k3 are the radial and p1, p2 the tangential distortion coefficients. The distortion correction function model obtained from equations (3) and (4) is:

x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x·y + p2·(r² + 2x²)
y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2y²) + 2·p2·x·y   (5)
the transformation from world coordinates to camera coordinates has the following relationship:
Pc=R·(PW-C)=R·PW+T (6)
Combining equations (1)-(6) and expressing them in homogeneous coordinates and matrix form gives:

Zc·[u, v, 1]ᵀ = Mint·Mext·[Xw, Yw, Zw, 1]ᵀ, Mint = [fx 0 u0; 0 fy v0; 0 0 1], Mext = [R T]   (7)

where Mint and Mext are the camera calibration intrinsic and extrinsic parameter matrices, respectively; the camera intrinsic parameters comprise the coordinates u0, v0 of the image coordinate system (x, y) origin in the pixel coordinate system (u, v) and the normalized focal lengths fx, fy on the x-axis and y-axis. Calibration of the mobile phone camera is implemented with Java combined with OpenCV, obtaining the internal parameters of the mobile phone camera, the camera lens distortion parameters, and the image resolution vmax, umax.
Secondly, the camera depth extraction model is established through acquisition of novel target images. The existing target is a black-and-white checkerboard whose squares have equal length and width. The novel target of the invention differs in that the row of squares closest to the camera has size d × d mm; the width of every row is fixed, while the length of each subsequent row is increased relative to the previous row. Let xi be the actual distance from the i-th corner point to the camera and yi the length of the squares of the i-th row; the difference Δdi between the lengths of adjacent rows is:

Δdi = yi+1 − yi   (8)

Assuming the relationship between the square length and the actual distance is y = f(x), it follows from formula (8) that:

f'(x) ≈ Δdi/(xi+1 − xi)   (9)
Through Pearson correlation analysis, the square length and the actual distance have a highly significant linear correlation (p < 0.01), with correlation coefficient r = 0.975; the derivative f'(x) of f(x) can then be calculated by the least squares method.
Therefore, when the row of squares of the target closest to the camera has size d × d mm (measurement accuracy is highest when d ranges from 30 to 60 mm), each row is fixed in width and its length increases by Δd = d·f'(x) mm. The novel target is shown in fig. 2.
Because of the perspective transformation that occurs when objects on the horizontal ground are photographed, common corner detection algorithms such as Harris and Shi-Tomasi have poor robustness, and detection fails when the camera rotates counterclockwise about the ox axis of the camera coordinate system. The invention therefore combines the growth-based checkerboard corner detection method proposed by Andreas Geiger et al. with the cornerSubPix() function provided by OpenCV to detect corner positions at the sub-pixel level; the algorithm is highly robust and extracts corners well even from pictures with a large degree of distortion.
the implementation flow of the corner detection algorithm is shown in fig. 3, and the sub-pixel level corner extraction step of the novel target of the present invention comprises:
1) searching angular points on the image according to similarity parameters of each pixel point in the image and the template, and positioning the angular point positions of the targets;
Firstly, two different corner templates are defined: one for corners parallel to the coordinate axes and one for corners rotated by 45°; each template consists of 4 filtering kernels {A, B, C, E} used for the convolution operation on the image. The similarity of each pixel to a corner is then calculated using the two corner templates:

c = max(s1^1, s2^1, s1^2, s2^2)   (14)

where fX^i denotes the convolution response of convolution kernel X (X = A, B, C, E) of template i (i = 1, 2) at a given pixel, and s1^i and s2^i denote the similarities to the two possible corner polarities of template i; calculating the similarity of each pixel in the image yields a corner similarity map. A non-maximum suppression algorithm is applied to the corner similarity image to obtain candidate points. The candidate points are then verified in a local n × n neighborhood by a gradient statistics method: sobel filtering is first performed on the local gray-level patch, a weighted orientation histogram (32 bins) is then calculated, and its two dominant modes γ1 and γ2 are found with the meanshift algorithm. According to the edge directions, a template T of the expected gradient strength is constructed; the cross-correlation of T with the image gradient magnitude (the ∗ symbol denotes the cross-correlation operator), multiplied by the corner similarity, is taken as the corner score, which is thresholded to obtain the initial corners.
2) Performing sub-pixel level fine extraction on the positions and the directions of the angular points;
Sub-pixel-level corner positioning is performed by applying the cornerSubPix() function in OpenCV, locating the corners to sub-pixel precision and thus obtaining a sub-pixel-level corner detection effect. To refine the edge orientation vectors, the mismatch between the orientation vectors and the image gradients in a neighborhood of each corner is minimized:

e(γi) = Σ_{p∈N} (gp^T·mi)², mi = [cos(γi) sin(γi)]^T   (15)

where gp is the image gradient value at a pixel p of the neighboring pixel set N matched with mode i (the calculation scheme follows Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot [C] // Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943).
3) Marking the corner points and outputting their sub-pixel coordinates: the checkerboard is grown and reconstructed according to an energy function, the corner points are marked and the sub-pixel corner coordinates are output.
According to the method proposed in "Geiger A, Moosmann F, Car Ö, et al. Automatic camera and range sensor calibration using a single shot [C] // Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 3936-3943", the checkerboard is reconstructed by minimizing the energy function:

E(x, y) = Ecorners(y) + Estruct(x, y)   (16)

where Ecorners is the negative of the total number of corner points of the current chessboard, and Estruct is the degree of matching between two adjacent corners and the predicted corner; the corner pixel values are output via OpenCV.
SPSS 22 is used to perform linear correlation analysis between the object imaging angle and the ordinate pixel value, and the Pearson correlation coefficients are output. Verification shows that the object ordinate pixel value and the actual imaging angle have a highly significant negative correlation (p < 0.01) under different device models and camera rotation angles. In addition, a significance test of the slope differences of the linear function between the object ordinate pixel value and the imaging angle under different device models and camera rotation angles shows that the differences are highly significant (p < 0.01), indicating that different device models and camera rotation angles have different depth extraction models.
According to the linear relation between the target object imaging angle α and the ordinate pixel value v, an abstract function is set and a spatial relation model α = F(v, β) is established, containing the three parameters target imaging angle α, ordinate pixel value v, and camera rotation angle β.
Under different device models and camera rotation angles, the ordinate pixel value of a photographed object and its imaging angle show a highly significant negative linear correlation, with differing slope and intercept; therefore set:
α=F(v,β)=a·v+b (17)
where the parameters a and b are related to the camera model and the camera rotation angle.
When α takes its minimum value α = αmin = 90° − θ − β, where θ is half of the vertical field angle of the camera, the object is projected to the bottom end of the picture and v = vmax (vmax is the number of effective pixels of the column coordinate of the camera CMOS or CCD image sensor); substituting into formula (17) gives:

90° − β − θ = a·vmax + b   (18)
When αmin + 2θ > 90°, i.e. θ > β, the upper edge of the camera's field of view is above the horizontal, and the camera shooting projection geometric model is as shown in fig. 4. As the ground plane recedes to infinity, α approaches 90° and v approaches v0 − fy·tanβ, where fy is the normalized focal length on the y-axis; the same holds when β is negative, i.e. when the camera rotates counterclockwise. Substituting into formula (17) gives:

90° = a·(v0 − fy·tanβ) + b   (19)
When αmin + 2θ < 90°, i.e. θ < β, the upper edge of the camera's field of view is below the horizontal, and the camera shooting projection geometric model is as shown in fig. 5. The imaging angle of a target at infinity on the ground plane reaches the maximum value αmax = αmin + 2θ = 90° − β + θ; the object is then projected to the highest point of the picture and v = 0. Substituting into formula (17) gives:

90° − β + θ = b   (20)
According to the pinhole camera construction principle, the tangent of half of the vertical field angle θ of the camera equals half of the side length of the camera CMOS or CCD image sensor divided by the camera focal length, so the value of θ is:

θ = arctan(LCMOS/(2f)) = arctan(vmax/(2fy))   (21)

where LCMOS is the side length of the camera CMOS or CCD image sensor. Combining equations (18)-(21), F(v, β) is:

F(v, β) = 90° − (θ + β)·(v − v0 + fy·tanβ)/(vmax − v0 + fy·tanβ) + δ, when θ > β
F(v, β) = 90° − β + θ − 2θ·v/vmax + δ, when θ < β   (10)
In formula (10), δ is the nonlinear distortion term error of the camera. Combining the shooting height h of the mobile phone camera, the mobile phone camera depth extraction model is established according to the trigonometric function principle:

D = h·tan F(v', β)   (11)
and thirdly, acquiring pixel values u and v of the target points through image acquisition of the target object to be detected. Acquiring images through a mobile phone camera, and establishing a projection geometric model as shown in fig. 6, wherein f is a camera focal length, theta is a half of a vertical field angle of the camera, h is a camera photographing height, beta is a rotation angle of the camera along an ox axis of a camera coordinate system, a clockwise rotation beta value of the camera is positive, an anticlockwise rotation beta value is negative, the beta value is acquired through a gravity sensor in the camera, and alpha is an imaging angle of a target object; combining the camera lens distortion parameters obtained by the first step of camera calibration, and performing nonlinear distortion correction on radial distortion and tangential distortion errors existing in the image; substituting the corrected ideal linear normalized coordinate values (x, y) into a formula (1), calculating pixel coordinate values of each point of the corrected image, and performing interpolation processing on the corrected pixel values by a bilinear interpolation method to obtain the corrected image; and (3) preprocessing the corrected image by adopting computer vision and image processing technologies, wherein the preprocessing comprises image binarization, image morphological operation and target object contour edge detection to obtain the edge of the target object, and further calculating the pixel value (u, v) of the geometric center point of the edge of the target object, which is in contact with the ground.
And fourthly, the distance L between any point on the image of the target object to be measured and the mobile phone is calculated using the camera internal parameters and target point pixel values obtained above, combined with the camera depth extraction model. The corresponding depth model is selected according to the relation between the camera rotation angle β and half of the camera vertical field angle θ; the image center ordinate v0, the normalized focal length fy on the y-axis and the image resolution vmax obtained in the preceding steps, together with the ordinate pixel value v' of the object to be measured, the camera rotation angle β and the shooting height h of the mobile phone camera, are substituted into the depth extraction model to calculate the target point depth value D.
FIG. 8 shows the camera stereo imaging system. Point P is the camera position; the straight line through points A and B is parallel to the image plane; A has coordinates (X, Y, Z) in the camera coordinate system and B has coordinates (X + Tx, Y, Z); they project onto the image plane at A'(xl, yl) and B'(xr, yr). From equation (2):

xl = f·X/Z, xr = f·(X + Tx)/Z, yl = yr = f·Y/Z   (22)
Combining equation (1) and equation (22), the horizontal parallax d of the two points A' and B', which have the same Y value and the same depth Z, can be derived:

d = ur − ul = fx·Tx/Z   (23)
thus, the camera focal length f, the image center point coordinates (u) are known0,v0) And the physical size d of each pixel in the x-axis direction on the image planexThen, combining with the depth extraction model, the vertical distance T from the target point to the optical axis direction is calculatedx
Figure GDA0003050723000000161
In the pinhole model, the transformation relationship between the camera coordinate systems is shown in FIG. 7. With the target depth D and the vertical distance Tx to the optical axis direction calculated, from equations (11) and (12) the distance L between any point on the image and the camera can be calculated:

L = √(D² + Tx²)   (13)
example 1
The monocular vision-based optimized depth extraction and passive ranging method of the invention is described in detail below, taking a Xiaomi 3 (MI 3) mobile phone as an example.
Firstly, calibrating a camera to acquire internal parameters and image resolution of the camera
A checkerboard with an 8 × 9 pattern of 20 × 20 squares is used as the experimental material for camera calibration. Twenty calibration board pictures are acquired at different angles with the Xiaomi 3 phone camera, and the Xiaomi 3 (MI 3) phone camera is calibrated with OpenCV according to the improved camera calibration model with the nonlinear distortion term.
First, the calibration board picture is read with the fin() function, and the image resolution of the first picture is obtained through cols and rows; then the sub-pixel corners in the calibration board pictures are extracted with the find4QuadCornerSubpix() function and marked with the drawChessboardCorners() function; the calibrateCamera() function is called to calibrate the camera, the obtained internal and external camera parameters are used to re-project the three-dimensional space points to obtain new projection points, and the error between the new and old projection points is calculated; finally the camera internal parameter matrix and the distortion parameters are output and saved.
The camera internal parameters obtained by calibration are: fx = 3486.5637, u0 = 1569.0383, fy = 3497.4652, v0 = 2107.9899; the image resolution is 3120 × 4208; and the camera lens distortion parameters are [0.0981, −0.1678, 0.0003, −0.0025, 0.0975].
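In outline, this calibration flow corresponds to the following Python/OpenCV sketch (the patent's implementation uses Java with OpenCV; the file names and board geometry here are placeholders):

    import glob
    import cv2
    import numpy as np

    pattern = (8, 9)  # inner corners of the calibration board, assumed
    square = 20.0     # square size
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts = [], []
    for name in glob.glob("calib_*.jpg"):  # the 20 board pictures
        gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        ok, corners = cv2.findChessboardCorners(gray, pattern)
        if not ok:
            continue
        ok, corners = cv2.find4QuadCornerSubpix(gray, corners, (9, 9))
        obj_pts.append(objp)
        img_pts.append(corners)

    # rms is the mean re-projection error; K holds fx, fy, u0, v0
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    print("fx, fy, u0, v0:", K[0, 0], K[1, 1], K[0, 2], K[1, 2])
    print("distortion [k1, k2, p1, p2, k3]:", dist.ravel())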
Secondly, establishing a camera depth extraction model through acquisition of novel target images
The invention uses a traditional 45 × 45 mm checkerboard calibration board as the initial experimental material for target design. To calculate the difference in length between adjacent squares, 6 groups of experiments are designed: the corner values of a traditional checkerboard with 45 × 45 mm squares are extracted, and the actual physical distance represented by a unit pixel between adjacent corners in the world coordinate system is calculated. To keep the ordinate pixel differences between corners approximately equal, the length yi of each square takes the values shown in Table 1.
TABLE 1 calculated Width of squares
Table 1 Computing width of each grid
(The values of Table 1 appear as an image in the original document and are not reproduced here.)
Pearson correlation analysis shows that the square length and the actual distance have a highly significant linear correlation (p < 0.01), with correlation coefficient r = 0.975, and the derivative f'(x) of f(x) calculated by the least squares method is 0.262. Therefore, when the row of squares of the target closest to the camera has size 45 × 45 mm, the width of each row is fixed and the length increment is Δd = 45 × 0.262 = 11.79 mm.
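The least-squares slope f'(x) can be reproduced with a first-degree polynomial fit; the arrays below are hypothetical stand-ins for the Table 1 measurements:

    import numpy as np

    # Hypothetical (xi, yi) pairs: corner distance to the camera (mm)
    # vs. square length (mm), standing in for the Table 1 data.
    x = np.array([300.0, 345.0, 402.0, 469.0, 548.0, 641.0])
    y = np.array([45.0, 57.0, 71.0, 88.0, 109.0, 134.0])

    slope, intercept = np.polyfit(x, y, 1)  # least-squares line y = f(x)
    print(slope)  # plays the role of f'(x), ~0.26 here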
The corner points of the novel target are then extracted by the corner extraction algorithm of the concrete implementation steps described above.
the method selects three different types of smart phones, namely millet, Huashi and iPhone, as image acquisition equipment, and the rotation angle beta of a camera is { -10 degrees, 0 degrees, 10 degrees, 20 degrees and 30 degrees. The corner detection algorithm is used for acquiring data and performing function fitting on the relationship, fig. 9 is the relationship between the ordinate pixel values of three different types of smart phones and the object imaging angles when beta is 10 degrees, fig. 10 is the relationship between the ordinate pixel values and the object imaging angles under different camera rotation angles,
For all camera devices and rotation angles tested, the object imaging angle decreases as the ordinate pixel value increases; because of differences in device model and camera rotation angle, the pixel value and the imaging angle follow different linear functions. Linear correlation analysis between the object imaging angle and the ordinate pixel value is performed with SPSS 22, and the output Pearson correlation coefficients r are shown in Table 2.
TABLE 2 correlation coefficient of object ordinate pixel value and imaging angle
Table 2 Pearson correlation coefficient of image ordinate pixel values and actual imaging angles
(The values of Table 2 appear as an image in the original document and are not reproduced here.)
Note: indicates extreme significance (p < 0.01).
Note:**represents very significant correlation(p<0.01).
Verification shows that, for a given device model and camera rotation angle, the object ordinate pixel value and the actual imaging angle have a highly significant negative correlation (p < 0.01), with correlation coefficient r > 0.99. In addition, a significance test of the slope differences of the linear function between the object ordinate pixel value and the imaging angle across device models and camera rotation angles shows that the differences are highly significant (p < 0.01), indicating that different device models and camera rotation angles require different depth extraction models.
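The same test can be reproduced outside SPSS; a minimal Python sketch with hypothetical data in place of the measured pixel/angle pairs:

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical ordinate pixel values and measured imaging angles
    v_pixels = np.array([3900.0, 3500.0, 3100.0, 2700.0, 2300.0])
    angles = np.array([61.0, 67.0, 73.0, 79.0, 85.0])

    r, p = pearsonr(v_pixels, angles)
    print(r, p)  # expect a strong negative r with p < 0.01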
According to the depth extraction model of this embodiment, substituting the internal parameters of the Xiaomi 3 phone camera into formula (10) gives:

θ = arctan(vmax/(2fy)) = arctan(4208/(2 × 3497.4652)) ≈ 31.03°
α = F(v', β) = 90° − (31.03° + β)·(v' − 2107.9899 + 3497.4652·tanβ)/(2100.0101 + 3497.4652·tanβ)   (24)
The device-specific depth extraction model obtained according to the trigonometric function principle is then:

D = h·tan F(v', β)   (25)
Thirdly, the pixel values u' and v' of the target point are acquired through image acquisition of the target object to be measured.
A Xiaomi 3 (MI 3) phone camera is used as the picture-taking device; pictures are taken on a camera tripod, the measured height h from the camera to the ground is 305 mm, and the camera rotation angle β is 0°.
carrying out nonlinear distortion correction on radial distortion and tangential distortion errors existing in the image;
According to the camera lens distortion parameters obtained by the calibration in the first step, [0.0981, −0.1678, 0.0003, −0.0025, 0.0975] (in the order k1, k2, p1, p2, k3), the corrected ideal linear normalized coordinate values are calculated according to formula (5):

x' = x·(1 + 0.0981r² − 0.1678r⁴ + 0.0975r⁶) + 0.0006·x·y − 0.0025·(r² + 2x²)
y' = y·(1 + 0.0981r² − 0.1678r⁴ + 0.0975r⁶) + 0.0003·(r² + 2y²) − 0.0050·x·y   (26)
calculating pixel coordinate values of each point of the corrected image by combining the formulas (1) and (2), and obtaining the corrected image through bilinear interpolation processing;
Taking a cuboid box placed on the horizontal ground as an example, its depth and distance are measured. The acquired image is first binarized, then edge detection is performed with the Canny operator and the target object contour is extracted. The extracted pixel value of the center point of the bottom edge of the cuboid box is (1851.23, 3490).
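A compact sketch of this correction and preprocessing pipeline (intrinsics and distortion coefficients copied from the calibration above; the file name and the bottom-center heuristic are ours):

    import cv2
    import numpy as np

    K = np.array([[3486.5637, 0.0, 1569.0383],
                  [0.0, 3497.4652, 2107.9899],
                  [0.0, 0.0, 1.0]])
    dist = np.array([0.0981, -0.1678, 0.0003, -0.0025, 0.0975])

    img = cv2.imread("box.jpg")                  # hypothetical box picture
    und = cv2.undistort(img, K, dist)            # nonlinear distortion correction
    gray = cv2.cvtColor(und, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(bw, 50, 150)               # target contour edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Ground-contact point: bottom-edge center of the largest contour
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        u_p, v_p = x + w / 2.0, y + h            # ~(1851.23, 3490) here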
And fourthly, calculating the distance L between any point on the image of the target object to be detected and the mobile phone by using the camera internal parameters and the target point pixel values obtained in the previous step and combining with a camera depth extraction model.
Substituting the camera internal parameters, the camera shooting height h, the rotation angle β and the ordinate pixel value v' of the bottom-edge center point of the cuboid box into formula (24) gives the actual imaging angle of the target object, 69.58°. The target point depth value D (in mm) is then calculated according to the trigonometric function principle:

D = 305 × tan 69.58° = 819.21   (27)
will be the parameter fx,u0D and substituting the horizontal coordinate pixel value u of the bottom edge center point of the cuboid box into a formula (12) can calculate the vertical distance T from the geometric center point of the target object to the optical axis directionx
Figure GDA0003050723000000201
Therefore, the distance L from the rectangular box to the ground projection point of the shooting camera is as follows:
Figure GDA0003050723000000202
Measured with a tape, the distance between the cuboid box and the ground projection point of the camera is 827 mm, so the relative error of this distance measurement is 0.62%.
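The worked numbers in equations (27)-(29) can be checked directly with a few lines of Python:

    import math

    h, fx, u0 = 305.0, 3486.5637, 1569.0383
    u, alpha = 1851.23, 69.58          # bottom-edge center abscissa, imaging angle

    D = h * math.tan(math.radians(alpha))    # ~819.2 mm, eq. (27)
    Tx = (u - u0) * D / fx                   # ~66.3 mm, eq. (28)
    L = math.hypot(D, Tx)                    # ~821.9 mm, eq. (29)
    print(D, Tx, L, abs(L - 827.0) / 827.0)  # relative error ~0.62%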

Claims (1)

1. A monocular vision-based optimized depth extraction and passive ranging method is characterized by comprising the following steps:
the method comprises the following steps: calibrating a mobile phone camera to obtain internal parameters and image resolution of the camera
Correcting the internal parameters of the camera by adopting a Zhangyingyou calibration method and introducing an improved calibration model with a nonlinear distortion term
first, the physical size of each pixel on the image plane is set to dx × dy, the origin of the image coordinate system (x, y) has coordinates (u0, v0) in the pixel coordinate system (u, v), and (x, y) are the normalized coordinates of an image point in the actual image; any pixel in the image satisfies the following relation between the two coordinate systems:

u = fx·x + u0, v = fy·y + v0   (1)

where fx and fy are the normalized focal lengths on the x-axis and y-axis; any point Pc(Xc, Yc, Zc) in the camera coordinate system is projected onto the image coordinate system at (xc, yc, f), the plane of the image coordinate system being perpendicular to the optical axis (the z-axis) at a distance from the origin equal to the camera focal length f; according to the similar triangle principle:

xc = f·Xc/Zc, yc = f·Yc/Zc   (2)
the improved calibration model with the nonlinear distortion term is introduced; the calibration model comprises radial distortion caused by lens shape defects and tangential distortion caused by eccentricity of the optical system, and the radial distortion mathematical model is:

x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶), y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶)   (3)

where r² = x² + y², and (x', y') are the corrected normalized coordinate values of the ideal linear camera coordinate system without distortion terms; the radial distortion value depends on the position of the image point in the image and is larger at the image edges,
the tangential distortion mathematical model is:

x' = x + 2·p1·x·y + p2·(r² + 2x²), y' = y + p1·(r² + 2y²) + 2·p2·x·y   (4)

where k1, k2, k3 are the radial and p1, p2 the tangential distortion coefficients; the distortion correction function model obtained from equations (3) and (4) is:

x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x·y + p2·(r² + 2x²)
y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2y²) + 2·p2·x·y   (5)
the transformation from world coordinates to camera coordinates has the following relationship:
Pc=R·(PW-C)=R·PW+T (6)
combining equations (1)-(6) and expressing them in homogeneous coordinates and matrix form gives:

Zc·[u, v, 1]ᵀ = Mint·Mext·[Xw, Yw, Zw, 1]ᵀ, Mint = [fx 0 u0; 0 fy v0; 0 0 1], Mext = [R T]   (7)

where Mint and Mext are the camera calibration intrinsic and extrinsic parameter matrices, respectively; the camera intrinsic parameters comprise the coordinates u0, v0 of the image coordinate system (x, y) origin in the pixel coordinate system (u, v) and the normalized focal lengths fx, fy on the x-axis and y-axis; calibration of the mobile phone camera is implemented with Java combined with OpenCV, obtaining the internal parameters of the mobile phone camera, the camera lens distortion parameters, and the image resolution vmax, umax;
Step two: establishing a depth extraction model
Setting an abstract function according to the linear relation between the target object imaging angle alpha and the ordinate pixel value v, establishing a spatial relation model containing three parameters of the target object imaging angle alpha, the ordinate pixel value v and the camera rotation angle beta, namely alpha is F (v, beta),
under different models of equipment and camera rotation angles, the vertical coordinate pixel value of a shot object and an imaging angle are in extremely obvious negative linear correlation relationship, the slope and intercept of the linear relationship are different, so that the following conditions are set:
α=F(v,β)=a·v+b (17)
wherein the parameters a and b are related to the camera model and the camera rotation angle;
when α takes its minimum value α = αmin = 90° − θ − β, where θ is half of the vertical field angle of the camera, the object is projected to the bottom end of the picture and v = vmax, vmax being the number of effective pixels of the column coordinate of the camera CMOS or CCD image sensor; substituting into formula (17) gives:

90° − β − θ = a·vmax + b   (18)
when αmin + 2θ > 90°, i.e. θ > β, the upper edge of the camera's field of view is above the horizontal; as the ground plane recedes to infinity, α approaches 90° and v approaches v0 − fy·tanβ, where fy is the normalized focal length on the y-axis; the same holds when β is negative, i.e. when the camera rotates counterclockwise; substituting into formula (17) gives:

90° = a·(v0 − fy·tanβ) + b   (19)
when αmin + 2θ < 90°, i.e. θ < β, the upper edge of the camera's field of view is below the horizontal, and the imaging angle of a target at infinity on the ground plane reaches the maximum value αmax = αmin + 2θ = 90° − β + θ; the object is then projected to the highest point of the picture and v = 0; substituting into formula (17) gives:

90° − β + θ = b   (20)
according to the pinhole camera construction principle, the tangent of half of the vertical field angle θ of the camera equals half of the side length of the camera CMOS or CCD image sensor divided by the camera focal length, so the value of θ is:

θ = arctan(LCMOS/(2f)) = arctan(vmax/(2fy))   (21)

where LCMOS is the side length of the camera CMOS or CCD image sensor; combining equations (18)-(21), F(v, β) is:

F(v, β) = 90° − (θ + β)·(v − v0 + fy·tanβ)/(vmax − v0 + fy·tanβ) + δ, when θ > β
F(v, β) = 90° − β + θ − 2θ·v/vmax + δ, when θ < β   (10)
in formula (10), δ is the nonlinear distortion term error of the camera; combining the shooting height h of the mobile phone camera, the mobile phone camera depth extraction model is established according to the trigonometric function principle:

D = h·tan F(v', β)   (11)
step three: acquiring pixel values u' and v' of a target point by acquiring an image of the target object to be detected;
in the step of image acquisition of the target object to be measured, the method further comprises nonlinear distortion correction and preprocessing of the image of the target object to be measured, namely:
images are acquired through the mobile phone camera and a projection geometric model is established, where f is the camera focal length, θ is half of the vertical field angle of the camera, h is the camera shooting height, and β is the rotation angle of the camera about the ox axis of the camera coordinate system, the value of β being positive for clockwise rotation and negative for counterclockwise rotation and being obtained from the gravity sensor in the camera; α is the target object imaging angle;
using the camera lens distortion parameters obtained by the camera calibration in step one, nonlinear distortion correction of the radial and tangential distortion errors of the image is carried out; the corrected ideal linear normalized coordinate values (x, y) are substituted into formula (1) to calculate the pixel coordinate values of each point of the corrected image, and the corrected pixel values are interpolated by the bilinear interpolation method to obtain the corrected image; the corrected image is preprocessed with computer vision and image processing techniques, including image binarization, image morphological operations and target object contour edge detection, to obtain the edge of the target object, from which the pixel value (u', v') of the geometric center point of the target object edge in contact with the ground is calculated;
Step four: calculating the distance L between any point on the image of the target object to be measured and the mobile phone, using the camera intrinsic parameters and the target-point pixel values obtained in the preceding steps together with the camera depth extraction model.
The appropriate depth extraction model is selected according to the relation between the camera rotation angle β and θ, half of the camera's vertical field angle. The camera intrinsic parameters obtained in step one, namely the image-center ordinate pixel value v₀, the normalized focal length f_y on the y-axis, and the image resolution v_max, are substituted into the depth extraction model together with the ordinate pixel value v′ of the target object, the camera rotation angle β, and the shooting height h of the mobile phone camera, yielding the depth value D of the target point; the perpendicular distance T_x from the target point to the optical axis direction is then calculated:
T_x = D·(u′ − u₀)/f_x  (12)

where u₀ is the abscissa of the image center and f_x is the normalized focal length on the x-axis.
From equations (11) and (12), the distance L between an arbitrary point on the image and the capturing camera can be calculated:
L = √(D² + T_x²)  (13)
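Putting steps two through four together, the final range follows directly from D and the target's abscissa. This sketch assumes the reconstructed forms of equations (12) and (13) above; f_x and u₀ are taken by symmetry with f_y and v₀, and the function name is ours.

    import math

    def point_distance(D, u_prime, u0, fx):
        # T_x = D * (u' - u0) / fx : perpendicular offset from the
        # optical axis direction, per eq. (12) as reconstructed.
        t_x = D * (u_prime - u0) / fx
        # L = sqrt(D^2 + T_x^2) : straight-line ground distance, eq. (13).
        return math.hypot(D, t_x)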
CN201810918876.2A 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method Active CN109146980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810918876.2A CN109146980B (en) 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810918876.2A CN109146980B (en) 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method

Publications (2)

Publication Number Publication Date
CN109146980A CN109146980A (en) 2019-01-04
CN109146980B true CN109146980B (en) 2021-08-10

Family

ID=64793074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810918876.2A Active CN109146980B (en) 2018-08-12 2018-08-12 Monocular vision based optimized depth extraction and passive distance measurement method

Country Status (1)

Country Link
CN (1) CN109146980B (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109883654B (en) * 2019-01-25 2021-11-09 武汉精立电子技术有限公司 Checkerboard graph for OLED (organic light emitting diode) sub-pixel positioning, generation method and positioning method
CN111508027B (en) * 2019-01-31 2023-10-20 杭州海康威视数字技术股份有限公司 Method and device for calibrating external parameters of camera
JP7051740B2 (en) * 2019-03-11 2022-04-11 株式会社東芝 Image processing equipment, ranging equipment, methods and programs
CN109977853B (en) * 2019-03-25 2023-07-14 太原理工大学 Underground worker panoramic monitoring method based on multiple identification devices
CN110065075B (en) * 2019-05-29 2021-11-02 哈尔滨工业大学 Space cell robot external state sensing method based on vision
CN110672020A (en) * 2019-06-14 2020-01-10 浙江农林大学 Stand tree height measuring method based on monocular vision
CN110288656A (en) * 2019-07-01 2019-09-27 太原科技大学 A kind of object localization method based on monocular cam
CN110374045B (en) * 2019-07-29 2021-09-28 哈尔滨工业大学 Intelligent deicing method
CN110555888B (en) * 2019-08-22 2022-10-04 浙江大华技术股份有限公司 Master-slave camera calibration method, storage device, computer equipment and system thereof
CN110515884B (en) * 2019-09-04 2023-02-03 西安工业大学 Construction site reinforcing bar range unit based on image analysis
CN110728638A (en) * 2019-09-25 2020-01-24 深圳疆程技术有限公司 Image distortion correction method, vehicle machine and vehicle
CN110737942B (en) * 2019-10-12 2023-05-02 清华四川能源互联网研究院 Underwater building model building method, device, equipment and storage medium
CN111192235B (en) * 2019-12-05 2023-05-26 中国地质大学(武汉) Image measurement method based on monocular vision model and perspective transformation
CN111046843B (en) * 2019-12-27 2023-06-20 华南理工大学 Monocular ranging method in intelligent driving environment
CN111250406B (en) * 2020-03-16 2023-11-14 科为升视觉技术(苏州)有限公司 Automatic placement method and system for PCB detection assembly line based on visual positioning
FR3111222B1 (en) 2020-06-06 2023-04-28 Olivier Querbes Generation of scaled 3D models from 2D images produced by a monocular imaging device
CN111623776B (en) * 2020-06-08 2022-12-02 昆山星际舟智能科技有限公司 Method for measuring distance of target by using near infrared vision sensor and gyroscope
CN112001880B (en) * 2020-06-29 2024-01-05 浙江大学 Method and device for detecting characteristic parameters of planar member
CN111798444B (en) * 2020-07-17 2023-06-27 太原理工大学 Unmanned workshop steel pipe length measurement method based on image distortion correction color separation processing
CN112135125A (en) * 2020-10-28 2020-12-25 歌尔光学科技有限公司 Camera internal reference testing method, device, equipment and computer readable storage medium
CN112489116A (en) * 2020-12-07 2021-03-12 青岛科美创视智能科技有限公司 Method and system for estimating target distance by using single camera
CN112798812B (en) * 2020-12-30 2023-09-26 中山联合汽车技术有限公司 Target speed measuring method based on monocular vision
CN112686961B (en) * 2020-12-31 2024-06-04 杭州海康机器人股份有限公司 Correction method and device for calibration parameters of depth camera
CN112528974B (en) * 2021-02-08 2021-05-14 成都睿沿科技有限公司 Distance measuring method and device, electronic equipment and readable storage medium
CN113091607A (en) * 2021-03-19 2021-07-09 华南农业大学 Calibration-free space point coordinate measuring method for single smart phone
CN113034565B (en) * 2021-03-25 2023-07-04 奥比中光科技集团股份有限公司 Depth calculation method and system for monocular structured light
CN113034618A (en) * 2021-04-20 2021-06-25 延锋伟世通汽车电子有限公司 Method and system for measuring imaging distance of automobile head-up display
TWM619722U (en) * 2021-04-22 2021-11-11 滿拓科技股份有限公司 System for detecting depth and horizontal distance of object
CN113137920B (en) * 2021-05-19 2022-09-23 重庆大学 Underwater measurement equipment and underwater measurement method
CN113344906B (en) * 2021-06-29 2024-04-23 阿波罗智联(北京)科技有限公司 Camera evaluation method and device in vehicle-road cooperation, road side equipment and cloud control platform
CN113686314B (en) * 2021-07-28 2024-02-27 武汉科技大学 Monocular water surface target segmentation and monocular distance measurement method for shipborne camera
CN114018212B (en) * 2021-08-03 2024-05-14 广东省国土资源测绘院 Spherical camera monocular ranging-oriented pitch angle correction method and system
CN113838150B (en) * 2021-08-30 2024-03-19 上海大学 Moving target three-dimensional track tracking method based on electrohydraulic adjustable focus lens
CN113888640B (en) * 2021-09-07 2024-02-02 浙江大学 Improved calibration method suitable for unmanned aerial vehicle pan-tilt camera
CN113720299B (en) * 2021-09-18 2023-07-14 兰州大学 Ranging method based on sliding scene of three-dimensional camera or monocular camera on guide rail
CN114219850B (en) * 2021-11-16 2024-05-10 英博超算(南京)科技有限公司 Vehicle ranging system applying 360-degree panoramic looking-around technology
CN114200532B (en) * 2021-12-14 2024-05-14 中国航发南方工业有限公司 Device and method for detecting residues in casting case of aero-engine
CN114509048B (en) * 2022-01-20 2023-11-07 中科视捷(南京)科技有限公司 Overhead transmission line space three-dimensional information acquisition method and system based on monocular camera
CN114684202B (en) * 2022-06-01 2023-03-10 浙江大旗新能源汽车有限公司 Intelligent system for automatically driving vehicle and integrated control method thereof
CN115507752B (en) * 2022-09-29 2023-07-07 苏州大学 Monocular vision ranging method and system based on parallel environment elements

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9741118B2 (en) * 2013-03-13 2017-08-22 Fotonation Cayman Limited System and methods for calibration of an array camera
CN104034514A (en) * 2014-06-12 2014-09-10 中国科学院上海技术物理研究所 Large visual field camera nonlinear distortion correction device and method
CN104331896A (en) * 2014-11-21 2015-02-04 天津工业大学 System calibration method based on depth information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
3-D position sensing using a passive monocular vision system; J. Cardillo et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 1991-12-31; Vol. 13, No. 8; pp. 809-813 *
Research on Object Recognition and Localization Based on Monocular Vision (基于单目视觉的目标识别与定位研究); Feng Chun; 2014-12-15; No. 12; pp. I138-48 *

Also Published As

Publication number Publication date
CN109146980A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109146980B (en) Monocular vision based optimized depth extraction and passive distance measurement method
CN109035320B (en) Monocular vision-based depth extraction method
CN109269430B (en) Multi-standing-tree breast height diameter passive measurement method based on deep extraction model
CN109255818B (en) Novel target and extraction method of sub-pixel level angular points thereof
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
US10339390B2 (en) Methods and apparatus for an imaging system
Alismail et al. Automatic calibration of a range sensor and camera system
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN110378969B (en) Convergent binocular camera calibration method based on 3D geometric constraint
CN107977996B (en) Space target positioning method based on target calibration positioning model
CN112200203B (en) Matching method of weak correlation speckle images in oblique field of view
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN112270719B (en) Camera calibration method, device and system
CN108362205B (en) Space distance measuring method based on fringe projection
CN109974618B (en) Global calibration method of multi-sensor vision measurement system
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN113119129A (en) Monocular distance measurement positioning method based on standard ball
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN111383264A (en) Positioning method, positioning device, terminal and computer storage medium
CN107680035B (en) Parameter calibration method and device, server and readable storage medium
CN113012234A (en) High-precision camera calibration method based on plane transformation
CN115854866A (en) Optical target three-dimensional measurement system and method, electronic equipment and storage medium
CN113963067B (en) Calibration method for calibrating large-view-field visual sensor by using small target
CN113808070B (en) Binocular digital speckle image related parallax measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant