WO2018049949A1 - Distance Estimation Method Based on a Handheld Light Field Camera - Google Patents

Distance Estimation Method Based on a Handheld Light Field Camera

Info

Publication number
WO2018049949A1
Authority
WO
WIPO (PCT)
Prior art keywords
main lens
distance
calibration point
light field
light
Prior art date
Application number
PCT/CN2017/096573
Other languages
English (en)
French (fr)
Inventor
金欣
陈艳琴
戴琼海
Original Assignee
清华大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学深圳研究生院
Publication of WO2018049949A1 publication Critical patent/WO2018049949A1/zh
Priority to US16/212,344 priority Critical patent/US10482617B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/557Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/957Light-field or plenoptic cameras or camera modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Definitions

  • The invention relates to the field of computer vision and digital image processing, and in particular to a distance estimation method based on a handheld light field camera.
  • Depth estimation methods based on a single cue mainly rely on the principle of stereo matching: by searching for corresponding regions across the extracted sub-aperture images and analyzing their correlation, a single cue usable for depth estimation is obtained.
  • Depth estimation methods based on multi-cue fusion seek to estimate depth by extracting several depth-related cues from the light field image data and fusing them effectively, for example combining the consistency and the blur of the sub-aperture images.
  • Because the baseline of a light field camera is small and the resolution of the sub-aperture images is low, both classes of algorithms can only produce low-precision depth maps, and the high complexity of the algorithms themselves makes depth estimation inefficient.
  • To address this, the present invention provides a distance estimation method based on a handheld light field camera that is both efficient and accurate.
  • The invention discloses a distance estimation method based on a handheld light field camera, comprising the following steps:
  • S1: extracting the parameters of the light field camera, including the focal length, radius of curvature, pupil diameter, and center thickness of the main lens of the light field camera, and the focal length of the microlenses of the light field camera;
  • S4: inputting the camera parameters extracted in step S1, the distance between the main lens and the microlens array obtained in step S3, and the recorded imaging diameter of the calibration point on the refocused image into the mathematical model of light propagation, which outputs the distance of the calibration point.
  • There is no occlusion between the reference plane set in step S2 and the calibration points, and the images of the calibration points do not overlap.
  • Step S3 specifically comprises refocusing the captured light field image onto the reference plane using equation (1) [shown as an image in the original], where L denotes the light field image, L_a denotes the refocused image, the subscript a indicates the focal plane in object space (in this step, the reference plane), and the subscript j denotes the vertical coordinate of a microlens.
  • The step of obtaining the distance between the main lens and the microlens array in step S3 specifically comprises computing, by ray tracing, the intersections F_i = m_i × f on the F_u plane (equation (2)), where the distance between adjacent intersections on the F_u plane is the baseline of the light field camera, f is the focal length of the main lens, and m_i is the slope of a ray between the sensor of the light field camera and the main lens;
  • y'_j denotes the ordinate of the object on the reference plane, and d_out denotes the distance between the reference plane and the main lens;
  • y_j denotes the ordinate of the center of the microlens with subscript j.
  • The mathematical model of light propagation in step S4 specifically comprises a light-incidence part and a light-exit part.
  • The propagation model of the light-incidence part specifically comprises: the light emitted by the calibration point enters the main lens at an angle satisfying relation (5) [shown as an image in the original], where d'_out denotes the axial distance between the calibration point and the center of the main lens, R denotes the radius of curvature of the main lens, D denotes the pupil radius of the main lens, and T denotes the center thickness of the main lens; after entering the main lens, the light is refracted, satisfying equation (6) [shown as an image in the original], where n_1 denotes the refractive index of the main lens, ψ denotes the refraction angle, and θ_1 satisfies relation (7) [shown as an image in the original].
  • The propagation model of the light-exit part specifically comprises: after refraction inside the main lens, the light emitted by the calibration point reaches the exit position (p, q) of the main lens and exits from it, satisfying equation (8): n_1 sin(θ_1 − ψ + θ_2) = sin ω.
  • After leaving the main lens, the light emitted by the calibration point can be imaged in three ways: focused behind the sensor, between the sensor and the microlens array, or between the microlens array and the main lens, where:
  • D_1 is the imaging diameter of the calibration point on the refocused image recorded in step S3, f_x is the focal length of the microlens, and d_in is the distance between the main lens and the microlens array obtained in step S3.
  • The beneficial effects of the present invention are as follows: in the distance estimation method based on a handheld light field camera, the captured light field image is refocused onto a reference plane at a known distance; the refocused image is equivalent to a light field image captured after refocusing the light field camera. Because the distance of the subject (the reference plane) is known, the distance between the main lens and the microlens array of the light field camera can then be determined, and every unfocused object is equivalent to being imaged on the sensor by this refocused light field camera.
  • The calibration points are placed on the (unfocused) objects whose distances are to be estimated, and the relationship between the imaging diameter of a calibration point on the refocused image and the distance of the calibration point is analyzed to estimate the distance of the object.
  • The absolute distances of all objects can be estimated from a single refocusing, which is highly efficient and greatly improves the accuracy of distance estimation; the method therefore has good application prospects in industrial ranging.
  • FIG. 1 is a flow chart of a distance estimation method based on a handheld light field camera according to a preferred embodiment of the present invention.
  • FIG. 3 is the light-exit model of a preferred embodiment of the present invention.
  • FIG. 5 is the refocusing ray-tracing model of a preferred embodiment of the present invention.
  • A preferred embodiment of the present invention discloses a distance estimation method based on a handheld light field camera, comprising the following steps:
  • S1: extracting the parameters of the light field camera, including the focal length, radius of curvature, pupil diameter, and center thickness of the main lens, and the focal length of the microlenses of the light field camera;
  • the focal length of the main lens and the focal length of the microlenses are used to obtain the distance between the main lens and the microlens array;
  • the radius of curvature, pupil diameter, and center thickness of the main lens, together with the focal length of the microlenses, are used in the mathematical model of light propagation;
  • the distance between the reference plane and the main lens is used for refocusing, and afterwards for obtaining the distance between the main lens and the microlens array.
  • In the refocusing formula, L denotes the light field image, L_a denotes the refocused image, the subscript a indicates the focal plane in object space (in this step, the reference plane), y = {x, y} denotes the position information on the light field image, v = {u, v} denotes the direction information on the light field image, and the subscript j denotes the vertical coordinate of a microlens, starting from 0.
  • The image obtained after one refocusing is equivalent to a light field image captured after refocusing the light field camera.
  • The image of any other unfocused object on the refocused image is equivalent to an image captured by this refocused light field camera.
  • The plane on which these unfocused objects lie is the plane a' shown in FIG. 4; the imaging diameter on the sensor of the light emitted from an object on plane a', i.e. the imaging diameter D_1 on the refocused image, is recorded.
  • The refocusing plane is the chosen reference plane (plane a), and the distance d_out between this plane and the main lens 1 is known.
  • Ray tracing shows that the distance between the intersections on the F_u plane is the baseline of the light field camera.
  • The coordinates of the intersections on the F_u plane are computed as F_i = m_i × f, where f is the focal length of the main lens 1 and m_i is the slope of a ray between the sensor 3 and the main lens 1; m_i can be found as long as the focal length and diameter of the microlenses and the number of pixels under each microlens are known. The slope k_i of the ray emitted by the object in object space is then computed by equation (3) [shown as an image in the original], where y_j denotes the ordinate of the center of the microlens with subscript j.
  • Step S4: inputting the camera parameters extracted in step S1, the distance between the main lens and the microlens array obtained in step S3, and the recorded imaging diameter of the calibration point on the refocused image into the mathematical model of light propagation, which outputs the distance of the calibration point.
  • S41: the mathematical model of light propagation is divided into two parts: the light-incidence part and the light-exit part.
  • d'_out denotes the axial distance between the calibration point and the center of the main lens of the light field camera; R, D, and T are the parameters of the main lens, denoting its radius of curvature, pupil diameter, and center thickness, respectively.
  • n_1 denotes the refractive index of the main lens, ψ is the refraction angle, and θ_1 satisfies relation (7) [shown as an image in the original].
  • S42: there are three imaging modes after the light exits, as shown by the three rays 4A, 4B, and 4C in FIG. 4: the light is focused behind the sensor 3 (4A), between the sensor 3 and the microlens array 2 (4B), or between the microlens array 2 and the main lens 1 (4C).
  • D_1 is the imaging diameter of the calibration point recorded in step S3; f_x is the focal length of the microlens; d is the distance between the image-space focal plane of the calibration point and the sensor.
  • The exit position (p, q) of the light lies on the surface of the main lens and therefore satisfies (R − T/2 + p)² + q² = R².
  • Equation (13) can then be used to approximate d.
  • The distance estimation method based on a handheld light field camera of the preferred embodiment of the present invention first refocuses the captured light field image onto a reference plane at a known distance; this refocused image is equivalent to a light field image captured after refocusing the camera.
  • Because the distance of the subject (the reference plane) is known, the distance between the main lens of the light field camera and the microlens array can then be determined, and all other unfocused objects are equivalent to being imaged by this refocused camera.
  • The calibration points are placed on the (unfocused) objects whose distances are to be estimated, and the distance of an object is estimated by analyzing the relationship between the imaging diameter of its calibration point on the refocused image and the distance of the calibration point.
  • The specific implementation steps are: extract the light field camera parameters, including the focal length, radius of curvature, pupil diameter, and center thickness of the main lens, and the focal length of the microlenses; set a reference plane and calibration points, with the calibration points placed on the objects whose distances are to be estimated, and obtain the distance between the reference plane and the main lens; refocus the captured light field image onto the reference plane to obtain the distance between the main lens and the microlens array, and record the imaging diameter of each calibration point on the refocused image; after analyzing the imaging system of the light field camera, propose a mathematical model of light propagation; input the camera parameters, the imaging diameter, and the obtained distance between the main lens and the microlens array into the model.
  • The model outputs the distance of the calibration point; since the calibration point lies on the object, the estimated distance of the calibration point is the distance of the object.
  • The distance estimation method of the present invention can estimate the absolute distance between every object and the main lens of the light field camera from a single refocusing; it is efficient and accurate, which gives the method great application prospects in industrial ranging.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention discloses a distance estimation method based on a handheld light field camera, comprising: S1: extracting the parameters of the light field camera; S2: setting a reference plane and calibration points, where the calibration points are placed on the objects whose distances are to be estimated, and obtaining the distance between the reference plane and the main lens; S3: refocusing the captured light field image onto the reference plane to obtain the distance between the main lens and the microlens array of the light field camera, while recording the imaging diameter of each calibration point on the refocused image; S4: inputting the camera parameters extracted in step S1, the distance between the main lens and the microlens array obtained in step S3, and the recorded imaging diameter of the calibration point on the refocused image into a mathematical model of light propagation, which outputs the distance of the calibration point. The distance estimation method based on a handheld light field camera proposed by the invention is efficient and highly accurate.

Description

Distance Estimation Method Based on a Handheld Light Field Camera
Technical Field
The present invention relates to the fields of computer vision and digital image processing, and in particular to a distance estimation method based on a handheld light field camera.
Background
The application of light field cameras in computer vision has attracted wide attention from researchers. By inserting a microlens array between the main lens and the sensor of a conventional camera, a light field camera records both the position and the direction information of objects. From a single raw light field image captured by a light field camera, one can not only perform digital refocusing, viewpoint synthesis, and depth-of-field extension, but also estimate the depth of the scene in the image with suitable algorithms.
Researchers have proposed a large number of depth estimation methods for light field images, which fall roughly into two classes: methods based on a single cue and methods based on multi-cue fusion. Single-cue methods mainly rely on the principle of stereo matching: by searching for corresponding regions across the extracted sub-aperture images and analyzing their correlation, a single cue usable for depth estimation is obtained. Multi-cue fusion methods seek to estimate depth by extracting several depth-related cues from the light field image data and fusing them effectively, for example combining the consistency and the blur of the sub-aperture images. However, because the baseline of a light field camera is small and the resolution of the sub-aperture images is low, both classes of algorithms can only produce low-precision depth maps, and the high complexity of the algorithms themselves makes depth estimation inefficient.
The above background is disclosed only to aid understanding of the concept and technical solution of the present invention; it does not necessarily belong to the prior art of this patent application, and in the absence of clear evidence that the above content had been published before the filing date of this application, it shall not be used to assess the novelty or inventiveness of this application.
Summary of the Invention
To solve the above technical problems, the present invention proposes a distance estimation method based on a handheld light field camera that is both efficient and accurate.
To this end, the present invention adopts the following technical solution:
The invention discloses a distance estimation method based on a handheld light field camera, comprising the following steps:
S1: extracting the parameters of the light field camera, including the focal length, radius of curvature, pupil diameter, and center thickness of the main lens of the light field camera, and the focal length of the microlenses of the light field camera;
S2: setting a reference plane and calibration points, where the calibration points are placed on the objects whose distances are to be estimated, and obtaining the distance between the reference plane and the main lens;
S3: refocusing the captured light field image onto the reference plane to obtain the distance between the main lens and the microlens array of the light field camera, while recording the imaging diameter of each calibration point on the refocused image;
S4: inputting the camera parameters extracted in step S1, the distance between the main lens and the microlens array obtained in step S3, and the recorded imaging diameter of the calibration point on the refocused image into a mathematical model of light propagation, which outputs the distance of the calibration point.
Preferably, there is no occlusion between the reference plane set in step S2 and the calibration points, and the images of the calibration points do not overlap.
Preferably, step S3 specifically comprises refocusing the captured light field image onto the reference plane using the following formula:
[Equation (1), shown as an image in the original]
where L denotes the light field image, L_a denotes the refocused image, and the subscript a indicates the focal plane in object space, which in this step is the reference plane; y = {x, y} denotes the position information on the light field image, v = {u, v} denotes the direction information on the light field image, the subscript m denotes the number of pixels in one dimension under each microlens, c = (m − 1)/2, i is an integer in the range [−c, c], and the subscript j denotes the vertical coordinate of a microlens.
Preferably, the step of obtaining the distance between the main lens and the microlens array in step S3 specifically comprises:
obtaining by ray tracing the coordinates of the intersections on the F_u plane:
F_i = m_i × f          (2)
where the distance between adjacent intersections on the F_u plane is the baseline of the light field camera, f is the focal length of the main lens, and m_i is the slope of a ray between the sensor of the light field camera and the main lens;
computing the slope k_i of the ray emitted in object space by the object carrying the calibration point as:
[Equation (3), shown as an image in the original]
where y'_j denotes the ordinate of the object on the reference plane, and d_out denotes the distance between the reference plane and the main lens;
computing from equation (3) the entrance position and the exit position (p', q') at which the ray emitted by the object reaches the main lens, and computing from the exit position (p', q') the distance d_in between the main lens and the microlens array as:
[Equation (4), shown as an image in the original]
where y_j denotes the ordinate of the center of the microlens with subscript j.
Preferably, the mathematical model of light propagation in step S4 specifically comprises a light-incidence part and a light-exit part.
Preferably, the propagation model of the light-incidence part specifically comprises:
the light emitted by the calibration point enters the main lens at an angle [symbol shown as an image in the original] that satisfies the relation:
[Equation (5), shown as an image in the original]
where d'_out denotes the axial distance between the calibration point and the center of the main lens, R denotes the radius of curvature of the main lens, D denotes the pupil radius of the main lens, and T denotes the center thickness of the main lens; the light emitted by the calibration point is refracted after entering the main lens, satisfying:
[Equation (6), shown as an image in the original]
where n_1 denotes the refractive index of the main lens, ψ denotes the refraction angle, and θ_1 satisfies:
[Equation (7), shown as an image in the original]
Preferably, the propagation model of the light-exit part specifically comprises:
after refraction inside the main lens, the light emitted by the calibration point reaches the exit position (p, q) of the main lens and exits from it, satisfying:
n_1 sin(θ_1 − ψ + θ_2) = sin ω          (8)
where ω denotes the exit angle, and θ_2 satisfies:
[Equation (9), shown as an image in the original]
after leaving the main lens, the light emitted by the calibration point can be imaged in three ways: focused behind the sensor, between the sensor and the microlens array, or between the microlens array and the main lens, where:
when the light emitted by the calibration point is focused behind the sensor after leaving the main lens, it satisfies:
[Equation (10), shown as an image in the original]
where D_1 is the imaging diameter of the calibration point on the refocused image recorded in step S3, f_x is the focal length of the microlens, d_in is the distance between the main lens and the microlens array obtained in step S3, and d is the distance between the image-space focal plane of the calibration point and the sensor; the exit position (p, q) lies on the curved surface of the main lens and satisfies:
[Equation (11), shown as an image in the original]
(R − T/2 + p)² + q² = R²          (12)
when the light emitted by the calibration point is focused between the sensor and the microlens array or between the microlens array and the main lens after leaving the main lens, it satisfies:
[Equation (13), shown as an image in the original]
where D_1, f_x, d_in, and d are as defined above; the exit position (p, q) lies on the curved surface of the main lens and satisfies:
[Equation (14), shown as an image in the original]
(R − T/2 + p)² + q² = R²          (15).
Compared with the prior art, the beneficial effects of the present invention are as follows. In the distance estimation method based on a handheld light field camera, the captured light field image is refocused onto a reference plane at a known distance; the refocused image is equivalent to a light field image captured after refocusing the light field camera. Because the distance of the subject (the reference plane) is known, the distance between the main lens and the microlens array of the light field camera can then be determined, and all other unfocused objects are equivalent to being imaged on the sensor by this refocused camera. Therefore, by placing a calibration point on an (unfocused) object whose distance is to be estimated and analyzing the relationship between the imaging diameter of the calibration point on the refocused image and the distance of the calibration point, the distance of the object is estimated. This distance estimation method can estimate the absolute distances of all objects from a single refocusing; it is efficient, greatly improves the accuracy of distance estimation, and has good application prospects in industrial ranging.
Brief Description of the Drawings
FIG. 1 is a flow chart of the distance estimation method based on a handheld light field camera according to a preferred embodiment of the present invention;
FIG. 2 is the light-incidence model of a preferred embodiment of the present invention;
FIG. 3 is the light-exit model of a preferred embodiment of the present invention;
FIG. 4 shows the imaging modes of the calibration point in a preferred embodiment of the present invention;
FIG. 5 is the refocusing ray-tracing model of a preferred embodiment of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings and in combination with the preferred embodiments.
As shown in FIG. 1, a preferred embodiment of the present invention discloses a distance estimation method based on a handheld light field camera, comprising the following steps:
S1: extracting the parameters of the light field camera, including the focal length, radius of curvature, pupil diameter, and center thickness of the main lens, and the focal length of the microlenses of the light field camera;
The focal length of the main lens and the focal length of the microlenses are used to obtain the distance between the main lens and the microlens array; the radius of curvature, pupil diameter, and center thickness of the main lens, together with the focal length of the microlenses, are used in the mathematical model of light propagation. A sketch of how these parameters can be grouped is given below.
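The parameters extracted in step S1 can be grouped, for concreteness, as in the following Python sketch; the class and field names are illustrative assumptions, not identifiers from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class LightFieldCameraParams:
    """Parameters extracted in step S1 (illustrative names)."""
    f_main: float  # focal length of the main lens
    R: float       # radius of curvature of the main lens
    D: float       # pupil diameter of the main lens
    T: float       # center thickness of the main lens
    f_x: float     # focal length of the microlenses

# Example values (purely illustrative, in meters):
params = LightFieldCameraParams(f_main=50e-3, R=40e-3, D=25e-3, T=5e-3, f_x=0.5e-3)
```

Steps S3 and S4 consume exactly these quantities: the two focal lengths in the refocusing step, and R, D, T, and f_x in the light propagation model.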
S2: setting a reference plane and calibration points, where the calibration points are placed on the objects whose distances are to be estimated, and obtaining the distance between the reference plane and the main lens;
There is no occlusion between the chosen reference plane and the calibration points, and the images of the calibration points do not overlap; the distance between the reference plane and the main lens is used for refocusing and, afterwards, for obtaining the distance between the main lens and the microlens array.
S3: refocusing the captured light field image onto the reference plane to obtain the distance between the main lens and the microlens array of the light field camera, while recording the imaging diameter of each calibration point on the refocused image;
(1) The refocusing formula can be expressed as:
[Equation (1), shown as an image in the original]
where L denotes the light field image, L_a denotes the refocused image, and the subscript a indicates the focal plane in object space, which in this step is the reference plane; y = {x, y} denotes the position information on the light field image, v = {u, v} denotes the direction information on the light field image, the subscript m denotes the resolution of a microlens image, i.e. the number of pixels in one dimension under each microlens, c = (m − 1)/2, i is an integer in the range [−c, c], and the subscript j denotes the vertical coordinate of a microlens, starting from 0.
The image obtained after one refocusing is equivalent to a light field image captured after refocusing the light field camera; the images of the other, unfocused objects on the refocused image are then equivalent to images captured by this refocused camera. The plane on which these unfocused objects lie is the plane a' shown in FIG. 4, and the imaging diameter on the sensor of the light emitted from an object on plane a', i.e. the imaging diameter D_1 on the refocused image, is recorded.
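As an illustration of this refocusing step, the following Python sketch performs standard shift-and-add refocusing over the sub-aperture views. Equation (1) itself appears only as an image in the original, so this implements the generic formulation rather than the patent's exact expression; all names and the array layout are assumptions:

```python
import numpy as np

def refocus_shift_and_add(subapertures: np.ndarray, shift_px: float) -> np.ndarray:
    """Shift-and-add refocusing over an (m, m, H, W) stack of sub-aperture views.

    shift_px selects the object-space focal plane a: each view is shifted in
    proportion to its angular offset i in [-c, c], with c = (m - 1) / 2 as in
    the text, and the shifted views are averaged.
    """
    m = subapertures.shape[0]
    c = (m - 1) / 2.0
    acc = np.zeros(subapertures.shape[2:], dtype=np.float64)
    for u in range(m):
        for v in range(m):
            dy = int(round((u - c) * shift_px))
            dx = int(round((v - c) * shift_px))
            acc += np.roll(subapertures[u, v], shift=(dy, dx), axis=(0, 1))
    return acc / (m * m)
```

Choosing shift_px so that the reference plane (plane a) is rendered sharp reproduces the situation described above, after which the imaging diameter D_1 of each calibration point can be measured on the returned image.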
(2) As shown in FIGs. 4 and 5, since the refocusing plane is the chosen reference plane (plane a), the distance d_out between this plane and the main lens 1 is known. Ray tracing shows that the distance between the intersections on the F_u plane is the baseline of the light field camera. The coordinates of the intersections on the F_u plane are computed as:
F_i = m_i × f          (2)
where f is the focal length of the main lens 1 and m_i is the slope of a ray between the sensor 3 and the main lens 1; m_i can be found as long as the focal length and diameter of the microlenses and the number of pixels under each microlens are known. The slope k_i of the ray emitted by the object in object space is then computed as:
[Equation (3), shown as an image in the original]
where y'_j denotes the ordinate of the object on plane a (the reference plane). From this, the entrance position and the exit position (p', q') at which the ray emitted by the object reaches the main lens 1 can be computed, and the distance d_in between the main lens 1 and the microlens array 2 is given by:
[Equation (4), shown as an image in the original]
where y_j denotes the ordinate of the center of the microlens with subscript j.
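A minimal numeric sketch of equation (2) and of the baseline it implies is given below; the function names are assumptions, and equations (3) and (4) are not reproduced because they appear only as images in the original:

```python
from typing import List, Tuple

def fu_plane_intersections(slopes_mi: List[float], f: float) -> Tuple[List[float], List[float]]:
    """Equation (2): F_i = m_i * f.

    slopes_mi are the sensor-to-main-lens ray slopes m_i, obtainable from the
    microlens focal length and diameter and the number of pixels under each
    microlens, as stated above. The spacings between consecutive F_i values
    form the baseline of the light field camera.
    """
    F = [mi * f for mi in slopes_mi]
    baseline = [b - a for a, b in zip(F, F[1:])]
    return F, baseline

# Example: five ray slopes per microlens and a 50 mm main lens.
F, baseline = fu_plane_intersections([-0.02, -0.01, 0.0, 0.01, 0.02], 50e-3)
```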
S4: inputting the camera parameters extracted in step S1, the distance between the main lens and the microlens array obtained in step S3, and the recorded imaging diameter of the calibration point on the refocused image into the mathematical model of light propagation, which outputs the distance of the calibration point;
S41: the mathematical model of light propagation is divided into two parts: the light-incidence part and the light-exit part.
(1) Light-incidence part: as shown in FIG. 2, the light emitted by the calibration point enters the main lens at an angle [symbol shown as an image in the original] satisfying the relation:
[Equation (5), shown as an image in the original]
where d'_out denotes the axial distance between the calibration point and the center of the main lens of the light field camera; R, D, and T are the parameters of the main lens, denoting its radius of curvature, pupil diameter, and center thickness, respectively. The light is refracted on entering the main lens; applying the law of refraction gives:
[Equation (6), shown as an image in the original]
where n_1 denotes the refractive index of the main lens, ψ is the refraction angle, and θ_1 satisfies:
[Equation (7), shown as an image in the original]
(2) Light-exit part: as shown in FIG. 3, after refraction inside the main lens the light reaches the position (p, q) of the main lens and exits from there; applying the law of refraction again gives:
n_1 sin(θ_1 − ψ + θ_2) = sin ω          (8)
where ω is the exit angle, and θ_2 satisfies:
[Equation (9), shown as an image in the original]
S42: there are three imaging modes after the light exits, as shown by the three rays 4A, 4B, and 4C in FIG. 4: the light is focused behind the sensor 3 (4A), between the sensor 3 and the microlens array 2 (4B), or between the microlens array 2 and the main lens 1 (4C).
When the light is focused behind the sensor 3, i.e. the imaging case of ray 4A, analyzing the geometric relationship among the main lens 1, the microlens array 2, and the sensor 3 and applying similar triangles with an approximation gives:
[Equation (10), shown as an image in the original]
where D_1 is the imaging diameter of the calibration point recorded in step S3; f_x is the focal length of the microlens; d_in is the distance between the main lens 1 and the microlens array 2; and d is the distance between the image-space focal plane of the calibration point and the sensor. The exit position (p, q) of the light lies on the curved surface of the main lens and therefore satisfies:
[Equation (11), shown as an image in the original]
(R − T/2 + p)² + q² = R²          (12)
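Equation (12) ties the exit point to the rear spherical surface of the main lens, so given an axial coordinate p the height q follows directly. The sketch below implements only this constraint; equation (11), which supplies the second relation needed to pin down (p, q), is image-only in the original and is therefore left to the caller:

```python
import math

def exit_height_on_surface(p: float, R: float, T: float) -> float:
    """Solve (R - T/2 + p)^2 + q^2 = R^2 (equation (12)) for q >= 0."""
    q_squared = R**2 - (R - T/2 + p)**2
    if q_squared < 0.0:
        raise ValueError("p lies outside the spherical lens surface")
    return math.sqrt(q_squared)
```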
When the light is focused between the sensor 3 and the microlens array 2 or between the microlens array 2 and the main lens 1, i.e. the imaging cases of rays 4B and 4C, the same similar-triangle analysis and approximation give:
[Equation (13), shown as an image in the original]
[Equation (14), shown as an image in the original]
(R − T/2 + p)² + q² = R²          (15)
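Relations (10), (13), and (14) are similar-triangle relations whose exact forms are image-only in the original. The textbook analogue below conveys the geometry they encode, namely that a defocused cone of light leaves a spot whose diameter scales with the defocus; it is offered purely as an illustration, not as the patent's formula:

```python
def blur_spot_diameter(cone_width: float, d_focus: float, d_screen: float) -> float:
    """Similar-triangles blur circle: a cone of width cone_width converging at
    distance d_focus from the lens leaves, on a screen at distance d_screen,
    a spot scaled by the relative defocus |d_focus - d_screen| / d_focus."""
    return cone_width * abs(d_focus - d_screen) / d_focus
```

Inverting such a relation for the defocus, given the measured spot diameter D_1, is what step S43 below does with approximation (16) and its counterpart for equation (13).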
S43: input the distance d_in between the main lens and the microlens array of the refocused light field camera and the imaging diameter D_1 of the calibration point.
When the light is focused behind the sensor 3, i.e. the imaging case of ray 4A, equation (10) can be used to approximate d:
[Equation (16), shown as an image in the original]
The computed d is substituted into equation (11), which is combined with equation (12) to solve for p and q. Substituting q into equation (9) gives θ_2:
[Equation (17), shown as an image in the original]
The resulting θ_2 is then substituted into equation (11) to compute ω:
[Equation (18), shown as an image in the original]
Equations (7), (8), and (6) then give the angles θ_1, ψ, and the incidence angle in turn:
[Equations (19)-(21), shown as images in the original]
Finally, equation (5) gives the absolute distance d'_out of the calibration point:
[Equation (22), shown as an image in the original]
When the light is focused between the sensor 3 and the microlens array 2 or between the microlens array 2 and the main lens 1, i.e. the imaging cases of rays 4B and 4C, equation (13) can be used to approximate d:
[Approximation formula for d, shown as an image in the original]
The computed d is substituted into equation (14), which is combined with equation (15) to solve for p and q. All subsequent angles are solved with formulas (17)-(21) as above, and finally equation (5) gives the absolute distance d'_out of the calibration point.
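Since equation (8) is the one relation in this chain that is fully recoverable from the text, the inversion used above can at least be checked numerically; the angle values in this sketch are illustrative assumptions:

```python
import math

# Round-trip check of equation (8): compute omega from psi, then recover psi.
n1, theta1, theta2, psi = 1.5, 0.30, 0.20, 0.25            # radians, illustrative
omega = math.asin(n1 * math.sin(theta1 - psi + theta2))    # equation (8)
psi_back = theta1 + theta2 - math.asin(math.sin(omega) / n1)
assert abs(psi - psi_back) < 1e-12
```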
The distance estimation method based on a handheld light field camera of the preferred embodiment of the present invention first refocuses the captured light field image onto a reference plane at a known distance; this refocused image is equivalent to a light field image captured after refocusing the camera. Because the distance of the subject (the reference plane) is known, the distance between the main lens of the light field camera and the microlens array can then be determined, and all other unfocused objects are equivalent to being imaged on the sensor by this refocused camera. Therefore, a calibration point is placed on an (unfocused) object whose distance is to be estimated, and the distance of the object is estimated by analyzing the relationship between the imaging diameter of the calibration point on the refocused image and the distance of the calibration point. The specific implementation steps are: extract the light field camera parameters, including the focal length, radius of curvature, pupil diameter, and center thickness of the main lens, and the focal length of the microlenses; set a reference plane and calibration points, with the calibration points placed on the objects whose distances are to be estimated, and obtain the distance between the reference plane and the main lens of the light field camera; refocus the captured light field image onto the reference plane, obtain the distance between the main lens and the microlens array, and record the imaging diameter of each calibration point on the refocused image; after analyzing the imaging system of the light field camera, propose a mathematical model of light propagation; input the camera parameters, the imaging diameter, and the obtained distance between the main lens and the microlens array into the model, which outputs the distance of the calibration point. Since the calibration point lies on the object, the estimated distance of the calibration point is the distance of the object. The distance estimation method of the present invention can estimate the absolute distance between every object and the main lens of the light field camera from a single refocusing; it is efficient and accurate, which gives the method great application prospects in industrial ranging.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention shall not be considered to be limited to these descriptions. For those skilled in the art to which the present invention belongs, several equivalent substitutions or obvious modifications with the same performance or use can be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (7)

  1. A distance estimation method based on a handheld light field camera, characterized in that it comprises the following steps:
    S1: extracting the parameters of the light field camera, including the focal length, radius of curvature, pupil diameter, and center thickness of the main lens of the light field camera, and the focal length of the microlenses of the light field camera;
    S2: setting a reference plane and calibration points, where the calibration points are placed on the objects whose distances are to be estimated, and obtaining the distance between the reference plane and the main lens;
    S3: refocusing the captured light field image onto the reference plane to obtain the distance between the main lens and the microlens array of the light field camera, while recording the imaging diameter of each calibration point on the refocused image;
    S4: inputting the camera parameters extracted in step S1, the distance between the main lens and the microlens array obtained in step S3, and the recorded imaging diameter of the calibration point on the refocused image into a mathematical model of light propagation, which outputs the distance of the calibration point.
  2. The distance estimation method according to claim 1, characterized in that there is no occlusion between the reference plane set in step S2 and the calibration points, and the images of the calibration points do not overlap.
  3. The distance estimation method according to claim 1, characterized in that step S3 specifically comprises refocusing the captured light field image onto the reference plane using the following formula:
    [Equation (1), shown as an image in the original]
    where L denotes the light field image, L_a denotes the refocused image, and the subscript a denotes the reference plane; y = {x, y} denotes the position information on the light field image, v = {u, v} denotes the direction information on the light field image, the subscript m denotes the number of pixels in one dimension under each microlens, c = (m − 1)/2, i is an integer in the range [−c, c], and the subscript j denotes the vertical coordinate of a microlens.
  4. The distance estimation method according to claim 3, characterized in that the step of obtaining the distance between the main lens and the microlens array in step S3 specifically comprises:
    obtaining by ray tracing the coordinates of the intersections on the F_u plane:
    F_i = m_i × f          (2)
    where the distance between adjacent intersections on the F_u plane is the baseline of the light field camera, f is the focal length of the main lens, and m_i is the slope of a ray between the sensor of the light field camera and the main lens;
    computing the slope k_i of the ray emitted in object space by the object carrying the calibration point as:
    [Equation (3), shown as an image in the original]
    where y'_j denotes the ordinate of the object on the reference plane, and d_out denotes the distance between the reference plane and the main lens;
    computing from equation (3) the entrance position and the exit position (p', q') at which the ray emitted by the object reaches the main lens, and computing from the exit position (p', q') the distance d_in between the main lens and the microlens array as:
    [Equation (4), shown as an image in the original]
    where y_j denotes the ordinate of the center of the microlens with subscript j.
  5. The distance estimation method according to any one of claims 1 to 4, characterized in that the mathematical model of light propagation in step S4 specifically comprises a light-incidence part and a light-exit part.
  6. The distance estimation method according to claim 5, characterized in that the propagation model of the light-incidence part specifically comprises:
    the light emitted by the calibration point enters the main lens at an angle
    [symbol shown as an image in the original]
    that satisfies the relation:
    [Equation (5), shown as an image in the original]
    where d'_out denotes the axial distance between the calibration point and the center of the main lens, R denotes the radius of curvature of the main lens, D denotes the pupil radius of the main lens, and T denotes the center thickness of the main lens; the light emitted by the calibration point is refracted after entering the main lens, satisfying:
    [Equation (6), shown as an image in the original]
    where n_1 denotes the refractive index of the main lens, ψ denotes the refraction angle, and θ_1 satisfies:
    [Equation (7), shown as an image in the original]
  7. The distance estimation method according to claim 6, characterized in that the propagation model of the light-exit part specifically comprises:
    after refraction inside the main lens, the light emitted by the calibration point reaches the exit position (p, q) of the main lens and exits from it, satisfying:
    n_1 sin(θ_1 − ψ + θ_2) = sin ω          (8)
    where ω denotes the exit angle, and θ_2 satisfies:
    [Equation (9), shown as an image in the original]
    after leaving the main lens, the light emitted by the calibration point can be imaged in three ways: focused behind the sensor, between the sensor and the microlens array, or between the microlens array and the main lens, where:
    when the light emitted by the calibration point is focused behind the sensor after leaving the main lens, it satisfies:
    [Equation (10), shown as an image in the original]
    where D_1 is the imaging diameter of the calibration point on the refocused image recorded in step S3, f_x is the focal length of the microlens, d_in is the distance between the main lens and the microlens array obtained in step S3, and d is the distance between the image-space focal plane of the calibration point and the sensor; the exit position (p, q) lies on the curved surface of the main lens and satisfies:
    [Equation (11), shown as an image in the original]
    (R − T/2 + p)² + q² = R²          (12)
    when the light emitted by the calibration point is focused between the sensor and the microlens array or between the microlens array and the main lens after leaving the main lens, it satisfies:
    [Equation (13), shown as an image in the original]
    where D_1 is the imaging diameter of the calibration point on the refocused image recorded in step S3, f_x is the focal length of the microlens, d_in is the distance between the main lens and the microlens array obtained in step S3, and d is the distance between the image-space focal plane of the calibration point and the sensor; the exit position (p, q) lies on the curved surface of the main lens and satisfies:
    [Equation (14), shown as an image in the original]
    (R − T/2 + p)² + q² = R²          (15).
PCT/CN2017/096573 2016-09-18 2017-08-09 Distance estimation method based on a handheld light field camera WO2018049949A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/212,344 US10482617B2 (en) 2016-09-18 2018-12-06 Distance estimation method based on handheld light field camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610828558.8 2016-09-18
CN201610828558.8A CN106373152B (zh) Distance estimation method based on a handheld light field camera

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/212,344 Continuation US10482617B2 (en) 2016-09-18 2018-12-06 Distance estimation method based on handheld light field camera

Publications (1)

Publication Number Publication Date
WO2018049949A1 (zh)

Family

ID=57897614

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/096573 WO2018049949A1 (zh) 2016-09-18 2017-08-09 Distance estimation method based on a handheld light field camera

Country Status (3)

Country Link
US (1) US10482617B2 (zh)
CN (1) CN106373152B (zh)
WO (1) WO2018049949A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310337A (zh) * 2019-06-24 2019-10-08 西北工业大学 Full-parameter estimation method for a multi-view light field imaging system based on the light field fundamental matrix
CN114051087A (zh) * 2021-11-11 2022-02-15 中汽创智科技有限公司 Multi-sensor camera
CN114858053A (zh) * 2021-01-20 2022-08-05 四川大学 Method for determining the spatial coordinates of the entrance pupil center of an industrial camera

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106373152B (zh) * 2016-09-18 2019-02-01 清华大学深圳研究生院 Distance estimation method based on a handheld light field camera
CN107230232B (zh) * 2017-04-27 2020-06-30 东南大学 F-number matching method for a focused light field camera
CN107610170B (zh) * 2017-08-04 2020-05-19 中国科学院自动化研究所 Depth acquisition method and system for multi-view image refocusing
CN108364309B (zh) * 2018-02-09 2020-09-01 清华大学深圳研究生院 Spatial light field recovery method based on a handheld light field camera
CN109239698B (zh) * 2018-08-02 2022-12-20 华勤技术股份有限公司 Camera ranging method and electronic device
CN109089025A (zh) * 2018-08-24 2018-12-25 中国民航大学 Digital focusing method for an image measuring instrument based on light field imaging
KR20200067020A (ko) * 2018-12-03 2020-06-11 삼성전자주식회사 Calibration method and apparatus
CN109883391B (zh) * 2019-03-20 2021-09-24 北京环境特性研究所 Monocular distance measurement method based on microlens array digital imaging
CN110009693B (zh) * 2019-04-01 2020-12-11 清华大学深圳研究生院 Fast blind calibration method for a light field camera
CN110673121A (zh) * 2019-09-26 2020-01-10 北京航空航天大学 Method and device for locating the focal plane of the front lens of a light field camera
CN111679337B (zh) * 2019-10-15 2022-06-10 上海大学 Scattering background suppression method for an underwater active laser scanning imaging system
CN111258046A (zh) * 2020-02-26 2020-06-09 清华大学 Light field microscopy system and method based on a front-mounted microlens array
CN115032756B (zh) * 2022-06-07 2022-12-27 北京拙河科技有限公司 Microlens array positioning method and system for a light field camera

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090295829A1 (en) * 2008-01-23 2009-12-03 Georgiev Todor G Methods and Apparatus for Full-Resolution Light-Field Capture and Rendering
CN106373152A (zh) * 2016-09-18 2017-02-01 清华大学深圳研究生院 Distance estimation method based on a handheld light field camera

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103209307B (zh) * 2013-04-18 2016-05-25 清华大学 Coded-refocusing computational photography method and device
CN104050662B (zh) * 2014-05-30 2017-04-12 清华大学深圳研究生院 Method for directly obtaining a depth map from a single light field camera exposure
EP3144879A1 (en) * 2015-09-17 2017-03-22 Thomson Licensing A method and an apparatus for generating data representative of a light field
CN105551050B (zh) * 2015-12-29 2018-07-17 深圳市未来媒体技术研究院 Light-field-based image depth estimation method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090295829A1 (en) * 2008-01-23 2009-12-03 Georgiev Todor G Methods and Apparatus for Full-Resolution Light-Field Capture and Rendering
CN106373152A (zh) * 2016-09-18 2017-02-01 清华大学深圳研究生院 Distance estimation method based on a handheld light field camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN, YANQIN ET AL.: "Distance Measurement Based on Light Field Geometry and Ray Tracing", OPTICS EXPRESS, vol. 25, no. 1, 1 September 2017 (2017-09-01), XP055479752 *
HAHNE, C. ET AL.: "Light Field Geometry of a Standard Plenoptic Camera", OPTICS EXPRESS, vol. 22, no. 22, 3 November 2014 (2014-11-03), XP055421615 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310337A (zh) * 2019-06-24 2019-10-08 西北工业大学 Full-parameter estimation method for a multi-view light field imaging system based on the light field fundamental matrix
CN110310337B (zh) * 2019-06-24 2022-09-06 西北工业大学 Full-parameter estimation method for a multi-view light field imaging system based on the light field fundamental matrix
CN114858053A (zh) * 2021-01-20 2022-08-05 四川大学 Method for determining the spatial coordinates of the entrance pupil center of an industrial camera
CN114858053B (zh) * 2021-01-20 2023-03-10 四川大学 Method for determining the spatial coordinates of the entrance pupil center of an industrial camera
CN114051087A (zh) * 2021-11-11 2022-02-15 中汽创智科技有限公司 Multi-sensor camera

Also Published As

Publication number Publication date
CN106373152A (zh) 2017-02-01
US20190114796A1 (en) 2019-04-18
CN106373152B (zh) 2019-02-01
US10482617B2 (en) 2019-11-19

Similar Documents

Publication Publication Date Title
WO2018049949A1 (zh) Distance estimation method based on a handheld light field camera
JP6855587B2 (ja) Apparatus and method for obtaining distance information from a viewpoint
DK2239706T3 (en) A method for real-time camera and obtaining visual information of three-dimensional scenes
JP4807986B2 (ja) Image input device
CN103793911A (zh) Scene depth acquisition method based on integral imaging
CN105043350A (zh) Binocular vision measurement method
CN108364309B (zh) Spatial light field recovery method based on a handheld light field camera
CN109883391B (zh) Monocular distance measurement method based on microlens array digital imaging
WO2020024079A1 (zh) Image recognition system
CN110462679B (zh) Fast multispectral light field imaging method and system
CN114419568A (zh) Multi-view pedestrian detection method based on feature fusion
Shi et al. An improved lightweight deep neural network with knowledge distillation for local feature extraction and visual localization using images and LiDAR point clouds
Martel et al. Real-time depth from focus on a programmable focal plane processor
KR20180054622A (ko) Apparatus and method for calibrating an optical acquisition system
KR20180054737A (ko) Apparatus and method for generating data representative of a pixel beam
JP7300895B2 (ja) Image processing apparatus, image processing method, program, and storage medium
JP2001208522A (ja) Distance image generation apparatus, distance image generation method, and program providing medium
KR20160120533A (ko) Object segmentation method in light field images
WO2021148050A1 (zh) Three-dimensional space camera and photographing method thereof
TWI668411B (zh) Position detection method and computer program product thereof
JP7075892B2 (ja) Apparatus and method for generating data representing a pixel beam
CN112288669A (zh) Point cloud map acquisition method based on light field imaging
CN107610170B (zh) Depth acquisition method and system for multi-view image refocusing
CN111272271A (zh) Vibration measurement method, system, computer device, and storage medium
Chien et al. Improved visual odometry based on transitivity error in disparity space: A third-eye approach

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17850151

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.08.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17850151

Country of ref document: EP

Kind code of ref document: A1