WO2021121037A1 - Light field reconstruction method and system using depth sampling - Google Patents

Light field reconstruction method and system using depth sampling

Info

Publication number
WO2021121037A1
WO2021121037A1 (PCT/CN2020/133347)
Authority
WO
WIPO (PCT)
Prior art keywords
light field
pixel value
image
depth
depth sampling
Prior art date
Application number
PCT/CN2020/133347
Other languages
English (en)
French (fr)
Inventor
段福洲
郭甜
关鸿亮
苏文博
徐翎丰
孟祥慈
杨帆
Original Assignee
首都师范大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 首都师范大学 filed Critical 首都师范大学
Priority to AU2020408599A priority Critical patent/AU2020408599B2/en
Publication of WO2021121037A1 publication Critical patent/WO2021121037A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present invention relates to the field of light field reconstruction, in particular to a method and system for light field reconstruction using depth sampling.
  • the process of collecting the light field is the process of obtaining the intersection point between the light and the two reference planes.
  • the position of the reference plane determines the type of the light field collecting device.
  • One is to record light by setting a reference plane on the object side, such as a camera array, and the other is to record light by setting a reference plane on the image side, such as a plenoptic camera.
  • the camera array is composed of multiple traditional cameras; it forms a virtual projection reference plane composed of the lenses' projection centers and a virtual imaging plane composed of the CCDs (CMOS sensors).
  • the camera array obtains the radiance of rays from different viewing angles at the same point of the target scene, and the image taken by each camera can be regarded as a sample of the light field at a certain angle.
  • the plenoptic camera places a microlens array in front of the sensor, forming two reference planes, the lens array and the CCD (CMOS). Each microlens captures the angular distribution of rays at the main lens, which is an angular sampling of the image-side light field. Obviously, both of these light field acquisition devices work mainly by sampling the angles of rays. Besides these two ways of directly collecting the light field, researchers are also exploring synthesizing the light field from various other acquisition methods. C. K. Liang et al. used multiple exposures to sample the sub-apertures of the main lens to record the light field, in a manner similar to a plenoptic camera. Liu K. et al. used structured light to reconstruct the object-side light field: structured light is used to obtain the depth distribution on the image side, and the light field is reconstructed by combining an ordinary image with the depth distribution. This is not a direct light field acquisition method.
  • the camera array requires dozens or hundreds of traditional cameras, which means much equipment and high cost, and it is difficult to control the time-synchronization accuracy and relative-position accuracy of the cameras.
  • the plenoptic camera is simple to operate and can collect the light field directly through one exposure, but the angular resolution and spatial resolution of the light field collected in this way constrain each other, and this mutual constraint makes its spatial resolution far lower than that of a traditional camera.
  • the purpose of the present invention is to provide a method and system for light field reconstruction using depth sampling, which can quickly perform light field reconstruction and improve the spatial resolution of imaging.
  • the present invention provides the following solutions:
  • the four-dimensional light field is reconstructed from the projected pixel values by the projection image-reconstruction theorem.
  • the acquiring depth sampling pixel values of the target image in different scenes specifically includes:
  • the obtaining, from the depth sampling pixel values, of the projected pixel values of the same ray at the same position on different planes specifically includes:
  • according to the depth sampling pixel values, using the formula $I(x_m, y_m, d_m) = \iint L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)\,\mathrm{d}u\,\mathrm{d}v$ to obtain the projected pixel values $L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)$ of the same ray at the same position on different planes
  • $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image
  • $L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)$ is the projected pixel value of the same ray at the same position on different planes
  • $(u, v)$ are the reference-plane coordinates of the ray-source direction
  • $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction
  • $d_m$ are the different image distances, $m = 1, 2, \ldots, M$
  • the reconstructing of the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem specifically includes: using the formula $L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$, where $\alpha_m = d_m/d$
  • $L_{rec}(x, y, u, v)$ is the four-dimensional light field; $d$ is the reference image plane.
  • a system for applying depth sampling for light field reconstruction including:
  • the depth sampling pixel value obtaining module is used to obtain the depth sampling pixel value of the target image in different scenes;
  • the projection pixel value determination module is configured to obtain, from the depth sampling pixel values, the projected pixel values of the same ray at the same position on different planes;
  • the four-dimensional light field reconstruction module is used for reconstructing the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem.
  • the depth sampling pixel value acquisition module specifically includes:
  • the depth sampling pixel value obtaining unit is used to obtain the depth sampling pixel value of the target image in different scenes through a common camera.
  • the projection pixel value determination module specifically includes:
  • the projection pixel value determining unit is configured to obtain, from the depth sampling pixel values, the projected pixel values $L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)$ of the same ray at the same position on different planes using the formula $I(x_m, y_m, d_m) = \iint L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)\,\mathrm{d}u\,\mathrm{d}v$
  • $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image; $(u, v)$ are the reference-plane coordinates of the ray-source direction; $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction; $d_m$ are the different image distances, $m = 1, 2, \ldots, M$
  • the four-dimensional light field reconstruction module specifically includes:
  • the four-dimensional light field reconstruction unit, which is used for reconstructing the four-dimensional light field $L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$ from the projected pixel values by the projection image-reconstruction theorem, where $\alpha_m = d_m/d$
  • $L_{rec}(x, y, u, v)$ is the four-dimensional light field; $d$ is the reference image plane.
  • the present invention discloses the following technical effects:
  • the present invention provides a method and system for reconstructing light field by applying depth sampling. By performing depth sampling on the target scene, images of different depth planes of the target scene are obtained, and then a four-dimensional light field is recovered from the depth sampling data.
  • the invention can quickly reconstruct the light field and can improve the spatial resolution of imaging.
  • Fig. 1 is a flow chart of a method for applying depth sampling to perform light field reconstruction in the present invention
  • Figure 2 is a schematic diagram of the depth sampling of the present invention
  • Fig. 3 is a schematic diagram of the two-plane representation of the light field of the present invention.
  • FIG. 4 is a schematic diagram of imaging of different focal planes of the present invention.
  • FIG. 5 shows the depth sampling data at different focus distances obtained with a Canon camera in the present invention.
  • Fig. 6 shows the sub-aperture images of the present invention and partial enlargements of their left-hand regions
  • Fig. 7 is a sub-aperture image reconstructed from different depth sampling data according to the present invention.
  • FIG. 8 is a schematic diagram of the comparison of sub-aperture images reconstructed by depth sampling and angle sampling according to the present invention.
  • Fig. 9 is a structural diagram of a system for light field reconstruction using depth sampling according to the present invention.
  • Fig. 1 is a flow chart of a method for applying depth sampling to perform light field reconstruction in the present invention. As shown in Fig. 1, a method of applying depth sampling for light field reconstruction includes:
  • Step 101 Obtain the depth sampling pixel values of the target image in different scenes, which specifically includes: obtaining the depth sampling pixel values of the target image in different scenes through a common camera.
  • FIG. 2 is a schematic diagram of depth sampling of the present invention
  • FIG. 3 is a schematic diagram of dual plane representation of the light field of the present invention
  • FIG. 4 is a schematic diagram of imaging of different focal planes of the present invention
  • $x_m$ represents the different image planes
  • $d_m$ represents the different image distances.
  • $I(x, y, d)$ represents the pixel value at $(x, y)$ on the image plane $d$.
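The forward model behind these definitions — each depth sample I(x_m, y_m, d_m) integrates, over the aperture plane (u, v), the light field value the ray carries on the reference plane d — can be sketched numerically. A minimal illustration, not the patent's code: the discrete grids, the centred aperture coordinates, and the nearest-neighbour lookup are assumptions made here for simplicity.

```python
import numpy as np

def focal_stack_image(L, alpha):
    """Simulate one depth sample I(x_m, y_m, d_m) from a 4D light field.

    L     -- array of shape (U, V, X, Y): L_d(x, y, u, v) sampled on the
             reference image plane d (axis order: u, v, x, y)
    alpha -- image-distance ratio alpha_m = d_m / d

    Each aperture sample (u, v) contributes the reference-plane value at
    x = u + (x_m - u) / alpha, per the similar-triangle relation; the
    pixel value is the mean over all (u, v), a discrete stand-in for the
    double integral over the aperture.
    """
    U, V, X, Y = L.shape
    us = np.arange(U) - U // 2          # aperture coordinates centred on 0
    vs = np.arange(V) - V // 2
    xm, ym = np.meshgrid(np.arange(X), np.arange(Y), indexing="ij")
    out = np.zeros((X, Y))
    for iu, u in enumerate(us):
        for iv, v in enumerate(vs):
            x = u + (xm - u) / alpha    # back-trace the ray to plane d
            y = v + (ym - v) / alpha
            xi = np.clip(np.round(x).astype(int), 0, X - 1)  # nearest neighbour
            yi = np.clip(np.round(y).astype(int), 0, Y - 1)
            out += L[iu, iv, xi, yi]
    return out / (U * V)
```

With alpha = 1 the image plane coincides with the reference plane, so the function simply averages the light field over the aperture samples.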
  • Step 102: According to the depth sampling pixel values, obtain the projected pixel values of the same ray at the same position on different planes, which specifically includes:
  • according to the depth sampling pixel values, using the formula $I(x_m, y_m, d_m) = \iint L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)\,\mathrm{d}u\,\mathrm{d}v$ to obtain the projected pixel values $L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)$ of the same ray at the same position on different planes
  • $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image; $(u, v)$ are the reference-plane coordinates of the ray-source direction; $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction; $d_m$ are the different image distances, $m = 1, 2, \ldots, M$
  • writing $\alpha_m = d_m/d$, the depth sampling can be expressed as: $I(x_m, y_m, d_m) = \iint L_d\big(u + \frac{x_m-u}{\alpha_m},\; v + \frac{y_m-v}{\alpha_m},\; u,\; v\big)\,\mathrm{d}u\,\mathrm{d}v$
  • Step 103: Reconstruct the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem, which specifically includes: using the formula $L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$
  • $L_{rec}(x, y, u, v)$ is the four-dimensional light field; $d$ is the reference image plane.
  • According to the theorem of reconstructing an image from projections, the imaging of any point can be regarded as the integral of all the rays passing through that image point at different angles.
  • the algorithm is described as $i = \frac{1}{T}\sum_{\theta=1}^{T} P_{\theta}$ (5)
  • $i$ is the pixel value of the point
  • $P_{\theta}$ is the projection value of the ray passing through the point at a certain angle $\theta$
  • $T$ is the number of projection angles.
  • Each image in the depth sampling can likewise be regarded as a two-dimensional projection of the four-dimensional light field, and the projection on each image plane is $I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$. The number of depth samples corresponds to the number of projection angles $T$, so the four-dimensional light field recovered from the depth samples using formula (5) can be expressed as: $L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$ (6)
  • $\alpha_m = d_m/d$, which characterizes the image-distance ratio
  • $L_{rec}(x, y, u, v)$ is the reconstructed four-dimensional light field, and a given $(u, v)$ determines one transmission direction of the rays,
  • which is equivalent to fixing a virtual ordinary camera that photographs the image $(x, y)$ in that ray direction.
  • $M$ represents the number of depth samples
  • $d$ represents the reference image plane, which can be any one of the $d_m$.
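Formula (6) above is a discrete back-projection: each ray (x, y, u, v) is traced to the pixel at u + α_m(x − u), v + α_m(y − v) in every depth sample, and the M samples are averaged. A hedged numpy sketch of this step follows; the array shapes, the centred viewpoint grid, and the nearest-neighbour rounding are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def reconstruct_light_field(stack, alphas, uv_range=2):
    """Back-project a focal stack into a 4D light field (formula (6)).

    stack    -- list of M 2D arrays I(x, y, d_m), all of the same shape
    alphas   -- list of M image-distance ratios alpha_m = d_m / d
    uv_range -- half-width of the (u, v) grid of virtual viewpoints

    Returns L_rec with shape (2*uv_range+1, 2*uv_range+1, X, Y): for each
    fixed (u, v), the slice L_rec[u, v] is one sub-aperture view, obtained
    by averaging over the M depth samples the pixel each ray projects to
    on plane d_m.
    """
    X, Y = stack[0].shape
    uvs = np.arange(-uv_range, uv_range + 1)
    x, y = np.meshgrid(np.arange(X), np.arange(Y), indexing="ij")
    L_rec = np.zeros((len(uvs), len(uvs), X, Y))
    for iu, u in enumerate(uvs):
        for iv, v in enumerate(uvs):
            acc = np.zeros((X, Y))
            for I_m, a in zip(stack, alphas):
                # trace the ray (x, y, u, v) to image plane d_m
                xm = np.clip(np.round(u + a * (x - u)).astype(int), 0, X - 1)
                ym = np.clip(np.round(v + a * (y - v)).astype(int), 0, Y - 1)
                acc += I_m[xm, ym]
            L_rec[iu, iv] = acc / len(stack)   # the 1/M average of formula (6)
    return L_rec
```

Each slice L_rec[u, v] is one sub-aperture view; sweeping (u, v) produces the viewpoint shift discussed in the experiments below.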
  • Depth sampling can be understood as a set of images $I(x, y, d)$ focused at different depths in the target scene. It is a slice-wise sampling of the light field at different depths, which is clearly different from the common methods or devices that sample the angles of the light field with a lens array or camera array.
  • If the depth samples are regarded as images with different focus distances, then depth sampling can be achieved with simpler equipment, such as an ordinary commercial camera: the focal length is fixed, and the sampling data of different depth slices are obtained by acquiring images at different focus depths.
  • the experimental device of the present invention is a Canon 5D Mark III. In the experiment, the device is fixed at one position to obtain slice samples of the target scene at different depths, that is, images with different focus depths.
  • the depth sampling data obtained with the Canon 5D Mark III are $\{(x_1, y_1, d_1), \ldots, (x_4, y_4, d_4)\}$, four images with different focus planes in total, which just completely cover
  • the entire experimental scene; the size of $(x, y)$ is 1920×1280.
  • the present invention uses the camera-control software digicamControl to control the camera from a computer and photograph the scene automatically.
  • the focus depths are 0.75 m, 0.84 m, 0.96 m and 1.03 m, respectively.
  • the focal length of the equipment is set to 105 mm and the aperture to f/4.0; at a focus depth of 1 m the depth of field is about 10 cm, and the focus distances designed above yield satisfactory images at the different focus distances.
  • the acquired images with different focus depths are shown in FIG. 5.
  • Figure 6 shows the sub-aperture images of the present invention and partial enlargements of their left-hand regions, where (a) is the sub-aperture image at (20, 0) with the partial enlargement of its left-hand region, (b) is the sub-aperture image at (0, 0) with the partial enlargement of its left-hand region, and (c) is the sub-aperture image at (-20, 0) with the partial enlargement of its left-hand region. Figure 6 contains the three groups (a), (b), (c): the upper part shows the acquired sub-aperture image and the lower part the partial enlargement of its left-hand region.
  • FIG. 7 shows sub-aperture images reconstructed from different numbers of depth samples of the present invention, where (a) is the sub-aperture image reconstructed from 2 depth samples, (b) from 3 depth samples, and (c) from 4 depth samples; the sub-aperture images in FIG. 7 are all at $(u, v) = (0, 0)$.
  • the Tenengrad function, the Laplacian function and the variance function are selected to evaluate the sharpness of the above three groups of images.
  • the Tenengrad and Laplacian functions are gradient-based functions that can be used to detect whether an image has clear, sharp edges; the sharper the image, the larger the value.
  • the variance function is a measure used in probability theory to examine the dispersion of discrete data around the expectation. Since a sharp image has larger grey-level differences between pixels than a blurred one, the variance is used to evaluate image sharpness: the sharper the image, the larger the variance.
  • the above three sharpness evaluation functions are used to quantitatively evaluate the three sub-aperture images generated above. The results are shown in Table 1, where M represents the number of depth samples.
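The three sharpness measures named above can be sketched as follows. This is a simplified sketch: the usual Tenengrad measure uses Sobel masks and the exact normalization used in the experiments is not given, so plain central differences stand in here.

```python
import numpy as np

def tenengrad(img):
    """Mean squared gradient magnitude; larger = sharper.
    (Central differences stand in for the Sobel masks of the
    textbook Tenengrad measure.)"""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return float(np.mean(gx**2 + gy**2))

def laplacian_measure(img):
    """Mean squared response of the 4-neighbour Laplacian; larger = sharper."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
    return float(np.mean(lap**2))

def variance_measure(img):
    """Grey-level variance; a sharp image spreads further from its mean."""
    return float(np.var(img))
```

All three return larger values for images with strong edges or grey-level spread, matching the "larger is sharper" convention used in Table 1.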
  • the depth-sampling light field reconstruction method only needs an ordinary camera that automatically collects images of different focus planes to realize computational light field imaging, and it differs considerably from angle sampling in both model and method. Because the method photographs the target scene repeatedly, it is clearly better suited to light field collection for stationary or slowly moving scenes. It reconstructs the light field from multiple shots, which is clearly different from the single-shot light field collection of a plenoptic camera.
  • the sensor of the Lytro Illum 2 used in the experiment has about 40 million pixels in total; the sensor image (light field image) obtained is 7728×5368, the microlens array of the Lytro Illum 2 is 541×434, its angular resolution is 15×15, and the number of pixels behind each microlens is 225.
  • with the Lytro Illum 2 and the Canon 5D Mark III respectively, angle sampling and depth sampling of the same scene are performed, with the focal lengths of both devices set to 105 mm.
  • Fig. 8 is a schematic comparison of sub-aperture images reconstructed by depth sampling and by angle sampling according to the present invention. From the reconstructed sub-aperture images it can be seen that the angular resolution of depth sampling can reach the level of the angular resolution of angle sampling.
  • the spatial resolution of the different-viewpoint images obtained by depth sampling is 1920×1280, the same size as the original sensor; the spatial resolution of the different-viewpoint images obtained by angle sampling is 625×433, while the original sensor resolution is 7728×5368, so it is far smaller than the sensor. That is, compared with the existing angle-sampling methods of light field reconstruction, the advantage of the depth-sampling method provided by the present invention is that its spatial resolution can reach that of the sensor, and it requires no special hardware.
  • Fig. 9 is a structural diagram of a system for light field reconstruction using depth sampling according to the present invention. As shown in Fig. 9, a system that applies depth sampling for light field reconstruction includes:
  • the depth sampling pixel value obtaining module 201 is used to obtain the depth sampling pixel value of the target image in different scenes.
  • the projection pixel value determination module 202 is configured to obtain, from the depth sampling pixel values, the projected pixel values of the same ray at the same position on different planes.
  • the four-dimensional light field reconstruction module 203 is used for reconstructing the four-dimensional light field according to the projection pixel value through the theorem of reconstructing the image by projection.
  • the depth sampling pixel value acquisition module 201 specifically includes:
  • the depth sampling pixel value obtaining unit is used to obtain the depth sampling pixel value of the target image in different scenes through a common camera.
  • the projection pixel value determination module 202 specifically includes:
  • the projection pixel value determining unit is configured to obtain, from the depth sampling pixel values, the projected pixel values $L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)$ of the same ray at the same position on different planes using the formula $I(x_m, y_m, d_m) = \iint L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)\,\mathrm{d}u\,\mathrm{d}v$
  • $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image; $(u, v)$ are the reference-plane coordinates of the ray-source direction; $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction; $d_m$ are the different image distances, $m = 1, 2, \ldots, M$
  • the four-dimensional light field reconstruction module 203 specifically includes:
  • the four-dimensional light field reconstruction unit, which is used for reconstructing the four-dimensional light field $L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$ from the projected pixel values by the projection image-reconstruction theorem, where $\alpha_m = d_m/d$.
  • $L_{rec}(x, y, u, v)$ is the four-dimensional light field; $d$ is the reference image plane.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to a method and system for light field reconstruction using depth sampling. The method includes: acquiring depth-sampled pixel values of a target image in different scenes; obtaining, from the depth-sampled pixel values, the projected pixel values of the same ray at the same position on different planes; and reconstructing a four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem. The method or system of the present invention can perform light field reconstruction rapidly and can improve the spatial resolution of imaging.

Description

Method and system for light field reconstruction using depth sampling
This application claims priority to the Chinese patent application with application number 201911292417.9, entitled "Method and system for light field reconstruction using depth sampling", filed with the China Patent Office on December 16, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of light field reconstruction, and in particular to a method and system for light field reconstruction using depth sampling.
Background Art
The process of collecting a light field is the process of obtaining the intersection points of rays with two reference planes, and the positions of the reference planes determine the type of light field acquisition device. One approach records rays by placing a reference plane on the object side, as in a camera array; the other records rays by placing a reference plane on the image side, as in a plenoptic camera. A camera array consists of multiple traditional cameras and forms a virtual projection reference plane composed of the lenses' projection centers and a virtual imaging plane composed of the CCDs (CMOS sensors). The camera array captures the radiance of rays from different viewing angles at the same point of the target scene, and the image taken by each camera can be regarded as a sample of the light field at a certain angle. A plenoptic camera places a microlens array in front of the sensor, forming two reference planes, the lens array and the CCD (CMOS); each microlens captures the angular distribution of rays at the main lens, i.e., an angular sampling of the image-side light field. Clearly, both kinds of light field acquisition device work mainly by sampling the angles of rays. Besides these two ways of directly collecting the light field, researchers have also explored synthesizing the light field from various other acquisition schemes: C. K. Liang et al. recorded the light field by sampling the sub-apertures of the main lens with multiple exposures, in a manner similar to a plenoptic camera; Liu K. et al. reconstructed the object-side light field with structured light, using the structured light to obtain the depth distribution on the image side and reconstructing the light field by combining an ordinary image with the depth distribution, which is not a direct light field acquisition method.
Both camera arrays and plenoptic cameras require dedicated equipment to collect the light field. A camera array needs tens or hundreds of traditional cameras, which means much equipment and high cost, and it is difficult to control the time-synchronization accuracy and relative-position accuracy of the cameras. A plenoptic camera is simpler to operate than a camera array and can collect the light field directly with a single exposure, but the angular resolution and spatial resolution of the light field collected in this way constrain each other, and this mutual constraint makes its spatial resolution far lower than that of a traditional camera.
Summary of the Invention
The purpose of the present invention is to provide a method and system for light field reconstruction using depth sampling, which can perform light field reconstruction rapidly and improve the spatial resolution of imaging.
To achieve the above purpose, the present invention provides the following solutions:
A method for light field reconstruction using depth sampling, including:
acquiring depth-sampled pixel values of a target image in different scenes;
obtaining, from the depth-sampled pixel values, the projected pixel values of the same ray at the same position on different planes;
reconstructing a four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem.
Optionally, acquiring the depth-sampled pixel values of the target image in different scenes specifically includes:
acquiring the depth-sampled pixel values of the target image in different scenes with an ordinary camera.
Optionally, obtaining, from the depth-sampled pixel values, the projected pixel values of the same ray at the same position on different planes specifically includes:
according to the depth-sampled pixel values, using the formula
$$I(x_m, y_m, d_m) = \iint L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)\,\mathrm{d}u\,\mathrm{d}v$$
to obtain the projected pixel values
$$L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)$$
of the same ray at the same position on different planes;
where $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image, $L_d(\cdot)$ is the projected pixel value of the same ray at the same position on different planes, $(u, v)$ are the reference-plane coordinates of the ray-source direction, $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction, $d_m$ are the different image distances, and $m$ is a positive integer, $m = 1, 2, \ldots, M$.
Optionally, reconstructing the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem specifically includes:
according to the projected pixel values, by the projection image-reconstruction theorem, using the formula
$$L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x - u),\; v + \alpha_m(y - v),\; d_m\big)$$
to reconstruct the four-dimensional light field $L_{rec}(x, y, u, v)$;
where $L_{rec}(x, y, u, v)$ is the four-dimensional light field, $\alpha_m = d_m/d$,
and $d$ is the reference image plane.
A system for light field reconstruction using depth sampling, including:
a depth-sampled pixel value acquisition module, used to acquire the depth-sampled pixel values of a target image in different scenes;
a projected pixel value determination module, used to obtain, from the depth-sampled pixel values, the projected pixel values of the same ray at the same position on different planes;
a four-dimensional light field reconstruction module, used to reconstruct the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem.
Optionally, the depth-sampled pixel value acquisition module specifically includes:
a depth-sampled pixel value acquisition unit, used to acquire the depth-sampled pixel values of the target image in different scenes with an ordinary camera.
Optionally, the projected pixel value determination module specifically includes:
a projected pixel value determination unit, used to obtain, from the depth-sampled pixel values and using the formula
$$I(x_m, y_m, d_m) = \iint L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)\,\mathrm{d}u\,\mathrm{d}v$$
the projected pixel values
$$L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)$$
of the same ray at the same position on different planes;
where $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image, $L_d(\cdot)$ is the projected pixel value of the same ray at the same position on different planes, $(u, v)$ are the reference-plane coordinates of the ray-source direction, $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction, $d_m$ are the different image distances, and $m$ is a positive integer, $m = 1, 2, \ldots, M$.
Optionally, the four-dimensional light field reconstruction module specifically includes:
a four-dimensional light field reconstruction unit, used to reconstruct the four-dimensional light field $L_{rec}(x, y, u, v)$ from the projected pixel values by the projection image-reconstruction theorem, using the formula
$$L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x - u),\; v + \alpha_m(y - v),\; d_m\big)$$
where $L_{rec}(x, y, u, v)$ is the four-dimensional light field, $\alpha_m = d_m/d$,
and $d$ is the reference image plane.
According to the specific embodiments provided by the present invention, the present invention discloses the following technical effects:
The present invention provides a method and system for light field reconstruction using depth sampling: depth sampling is performed on the target scene to obtain images of different depth planes of the target scene, and the four-dimensional light field is then recovered from the depth sampling data. The present invention can perform light field reconstruction rapidly and can improve the spatial resolution of imaging.
Brief Description of the Drawings
The present invention is further described below with reference to the drawings:
Fig. 1 is a flow chart of the method for light field reconstruction using depth sampling of the present invention;
Fig. 2 is a schematic diagram of the depth sampling of the present invention;
Fig. 3 is a schematic diagram of the two-plane representation of the light field of the present invention;
Fig. 4 is a schematic diagram of imaging on different focal planes of the present invention;
Fig. 5 shows the depth sampling data at different focus distances acquired with a Canon camera in the present invention;
Fig. 6 shows the sub-aperture images of the present invention and partial enlargements of their left-hand regions;
Fig. 7 shows sub-aperture images reconstructed from different numbers of depth samples of the present invention;
Fig. 8 is a schematic comparison of sub-aperture images reconstructed by depth sampling and by angle sampling according to the present invention;
Fig. 9 is a structural diagram of the system for light field reconstruction using depth sampling of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present invention.
To make the above purposes, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flow chart of the method for light field reconstruction using depth sampling of the present invention. As shown in Fig. 1, a method for light field reconstruction using depth sampling includes:
Step 101: acquire the depth-sampled pixel values of the target image in different scenes, which specifically includes: acquiring the depth-sampled pixel values of the target image in different scenes with an ordinary camera.
In the theoretical model of reconstructing the light field from depth sampling, two mutually parallel planes are introduced to parameterize the light field: the main-lens plane $(u, v)$ is the reference plane of the ray-source direction, and the imaging plane $(x, y)$ is the reference plane of the ray-imaging direction. Fig. 2 is a schematic diagram of the depth sampling of the present invention; Fig. 3 is a schematic diagram of the two-plane representation of the light field of the present invention; Fig. 4 is a schematic diagram of imaging on different focal planes of the present invention; in the figures, $x_m$ denotes the different image planes and $d_m$ the different image distances.
In depth sampling, suppose the depth sampling is expressed as
$$I(x_m, y_m, d_m) = \iint L_{d_m}(x_m, y_m, u, v)\,\mathrm{d}u\,\mathrm{d}v \qquad (1)$$
where $I(x, y, d)$ denotes the pixel value at $(x, y)$ on the image plane $d$.
As can be seen from Fig. 3 and Fig. 4, for the same ray, the quantities in equation (1) can be represented equivalently on either plane:
$$L_{d_m}(x_m, y_m, u, v) = L_d(x, y, u, v) \qquad (2)$$
By the triangle similarity theorem (the ray from $(u, v)$ crosses the planes at distances $d$ and $d_m$, so $\frac{x - u}{d} = \frac{x_m - u}{d_m}$), it follows that
$$x = u + \frac{(x_m - u)\,d}{d_m} \qquad (3)$$
and likewise
$$y = v + \frac{(y_m - v)\,d}{d_m} \qquad (4)$$
Then:
$$L_{d_m}(x_m, y_m, u, v) = L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)$$
Step 102: according to the depth-sampled pixel values, obtain the projected pixel values of the same ray at the same position on different planes, which specifically includes:
according to the depth-sampled pixel values, using the formula
$$I(x_m, y_m, d_m) = \iint L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)\,\mathrm{d}u\,\mathrm{d}v$$
to obtain the projected pixel values
$$L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)$$
of the same ray at the same position on different planes;
where $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image, $L_d(\cdot)$ is the projected pixel value of the same ray at the same position on different planes, $(u, v)$ are the reference-plane coordinates of the ray-source direction, $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction, $d_m$ are the different image distances, and $m$ is a positive integer, $m = 1, 2, \ldots, M$.
By equation (2), the depth sampling can be expressed as:
$$I(x_m, y_m, d_m) = \iint L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)\,\mathrm{d}u\,\mathrm{d}v$$
Writing $\alpha_m = d_m/d$ and transforming both sides of the equation gives
$$I(x_m, y_m, d_m) = \iint L_d\Big(u + \frac{x_m - u}{\alpha_m},\; v + \frac{y_m - v}{\alpha_m},\; u,\; v\Big)\,\mathrm{d}u\,\mathrm{d}v$$
where $L_d\big(u + \frac{x_m - u}{\alpha_m},\; v + \frac{y_m - v}{\alpha_m},\; u,\; v\big)$ denotes the projected pixel value of the same ray at the same position on each image plane.
Step 103: reconstruct the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem, which specifically includes:
according to the projected pixel values, by the projection image-reconstruction theorem, using the formula
$$L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x - u),\; v + \alpha_m(y - v),\; d_m\big)$$
to reconstruct the four-dimensional light field $L_{rec}(x, y, u, v)$,
where $L_{rec}(x, y, u, v)$ is the four-dimensional light field, $\alpha_m = d_m/d$,
and $d$ is the reference image plane.
According to the theorem of reconstructing an image from projections, the imaging of any point can be regarded as the integral of all the rays passing through that image point at different angles; the algorithm is described as
$$i = \frac{1}{T}\sum_{\theta=1}^{T} P_{\theta} \qquad (5)$$
where $i$ is the pixel value of the point, $P_{\theta}$ is the projection value of the ray passing through the point at a certain angle $\theta$, and $T$ is the number of projection angles. Each image in the depth sampling can likewise be regarded as a two-dimensional projection of the four-dimensional light field; the projection on each image plane is
$$I\big(u + \alpha_m(x - u),\; v + \alpha_m(y - v),\; d_m\big)$$
and the number of depth samples corresponds to the number of projection angles $T$, so the four-dimensional light field recovered from the depth samples using formula (5) can be expressed as:
$$L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x - u),\; v + \alpha_m(y - v),\; d_m\big) \qquad (6)$$
In the above formula, $\alpha_m = d_m/d$ characterizes the image-distance ratio, and $L_{rec}(x, y, u, v)$ is the reconstructed four-dimensional light field. A given $(u, v)$ determines one transmission direction of the rays, which is equivalent to fixing a virtual ordinary camera that photographs the image $(x, y)$ in that ray direction; given different $(u, v)$, images of different viewing angles are obtained. $M$ denotes the number of depth samples, and $d$ denotes the reference image plane, which can be any one of the $d_m$.
For the two light field reconstruction approaches, depth sampling and angle sampling, the effect of reconstructing the light field from depth sampling is analyzed in depth. Depth sampling can be understood as a set of images $I(x, y, d)$ focused at different depths in the target scene; it is a slice-wise sampling of the light field at different depths, which is clearly different from the common methods or devices that sample the angles of the light field with a lens array or camera array. If the depth samples are regarded as images with different focus distances, then depth sampling can be achieved with fairly simple equipment, such as an ordinary commercial camera: the focal length is fixed, and the sampling data of different depth slices are obtained by acquiring images at different focus depths. The experimental device of the present invention is a Canon 5D Mark III; in the experiment, the device is fixed at one position to obtain slice samples of the target scene at different depths, i.e., images with different focus depths.
For the target scene, four playing cards are used, each card serving as one focal plane, because they are easy to focus on and the viewpoint shift can be observed clearly after the light field is reconstructed. The depth sampling data acquired with the Canon 5D Mark III are $\{(x_1, y_1, d_1), \ldots, (x_4, y_4, d_4)\}$, four images with different focus planes in total, which just completely cover the entire experimental scene; the size of $(x, y)$ is 1920×1280. To obtain good shots, the present invention uses the camera-control software digicamControl to photograph the scene automatically from a computer; the focus depths are 0.75 m, 0.84 m, 0.96 m and 1.03 m. For the equipment used in the experiment, to minimize the influence of the depth of field on data collection, the focal length is set to 105 mm and the aperture to f/4.0; at a focus depth of 1 m the depth of field is about 10 cm, so the focus distances designed above yield satisfactory images at the different focus distances. The acquired images with different focus depths are shown in Fig. 5, which shows the depth sampling data acquired with the Canon camera at different focus distances: (a) at focus distance 0.75 m, (b) at 0.84 m, (c) at 0.96 m, and (d) at 1.03 m.
Formula (6) is used to recover the light field from the depth samples, and sub-aperture images are used to visualize it. From formula (6), given different $(u, v)$, images $(x, y)$ of different viewing angles are obtained, where $v$ denotes the vertical viewing angle and $u$ the horizontal one. The values of $(u, v)$ are set to (20, 0), (0, 0) and (-20, 0): the vertical value $v$ is kept at 0, while different horizontal values $u$ are set to observe the viewpoint shift, with (0, 0) denoting the central view. The resulting images $(x, y)$ of the different viewpoints are shown in Fig. 6, which shows the sub-aperture images and partial enlargements of their left-hand regions: (a) the sub-aperture image at (20, 0) with the partial enlargement of its left-hand region, (b) likewise at (0, 0), and (c) likewise at (-20, 0). Fig. 6 contains the three groups (a), (b), (c); the upper row shows the acquired sub-aperture images and the lower row the partial enlargements of their left-hand regions, in which the viewpoint shift is clearly visible. The $(u, v)$ values of the three groups are (20, 0), (0, 0) and (-20, 0); from these values, the vertical viewing angle is unchanged while the horizontal viewing angle moves from left to right.
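Extracting the views described above amounts to indexing the reconstructed 4D array at a fixed (u, v). A small sketch follows; the centred axis layout is an assumption, and the small offsets stand in for the (±20, 0) viewpoints used in the text.

```python
import numpy as np

def sub_aperture_view(L_rec, u, v):
    """Return the sub-aperture image (x, y) for one viewpoint (u, v).

    L_rec -- reconstructed 4D light field of shape (U, V, X, Y); the
             (u, v) axes are assumed centred, so index (U//2, V//2)
             is the central (0, 0) view.
    """
    U, V = L_rec.shape[:2]
    return L_rec[U // 2 + u, V // 2 + v]

# keeping v = 0 and sweeping u from positive to negative reproduces the
# horizontal viewpoint shift of Fig. 6; the usable offset range is
# limited by the size of the reconstructed (u, v) grid
```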
From the experimental results it can be seen that, given different $(u, v)$, sub-aperture images $(x, y)$ of different viewpoints are obtained. The question of the number of depth samples is discussed next. In the experimental scene, each playing card is chosen as one focal plane, and four cards just completely cover the entire scene; depth sampling data with three cards and with two cards, which do not completely cover the scene, are used for comparison. Fig. 7 shows the sub-aperture images reconstructed from different numbers of depth samples: (a) from 2 depth samples, (b) from 3 depth samples, and (c) from 4 depth samples; the sub-aperture images in Fig. 7 are all at $(u, v) = (0, 0)$.
Since there is no ground-truth reference image, the Tenengrad function, the Laplacian function and the variance function are selected to evaluate the sharpness of the above three groups of images. The Tenengrad and Laplacian functions are gradient-based functions that can be used to detect whether an image has clear, sharp edges; the sharper the image, the larger the value. The variance function is a measure used in probability theory to examine the dispersion of discrete data around the expectation; since a sharp image has larger grey-level differences between pixels than a blurred one, the variance is used to evaluate image sharpness, and the sharper the image, the larger the variance. The above three sharpness evaluation functions are used to quantitatively evaluate the three generated sub-aperture images; the results are shown in Table 1, where M represents the number of depth samples.
Table 1 Image sharpness evaluation results
Figure PCTCN2020133347-appb-000027
From the three sharpness evaluation functions it can be seen that when the depth samples cover the entire experimental scene, the reconstructed light field is sharper than when they do not completely cover it. This is mainly because, when the depth samples do not completely cover the scene, some part of the scene is never in sharp focus and becomes blurred, so the light field sub-aperture images reconstructed with formula (6) are necessarily less sharp than those reconstructed from depth samples that fully cover the scene.
The depth-sampling light field reconstruction method only needs an ordinary camera that automatically collects images of different focus planes to realize computational light field imaging, and it differs considerably from angle sampling in both model and method. Because the method photographs the target scene repeatedly, it is clearly better suited to light field collection for stationary or slowly moving scenes. It reconstructs the light field from multiple shots, which is clearly different from the single-shot light field collection of a plenoptic camera.
The sensor of the Lytro Illum 2 used in the experiment has about 40 million pixels in total; the sensor image (light field image) obtained is 7728×5368, the microlens array of the Lytro Illum 2 is 541×434, its angular resolution is 15×15, and the number of pixels behind each microlens is 225. With the Lytro Illum 2 and the Canon 5D Mark III respectively, angle sampling and depth sampling of the same scene are performed, with the focal lengths of both devices set to 105 mm.
Fig. 8 is a schematic comparison of sub-aperture images reconstructed by depth sampling and by angle sampling according to the present invention. From the reconstructed sub-aperture images it can be seen that the angular resolution of depth sampling can reach the level of the angular resolution of angle sampling. In terms of spatial resolution, the different-viewpoint images obtained by depth sampling are 1920×1280, the same size as the original sensor; the different-viewpoint images obtained by angle sampling are 625×433, while the original sensor resolution is 7728×5368, i.e., far smaller than the sensor. That is, compared with the existing angle-sampling methods of light field reconstruction, the advantage of the depth-sampling method provided by the present invention is that its spatial resolution can reach that of the sensor, and it requires no special hardware.
Fig. 9 is a structural diagram of the system for light field reconstruction using depth sampling of the present invention. As shown in Fig. 9, a system for light field reconstruction using depth sampling includes:
a depth-sampled pixel value acquisition module 201, used to acquire the depth-sampled pixel values of a target image in different scenes;
a projected pixel value determination module 202, used to obtain, from the depth-sampled pixel values, the projected pixel values of the same ray at the same position on different planes;
a four-dimensional light field reconstruction module 203, used to reconstruct the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem.
The depth-sampled pixel value acquisition module 201 specifically includes:
a depth-sampled pixel value acquisition unit, used to acquire the depth-sampled pixel values of the target image in different scenes with an ordinary camera.
The projected pixel value determination module 202 specifically includes:
a projected pixel value determination unit, used to obtain, from the depth-sampled pixel values and using the formula
$$I(x_m, y_m, d_m) = \iint L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)\,\mathrm{d}u\,\mathrm{d}v$$
the projected pixel values
$$L_d\Big(u + \frac{(x_m - u)\,d}{d_m},\; v + \frac{(y_m - v)\,d}{d_m},\; u,\; v\Big)$$
of the same ray at the same position on different planes;
where $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image, $L_d(\cdot)$ is the projected pixel value of the same ray at the same position on different planes, $(u, v)$ are the reference-plane coordinates of the ray-source direction, $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction, $d_m$ are the different image distances, and $m$ is a positive integer, $m = 1, 2, \ldots, M$.
The four-dimensional light field reconstruction module 203 specifically includes:
a four-dimensional light field reconstruction unit, used to reconstruct the four-dimensional light field $L_{rec}(x, y, u, v)$ from the projected pixel values by the projection image-reconstruction theorem, using the formula
$$L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x - u),\; v + \alpha_m(y - v),\; d_m\big)$$
where $L_{rec}(x, y, u, v)$ is the four-dimensional light field, $\alpha_m = d_m/d$,
and $d$ is the reference image plane.
The embodiments of the present invention have been described above in detail with reference to the drawings, but the present invention is not limited to the above embodiments; within the scope of knowledge possessed by a person of ordinary skill in the art, various changes can also be made without departing from the spirit of the present invention.

Claims (8)

  1. A method for light field reconstruction using depth sampling, characterized by comprising:
    acquiring depth-sampled pixel values of a target image in different scenes;
    obtaining, from the depth-sampled pixel values, projected pixel values of the same ray at the same position on different planes;
    reconstructing a four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem.
  2. The method for light field reconstruction using depth sampling according to claim 1, characterized in that the acquiring of the depth-sampled pixel values of the target image in different scenes specifically comprises:
    acquiring the depth-sampled pixel values of the target image in different scenes with an ordinary camera.
  3. The method for light field reconstruction using depth sampling according to claim 1, characterized in that the obtaining, from the depth-sampled pixel values, of the projected pixel values of the same ray at the same position on different planes specifically comprises:
    according to the depth-sampled pixel values, using the formula
    $I(x_m, y_m, d_m) = \iint L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)\,\mathrm{d}u\,\mathrm{d}v$
    to obtain the projected pixel values
    $L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)$
    of the same ray at the same position on different planes;
    wherein $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image, $L_d(\cdot)$ is the projected pixel value of the same ray at the same position on different planes, $(u, v)$ are the reference-plane coordinates of the ray-source direction, $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction, $d_m$ are the different image distances, and $m$ is a positive integer, $m = 1, 2, \ldots, M$.
  4. The method for light field reconstruction using depth sampling according to claim 1, characterized in that the reconstructing of the four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem specifically comprises:
    according to the projected pixel values, by the projection image-reconstruction theorem, using the formula
    $L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$
    to reconstruct the four-dimensional light field $L_{rec}(x, y, u, v)$;
    wherein $L_{rec}(x, y, u, v)$ is the four-dimensional light field, $\alpha_m = d_m/d$,
    and $d$ is the reference image plane.
  5. A system for light field reconstruction using depth sampling, characterized by comprising:
    a depth-sampled pixel value acquisition module, used to acquire depth-sampled pixel values of a target image in different scenes;
    a projected pixel value determination module, used to obtain, from the depth-sampled pixel values, projected pixel values of the same ray at the same position on different planes;
    a four-dimensional light field reconstruction module, used to reconstruct a four-dimensional light field from the projected pixel values by the projection image-reconstruction theorem.
  6. The system for light field reconstruction using depth sampling according to claim 5, characterized in that the depth-sampled pixel value acquisition module specifically comprises:
    a depth-sampled pixel value acquisition unit, used to acquire the depth-sampled pixel values of the target image in different scenes with an ordinary camera.
  7. The system for light field reconstruction using depth sampling according to claim 5, characterized in that the projected pixel value determination module specifically comprises:
    a projected pixel value determination unit, used to obtain, from the depth-sampled pixel values and using the formula
    $I(x_m, y_m, d_m) = \iint L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)\,\mathrm{d}u\,\mathrm{d}v$
    the projected pixel values
    $L_d\big(u + \frac{(x_m-u)d}{d_m},\; v + \frac{(y_m-v)d}{d_m},\; u,\; v\big)$
    of the same ray at the same position on different planes;
    wherein $I(x_m, y_m, d_m)$ is the pixel value of the depth-sampled image, $L_d(\cdot)$ is the projected pixel value of the same ray at the same position on different planes, $(u, v)$ are the reference-plane coordinates of the ray-source direction, $(x_m, y_m)$ are the reference-plane coordinates of the ray-imaging direction, $d_m$ are the different image distances, and $m$ is a positive integer, $m = 1, 2, \ldots, M$.
  8. The system for light field reconstruction using depth sampling according to claim 5, characterized in that the four-dimensional light field reconstruction module specifically comprises:
    a four-dimensional light field reconstruction unit, used to reconstruct the four-dimensional light field $L_{rec}(x, y, u, v)$ from the projected pixel values by the projection image-reconstruction theorem, using the formula
    $L_{rec}(x, y, u, v) = \frac{1}{M}\sum_{m=1}^{M} I\big(u + \alpha_m(x-u),\; v + \alpha_m(y-v),\; d_m\big)$;
    wherein $L_{rec}(x, y, u, v)$ is the four-dimensional light field, $\alpha_m = d_m/d$,
    and $d$ is the reference image plane.
PCT/CN2020/133347 2019-12-16 2020-12-02 Method and system for light field reconstruction using depth sampling WO2021121037A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020408599A AU2020408599B2 (en) 2019-12-16 2020-12-02 Light field reconstruction method and system using depth sampling

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911292417.9 2019-12-16
CN201911292417.9A CN111080774B (zh) 2019-12-16 2019-12-16 一种应用深度采样进行光场重构的方法及系统

Publications (1)

Publication Number Publication Date
WO2021121037A1 true WO2021121037A1 (zh) 2021-06-24

Family

ID=70314673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133347 WO2021121037A1 (zh) 2019-12-16 2020-12-02 一种应用深度采样进行光场重构的方法及系统

Country Status (3)

Country Link
CN (1) CN111080774B (zh)
AU (1) AU2020408599B2 (zh)
WO (1) WO2021121037A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080774B (zh) * 2019-12-16 2020-09-15 首都师范大学 一种应用深度采样进行光场重构的方法及系统
CN111610634B (zh) 2020-06-23 2022-05-27 京东方科技集团股份有限公司 一种基于四维光场的显示系统及其显示方法

Citations (4)

Publication number Priority date Publication date Assignee Title
US9672657B2 (en) * 2014-01-17 2017-06-06 Intel Corporation Layered reconstruction for defocus and motion blur
CN108074218A (zh) * 2017-12-29 2018-05-25 清华大学 基于光场采集装置的图像超分辨率方法及装置
CN110047430A (zh) * 2019-04-26 2019-07-23 京东方科技集团股份有限公司 光场数据重构方法、光场数据重构器件及光场显示装置
CN111080774A (zh) * 2019-12-16 2020-04-28 首都师范大学 一种应用深度采样进行光场重构的方法及系统

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN101562701B (zh) * 2009-03-25 2012-05-02 北京航空航天大学 一种用于光场成像的数字对焦方法及装置
US9412172B2 (en) * 2013-05-06 2016-08-09 Disney Enterprises, Inc. Sparse light field representation
CN104156916B (zh) * 2014-07-31 2017-03-29 北京航空航天大学 一种用于场景光照恢复的光场投影方法
CN104243823B (zh) * 2014-09-15 2018-02-13 北京智谷技术服务有限公司 光场采集控制方法和装置、光场采集设备
CN104463949B (zh) * 2014-10-24 2018-02-06 郑州大学 一种基于光场数字重聚焦的快速三维重建方法及其系统
CN106934110B (zh) * 2016-12-14 2021-02-26 北京信息科技大学 一种由聚焦堆栈重建光场的反投影方法和装置

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US9672657B2 (en) * 2014-01-17 2017-06-06 Intel Corporation Layered reconstruction for defocus and motion blur
CN108074218A (zh) * 2017-12-29 2018-05-25 清华大学 基于光场采集装置的图像超分辨率方法及装置
CN110047430A (zh) * 2019-04-26 2019-07-23 京东方科技集团股份有限公司 光场数据重构方法、光场数据重构器件及光场显示装置
CN111080774A (zh) * 2019-12-16 2020-04-28 首都师范大学 一种应用深度采样进行光场重构的方法及系统

Also Published As

Publication number Publication date
AU2020408599A1 (en) 2021-08-12
CN111080774B (zh) 2020-09-15
AU2020408599B2 (en) 2023-02-23
CN111080774A (zh) 2020-04-28

Similar Documents

Publication Publication Date Title
US9900510B1 (en) Motion blur for light-field images
Liang et al. Programmable aperture photography: multiplexed light field acquisition
US9843787B2 (en) Generation and use of a 3D radon image
US11169367B2 (en) Three-dimensional microscopic imaging method and system
WO2018024006A1 (zh) 一种聚焦型光场相机的渲染方法和系统
CN102436639B (zh) 一种去除图像模糊的图像采集方法和图像采集系统
JP2019532451A (ja) 視点から距離情報を取得するための装置及び方法
WO2021121037A1 (zh) 一种应用深度采样进行光场重构的方法及系统
KR102219624B1 (ko) 가상 광선 추적 방법 및 라이트 필드 동적 리포커싱 디스플레이 시스템
US20110267508A1 (en) Digital camera with coded aperture rangefinder
US10897608B2 (en) Capturing light-field images with uneven and/or incomplete angular sampling
JP2014057181A (ja) 画像処理装置、撮像装置、画像処理方法、および、画像処理プログラム
US9818199B2 (en) Method and apparatus for estimating depth of focused plenoptic data
CN206563985U (zh) 三维成像系统
CN105704371B (zh) 一种光场重聚焦方法
CN109118544A (zh) 基于透视变换的合成孔径成像方法
WO2020024079A1 (zh) 图像识别系统
JP6095266B2 (ja) 画像処理装置及びその制御方法
WO2016175044A1 (ja) 画像処理装置及び画像処理方法
JP6418770B2 (ja) 画像処理装置、撮像装置、画像処理方法、プログラム、および記憶媒体
JP6285686B2 (ja) 視差画像生成装置
CN107710741B (zh) 一种获取深度信息的方法及摄像装置
KR102052564B1 (ko) 라이트 필드 이미지의 깊이 추정 방법 및 장치
CN106934110B (zh) 一种由聚焦堆栈重建光场的反投影方法和装置
KR102253320B1 (ko) 집적영상 현미경 시스템에서의 3차원 영상 디스플레이 방법 및 이를 구현하는 집적영상 현미경 시스템

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20901005

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020408599

Country of ref document: AU

Date of ref document: 20201202

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20901005

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.02.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20901005

Country of ref document: EP

Kind code of ref document: A1