WO2018024006A1 - Rendering method and system for a focused light field camera - Google Patents

Rendering method and system for a focused light field camera Download PDF

Info

Publication number
WO2018024006A1
WO2018024006A1 PCT/CN2017/083301 CN2017083301W
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
field
radius
microlens
Prior art date
Application number
PCT/CN2017/083301
Other languages
English (en)
French (fr)
Inventor
王好谦
刘帝
刘烨斌
王兴政
方璐
张永兵
戴琼海
Original Assignee
深圳市未来媒体技术研究院
清华大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市未来媒体技术研究院, 清华大学深圳研究生院 filed Critical 深圳市未来媒体技术研究院
Publication of WO2018024006A1 publication Critical patent/WO2018024006A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/957Light-field or plenoptic cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/958Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Definitions

  • the invention belongs to the field of rendering technology of a light field camera, and in particular relates to a rendering method and system for a focused light field camera.
  • a light field camera is a device that records the direction and position information of light rays. Compared with a conventional camera, it has one additional microlens array in its structure, and the process of obtaining an image with such a device is light field imaging. As a new direction in imaging technology, light field imaging can achieve refocusing after capture through image-processing computation rather than mechanical focusing, and can also achieve 3D reconstruction and multi-target focusing; it will have much room for development in the future.
  • the light field camera can simultaneously capture the spatial (or position) and direction (or angle) information of the scene, that is, the four-dimensional information of the light field. In this way, the light field recorded after one exposure can be focused at any position by using software, that is, refocusing.
  • the first generation of handheld light field cameras appeared in 2005, but the final image resolution obtained by this device is affected by the number of microlenses, and its maximum spatial resolution is the number of microlenses, which limits the development of traditional light field cameras.
  • a second-generation light field camera also known as a focused light field camera, is proposed.
  • the main difference between the second generation and the first generation is that the spacing between the microlens array and the image sensor is adjustable, so that the microlens can be focused on the image plane of the main lens instead of the main lens plane.
  • This device can make a good trade-off between image spatial resolution and directional resolution, and can improve the spatial resolution of the image to make the image look clearer.
  • Light field camera rendering technology refers to an implementation method of image acquisition and reproduction based on image rendering technology.
  • the quality of the post-rendering processing directly affects the final image quality.
  • the image obtained by selecting an intermediate block from each of the microlens subaperture images of the original image is the process of rendering processing.
  • the size of the area block is related to the depth of the scene. It can be determined manually or by its depth information. Different sizes are used in different scenes.
  • the shape of the region block is square, whereas the microlens array of a focused light field camera usually adopts a regular hexagonal arrangement; the rendering process therefore first requires a conversion from a regular hexagonal coordinate system to an orthogonal coordinate system, which involves a large amount of computation and affects the rendering rate.
  • the present invention provides a rendering method and system for a focus type light field camera, which can avoid the conversion of the coordinate system and reduce the calculation amount of the rendering method.
  • the present invention provides a rendering method of a focused light field camera, the method comprising the following steps: S1. input a picture taken by a focused light field camera, and record the position information and center position of each microlens and sub-aperture map; S2. calculate the depth of field of the planar image that needs to be refocused, and determine the radius R according to the depth of field; S3. at the center of each sub-aperture map, take a regular hexagonal block of radius R; S4. tile and merge the regular hexagonal region blocks in the order of the sub-aperture maps to obtain a merged graph; S5. process the merged graph to obtain the final rendered graph.
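Step S3 above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`hex_mask`, `take_block`), the NumPy representation, and the zeroing of pixels outside the hexagon are our assumptions. The inside test |x| + √3|y| ≤ √3·R (together with |x| ≤ √3·R/2) is the standard point-in-regular-hexagon condition for a "longitudinal" (pointy-top) hexagon of circumradius R.

```python
import numpy as np

def hex_mask(R, pointy_top=True):
    """Boolean mask of a regular hexagon with circumradius R (in pixels).

    pointy_top=True gives the 'longitudinal' hexagon used when the
    microlenses themselves are arranged transversely.
    """
    r = int(np.ceil(R))
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    if not pointy_top:                     # 'transverse' orientation
        xs, ys = ys, xs
    return (np.abs(xs) <= np.sqrt(3) * R / 2) & \
           (np.abs(xs) + np.sqrt(3) * np.abs(ys) <= np.sqrt(3) * R)

def take_block(subaperture, cy, cx, R):
    """Cut the central regular-hexagonal block out of one sub-aperture image.

    Assumes the hexagon fits inside the sub-aperture image; pixels outside
    the hexagon are zeroed rather than stored in a packed layout.
    """
    m = hex_mask(R)
    r = m.shape[0] // 2
    block = subaperture[cy - r:cy + r + 1, cx - r:cx + r + 1].copy()
    block[~m] = 0
    return block
```

Because the mask is centered on the sub-aperture center recorded in step S1, the same mask can be reused for every microlens at a given R.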
  • the same radius R is selected at different depths of field, and the depth of field of the refocused planar image is determined manually.
  • when the planar image that needs to be refocused in step S2 is the full plane, different radii R are selected at different depths of field; the depth of field of each planar image is calculated according to the depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.
  • when the planar image that needs to be refocused is a single one, the same radius R is selected at different depths of field; the depth of the refocused planar image is calculated according to the depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.
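The depth-to-R lookup described above might be sketched as follows. The table values here are placeholders (the patent describes such a table but does not publish concrete values), and nearest-entry selection is one plausible reading of "lookup in the table":

```python
import bisect

# Hypothetical depth -> R table; the numbers are placeholders only.
DEPTHS = [0.5, 1.0, 2.0, 4.0, 8.0]   # estimated scene depth
RADII  = [6,   7,   8,   9,   10]    # block radius R in pixels

def radius_for_depth(d):
    """Pick R for an estimated depth d by nearest table entry."""
    i = bisect.bisect_left(DEPTHS, d)
    if i == 0:
        return RADII[0]
    if i == len(DEPTHS):
        return RADII[-1]
    # choose whichever of the two surrounding entries is closer to d
    return RADII[i] if DEPTHS[i] - d < d - DEPTHS[i - 1] else RADII[i - 1]
```

For full-plane refocusing each pixel's estimated depth selects its own R; for single-plane refocusing one depth (and hence one R) is used everywhere.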
  • the regular hexagon of the step S3 is a longitudinal regular hexagon.
  • the direction of the regular hexagon is adjusted according to the arrangement of the microlenses. If the arrangement of the microlenses is lateral, the direction of the regular hexagon is longitudinal; if the arrangement of the microlenses is longitudinal, the direction of the regular hexagon is transverse.
  • the edge pixels of the regular hexagonal area block of step S4 are integerized by rounding.
  • the processing of the merged graph in step S5 is: taking out the largest rectangular block among them, and discarding the extra corner information.
  • the processing of the merged graph in step S5 is: for each pixel in a hexagonal region block of the merged graph, average the values at pixels spaced (μ-R) apart in the original image to obtain a given pixel value of the output image, where μ is the size of the microlens; then take the largest rectangular block out of the merged graph and discard the excess corner information.
  • the expression for averaging the values at pixels spaced (μ-R) apart in the original image is as follows:
  • a and b are the distances from the microlens plane to the sensor plane and to the imaging plane of the main lens, respectively; p_i is the position of the microlens; f_i(x) is the position in the original image corresponding to offset number i for a point x in the output image
  • I_{f_i(x)} is the pixel value corresponding to f_i(x)
  • ω_i is its weight, and is related to f_i(x)
  • q is the offset
  • q′ is the actual offset.
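Putting the listed parameters together, one consistent reading of the averaging formula is the following (the equation images themselves are not reproduced on this page, so the normalization by the summed weights is our assumption):

```latex
I_{\mathrm{out}}(x) \;=\; \frac{\sum_i \omega_i \, I_{f_i(x)}}{\sum_i \omega_i},
\qquad f_i(x) = p_i + q', \qquad \mu = R\,(a/b), \qquad i = 0, \pm 1, \pm 2, \dots
```

Since μ is constant, the constraint μ = R(a/b) bounds |i| for a given block size R, so the sum is finite.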
  • the invention also provides a rendering system of a focused light field camera, the system comprising the following modules: a recording module, a radius module, a blocking module, a merging module, and a processing module. The recording module records the position information and center position of each microlens and sub-aperture map; the radius module calculates the depth of field of the planar image that needs to be refocused and determines the radius R according to the depth of field; the blocking module takes a regular hexagonal region block of radius R at the center position of each sub-aperture map; the merging module tiles and merges the regular hexagonal region blocks into the merged picture in the order of the sub-aperture maps; the processing module processes the merged picture to obtain the final rendered picture.
  • the invention has the beneficial effect that, by using the hexagonal arrangement of the microlens array and taking regular hexagonal region blocks on the microlens sub-aperture maps, the rendering process needs no coordinate-system transformation, which reduces the computational cost of the rendering method of a focused light field camera.
  • refocusing of a single planar image can be achieved by manually determining the depth of field of the image and selecting the same radius R at different depths of field.
  • by calculating the depth of field of each planar image via depth estimation and selecting different radii R at different depths of field, full-image sharpness can be achieved, artifacts in the image are effectively eliminated, and the image quality of the final rendered image is improved.
  • the depth of field of each planar image is calculated by depth estimation, the same radius R is selected at different depths of field, and then, for each pixel in a hexagonal region block of the merged image, the values at pixels spaced (μ-R) apart in the original image are averaged to give a given pixel value of the output image; this enables refocusing of a single planar image, that is, image refocusing at a specific depth, without artifacts.
  • FIG. 1 is a schematic flow chart of an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an optical structure of a focused light field camera according to an embodiment of the present invention, where A is a sensor, B is a microlens array, C is an image plane, D is a main lens, and E is an object.
  • FIG. 3 is a schematic view showing the actual arrangement of microlenses according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a method for taking a regular hexagonal region block for a single subaperture map according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a block of a regular hexagonal region for all subaperture diagrams according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing the arrangement of regular hexagonal area blocks according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of boundary processing according to an embodiment of the present invention.
  • at the center of each sub-aperture map, a regular hexagonal block of radius R is taken.
  • the regular hexagonal area blocks are tiled and merged in the order of the subaperture maps to obtain a merged graph;
  • a light field camera is a device for acquiring light field information, which comprises a four-dimensional optical radiation field of spatial position and direction. Each sensor unit therefore captures the light emitted by the object from a specific angle, recording not only the position information of the ray but also its direction information; this is in fact a sampling of the four-dimensional plenoptic function.
  • the traditional light field camera has the disadvantage of too low spatial resolution.
  • the focused light field camera provides a compromise between spatial resolution and angular resolution.
  • the optical structure is shown in Figure 2.
  • the microlenses are usually arranged hexagonally, as shown in Figure 3, with a fill factor of up to 90%. Compared with the orthogonal arrangement, whose maximum fill factor is 78.5%, the hexagonal arrangement has a much larger fill factor.
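The two fill factors quoted above are the classic circle-packing densities, which can be checked directly: hexagonal packing of circular lenslets gives π/(2√3) ≈ 90.7%, and square (orthogonal) packing gives π/4 ≈ 78.5%.

```python
import math

# Fill factor of circular lenslets in hexagonal vs. orthogonal packing.
hexagonal  = math.pi / (2 * math.sqrt(3))   # densest circle packing
orthogonal = math.pi / 4                    # circles on a square grid

print(f"hexagonal:  {hexagonal:.1%}")
print(f"orthogonal: {orthogonal:.1%}")
```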
  • the filling factor of the microlens refers to the ratio of the effective light-passing area of the unit element to the total area of the unit, and characterizes the ability of the element to converge and diverge light energy, usually related to the shape and arrangement of the elements.
  • the size of R corresponds to different focal planes and therefore corresponds to different depths.
  • the size of the radius R is determined according to the depth of field of the planar image that needs to be refocused. According to the different objects of the refocused plane and the difference in the depth calculation method, the following three methods can be adopted.
  • the manual determination is based on experience. The same R is used at different depths of field, so a focus map of a specific plane is obtained. This method can achieve image focus at a certain depth, but parts that are not in the focal plane may produce artifacts.
  • using the depth-estimation method, a more accurate depth value can be obtained, and the size of R is obtained by looking up the table with the depth value. This method can effectively reduce the image artifacts generated by method (1), yielding a final rendered image in which everything is sharp and artifact-free; this method achieves full focus of the image.
  • this method produces not an all-in-focus image but a focus map at a specific depth. Unlike the method in (1), it does not produce artifacts for parts that are not in the focal plane.
  • each microlens on the microlens array corresponds to an area on the sensor plane, corresponding to a subaperture map.
  • a regular hexagonal block of regions is taken for each subaperture map using the regular hexagonal arrangement of the microlenses themselves.
  • the microlens arrangement in Figure 3 is a transverse regular hexagon, so the shape of the regular hexagonal area block is a longitudinal regular hexagon. The method of taking a regular hexagonal area block for a single sub-aperture map is shown in FIG. 4, and the block-taking manner of all sub-aperture maps is shown in FIG. 5.
  • for all the longitudinal regular hexagonal blocks taken out in step S3, all the sub-aperture map center blocks are arranged and tiled at their original positions according to the position information determined in step S1; the schematic diagram is as shown in FIG. 6.
  • since the shape of a sensor pixel is square, the edge of each hexagonal block image inevitably contains non-integer pixels; these edge pixels must be integerized, which can be done by rounding. Since the opposite sides of a regular hexagon are parallel, the integerized pixels on opposite sides still complement each other well.
  • the merged image is processed as follows: as shown in FIG. 7, the merged image obtained by tiling in the above steps is an irregular figure. Since the number of microlenses is large, the sub-aperture maps formed by adjacent microlenses have many similar parts, so for the final figure the largest rectangular block that can be found is taken out. The rectangular block already contains only integer pixels; the excess corner information is then discarded. The final result is the shaded portion of Figure 7.
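Finding "the largest rectangular block" in the tiled result is an instance of the classic maximal-rectangle problem on a binary validity mask. The following is a sketch of that standard algorithm, not the patent's own procedure (which it does not specify):

```python
import numpy as np

def largest_rectangle(valid):
    """Largest all-True axis-aligned rectangle in a boolean mask.

    Classic maximal-rectangle scan: one histogram-of-heights pass per row,
    resolved with a monotonic stack. Returns (top, left, height, width).
    """
    h = np.zeros(valid.shape[1], dtype=int)
    best = (0, 0, 0, 0)
    for r, row in enumerate(valid):
        h = np.where(row, h + 1, 0)        # column heights ending at row r
        stack = []                          # indices with increasing heights
        for c in range(len(h) + 1):
            cur = h[c] if c < len(h) else 0
            while stack and h[stack[-1]] >= cur:
                top = stack.pop()
                height = h[top]
                left = stack[-1] + 1 if stack else 0
                width = c - left
                if height * width > best[2] * best[3]:
                    best = (r - height + 1, left, height, width)
            stack.append(c)
    return best
```

Here `valid` would mark the pixels covered by the tiled hexagons; the returned rectangle is the crop kept in FIG. 7, and everything outside it is the discarded corner information.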
  • the processing of the merged graph first requires, for each pixel in a hexagonal region block of the merged graph, averaging the values at pixels spaced (μ-R) apart in the original image to obtain a given pixel value of the output image; then the largest rectangular block is taken out of the merged graph, and the excess corner information is discarded.
  • for a microlens of size μ and an image block of radius R, the interval between the averaged pixel values is (μ-R), and all corresponding pixels at positions f_i(x) are averaged, where I_{f_i(x)} is the pixel value corresponding to f_i(x).
  • f_i(x) = p_i + q′, where f_i(x) is the position in the original image corresponding to offset number i for a point x in the output image;
  • i = 0, ±1, ±2, …; a and b respectively denote the distances from the microlens plane to the sensor plane and from the microlens plane to the imaging plane of the main lens. Since μ is a constant, the absolute value of i has a definite upper bound for sampling with a given image block size R.
  • R can be different for different points, but there is an integration value for each point.
  • each pixel of a microlens can be assigned a weight value, that is, the f_i(x) at different positions are weighted-averaged, finally giving an artifact-free result focused at a fixed depth.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)
  • Focusing (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a rendering method and system for a focused light field camera. The method includes: S1. input a picture taken by a focused light field camera, and record the position information and center position of each microlens and sub-aperture image; S2. calculate the depth of field of the planar image that needs to be refocused, and determine the size of the radius R according to the depth of field; S3. at the center position of each sub-aperture image, take a regular hexagonal region block of radius R; S4. tile and merge the regular hexagonal region blocks in the order of the sub-aperture images to obtain a merged image; S5. process the merged image to obtain the final rendered image. By using the hexagonal arrangement of the microlens array and taking regular hexagonal region blocks from the microlens sub-aperture images, the method needs no coordinate-system transformation during rendering and can effectively reduce the computational cost of the rendering method of a focused light field camera.

Description

Rendering Method and System for a Focused Light Field Camera

Technical Field

The present invention belongs to the technical field of light field camera rendering, and in particular relates to a rendering method and system for a focused light field camera.

Background Art

A light field camera is a device that records the direction and position information of light rays. Compared with a conventional camera, it has one additional microlens array in its structure, and the process of obtaining an image with such a device is light field imaging. As a new direction in imaging technology, light field imaging can achieve refocusing after capture through image-processing computation rather than mechanical focusing, and can also achieve 3D reconstruction and multi-target focusing; these features give it much room for development in the future.

A light field camera can simultaneously capture the spatial (position) and directional (angular) information of a scene, that is, the four-dimensional information of the light field. The light field recorded in a single exposure can thus be focused at any position in software, that is, refocused. The first generation of handheld light field cameras appeared in 2005, but the final image resolution of such a device is limited by the number of microlenses: its maximum spatial resolution equals the number of microlenses, which has limited the development of conventional light field cameras. To improve image resolution, a second-generation light field camera, also known as a focused light field camera, was proposed. The main difference between the second and the first generation is that the spacing between the microlens array and the image sensor is adjustable, so that the microlenses can be focused on the image plane of the main lens rather than on the main lens plane. Such a device achieves a good trade-off between image spatial resolution and directional resolution, and can improve the spatial resolution of the image so that the image looks clearer.

An essential step for a light field camera is the post-processing of the image, which generally uses light field rendering technology. Light field camera rendering refers to an implementation of scene acquisition and reproduction based on image rendering technology; the quality of the post-rendering processing directly affects the final imaging quality of the image. Selecting a central region block from each microlens sub-aperture image of the original image is the rendering process. The size of the region block is related to the depth of the scene; it can be determined manually or from the depth information, and different sizes are used in different scenes.

In traditional rendering methods, the region block is square, whereas the microlens array of a focused light field camera is usually arranged in a regular hexagonal pattern. The rendering process therefore first requires a conversion from a regular hexagonal coordinate system to an orthogonal coordinate system; the rendering method involves a large amount of computation, which affects the rendering rate.
Summary of the Invention

To solve the above problems, the present invention provides a rendering method and system for a focused light field camera that can avoid the coordinate-system conversion and reduce the computational cost of the rendering method.

The present invention provides a rendering method for a focused light field camera, comprising the following steps: S1. input a picture taken by a focused light field camera, and record the position information and center position of each microlens and sub-aperture image; S2. calculate the depth of field of the planar image that needs to be refocused, and determine the size of the radius R according to the depth of field; S3. at the center position of each sub-aperture image, take a regular hexagonal region block of radius R; S4. tile and merge the regular hexagonal region blocks in the order of the sub-aperture images to obtain a merged image; S5. process the merged image to obtain the final rendered image.

Preferably, when a single planar image needs to be refocused in step S2, the same radius R is selected at different depths of field, and the depth of field of the refocused planar image is determined manually.

Preferably, when the full plane needs to be refocused in step S2, different radii R are selected at different depths of field; the depth of field of each planar image is calculated by a depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.

Preferably, when a single planar image needs to be refocused in step S2, the same radius R is selected at different depths of field; the depth of the refocused planar image is calculated by a depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.

Preferably, the regular hexagon of step S3 is a longitudinal regular hexagon.

The orientation of the regular hexagon is adjusted according to the arrangement of the microlenses: if the microlenses are arranged transversely, the hexagon is oriented longitudinally; if the microlenses are arranged longitudinally, the hexagon is oriented transversely.

Preferably, the edge pixels of the regular hexagonal region blocks of step S4 are integerized by rounding.

Preferably, the processing of the merged image in step S5 is: take out the largest rectangular block and discard the excess corner information.

Preferably, the processing of the merged image in step S5 is: for each pixel in a hexagonal region block of the merged image, average the values at pixels spaced (μ-R) apart in the original image to obtain a given pixel value of the output image, where μ is the size of the microlens; then take the largest rectangular block out of the merged image and discard the excess corner information. Further preferably, the formula for averaging the values at pixels spaced (μ-R) apart in the original image is as follows:
Figure PCTCN2017083301-appb-000001
where f_i(x) = p_i + q′
Figure PCTCN2017083301-appb-000002
Figure PCTCN2017083301-appb-000003
μ=R(a/b)
Figure PCTCN2017083301-appb-000004
i = 0, ±1, ±2, …; the absolute value of i has a definite upper bound;

a and b are respectively the distances from the microlens plane to the sensor plane and to the imaging plane of the main lens; p_i is the microlens position; f_i(x) is the position in the original image corresponding to offset number i for a point x in the output image; I_{f_i(x)} is the pixel value at f_i(x); ω_i is its weight, related to f_i(x); q is the offset, and q′ is the actual offset.

The present invention also provides a rendering system for a focused light field camera, comprising the following modules: a recording module, a radius module, a blocking module, a merging module, and a processing module. The recording module records the position information and center position of each microlens and sub-aperture image; the radius module calculates the depth of field of the planar image that needs to be refocused and determines the size of the radius R according to the depth of field; the blocking module takes a regular hexagonal region block of radius R at the center position of each sub-aperture image; the merging module tiles and merges the regular hexagonal region blocks in the order of the sub-aperture images to obtain a merged image; the processing module processes the merged image to obtain the final rendered image.

The beneficial effects of the present invention are: by using the hexagonal arrangement of the microlens array and taking regular hexagonal region blocks from the microlens sub-aperture images, no coordinate-system transformation is needed during rendering, which reduces the computational cost of the rendering method of a focused light field camera.

Preferred embodiments of the present invention also have the following beneficial effects. By manually determining the depth of field of the image and selecting the same radius R at different depths of field, refocusing of a single planar image can be achieved. By calculating the depth of field of each planar image via depth estimation and selecting different radii R at different depths of field, full-image sharpness can be achieved, artifacts in the image are effectively eliminated, and the image quality of the final rendered image is improved. By calculating the depth of field of each planar image via depth estimation, selecting the same radius R at different depths of field, and then, for each pixel in a hexagonal region block of the merged image, averaging the values at pixels spaced (μ-R) apart in the original image to obtain a given pixel value of the output image, refocusing of a single planar image, that is, at a specific depth, can be achieved without artifacts. By integerizing the edge pixels of the regular hexagonal region blocks through rounding, the pixels on opposite sides complement each other well during tiling and merging, reducing aberrations in the final rendered image.
Brief Description of the Drawings

FIG. 1 is a schematic flow chart of an embodiment of the present invention.

FIG. 2 is a schematic diagram of the optical structure of a focused light field camera according to an embodiment of the present invention, where A is the sensor, B is the microlens array, C is the image plane, D is the main lens, and E is the object.

FIG. 3 is a schematic diagram of the actual arrangement of the microlenses according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of taking a regular hexagonal region block from a single sub-aperture image according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of taking regular hexagonal region blocks from all sub-aperture images according to an embodiment of the present invention.

FIG. 6 is a schematic diagram of the arrangement of regular hexagonal region blocks according to an embodiment of the present invention.

FIG. 7 is a schematic diagram of boundary processing according to an embodiment of the present invention.

Detailed Description

Embodiments of the present invention are further described below with reference to the drawings. The specific process is as follows, and its flow chart is shown in FIG. 1.

S1. Input a picture taken by a focused light field camera, and record the position information and center position of each microlens and sub-aperture image.

S2. Calculate the depth of field of the planar image that needs to be refocused, and determine the size of the radius R according to the depth of field.

S3. At the center position of each sub-aperture image, take a regular hexagonal region block of radius R.

S4. Tile and merge the regular hexagonal region blocks in the order of the sub-aperture images to obtain a merged image.

S5. Process the merged image to obtain the final rendered image.
A light field camera is a device for acquiring light field information, which comprises a four-dimensional optical radiation field of spatial position and direction. Each sensor unit therefore captures the light emitted by the object from a specific angle, recording not only the position information of the ray but also its direction information; this is in fact a sampling of the four-dimensional plenoptic function. However, a conventional light field camera suffers from too low a spatial resolution, whereas a focused light field camera offers a compromise between spatial resolution and angular resolution; a schematic of its optical structure is shown in FIG. 2.

The microlenses are usually arranged hexagonally, as shown in FIG. 3, with a fill factor of up to 90%. Compared with the orthogonal arrangement, whose maximum fill factor is 78.5%, the hexagonal arrangement has a much larger fill factor. The fill factor of a microlens is the ratio of the effective light-passing area of a unit element to the total area of the unit; it characterizes the element's ability to converge and diverge light energy, and is usually related to the shape and arrangement of the elements.

For the region blocks, the size of R corresponds to different focal planes and therefore to different depths. The radius R is determined according to the depth of field of the planar image that needs to be refocused. Depending on the object of the refocused plane and on the depth-calculation method, the following three approaches can be adopted.

(1) When a single planar image needs to be refocused, the same radius R is selected at different depths of field; the depth of field of the refocused planar image is determined manually.

Manual determination is based on experience: the same R is used at different depths of field, so a focus map of a specific plane is obtained. This method can achieve image focus at a certain depth, but parts that are not in the focal plane may produce artifacts.

(2) When the full plane is to be refocused, different radii R are selected at different depths of field; the depth of field of each planar image is calculated by a depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.

Using the depth-estimation method, a more accurate depth value can be obtained, and the size of R is obtained by looking up the table with the depth value. This method can effectively reduce the image artifacts produced by method (1), yielding a final rendered image in which everything is sharp and artifact-free; this method achieves full focus of the image.

(3) When a single planar image needs to be refocused, the same radius R is selected at different depths of field; the depth of the refocused planar image is calculated by a depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.

This method produces not an all-in-focus image but a focus map at a specific depth; unlike method (1), it does not produce artifacts for parts that are not in the focal plane.

In the optical structure shown in FIG. 2, each microlens of the microlens array corresponds to a region on the sensor plane, that is, to one sub-aperture image. Using the regular hexagonal arrangement of the microlenses themselves, a regular hexagonal region block is taken from each sub-aperture image. Note that the microlens arrangement in FIG. 3 is a transverse regular hexagon, so the shape of the taken region block is a longitudinal regular hexagon. The method of taking a regular hexagonal region block from a single sub-aperture image is shown in FIG. 4, and the block-taking for all sub-aperture images is shown in FIG. 5.

For all the longitudinal regular hexagonal blocks taken out in step S3, all the sub-aperture-image center blocks are arranged and tiled at their original positions according to the position information determined in step S1, as shown schematically in FIG. 6. Since sensor pixels are square, the edge of each hexagonal block inevitably contains non-integer pixels; these edge pixels must be integerized, which can be done here by rounding. Because the opposite sides of a regular hexagon are parallel, the integerized pixels on opposite sides still complement each other well.

For methods (1) and (2) above, the merged image is processed as follows. As shown in FIG. 7, the merged image obtained by tiling in the above steps is an irregular figure. Since the number of microlenses is large, the sub-aperture images formed by adjacent microlenses have many similar parts, so for the final figure the largest rectangular block that can be found is taken out. The rectangular block already contains only integer pixels; the excess corner information is then discarded. The final result is the shaded part of FIG. 7.

For method (3) above, the processing of the merged image first requires, for each pixel in a hexagonal region block of the merged image, averaging the values at pixels spaced (μ-R) apart in the original image to obtain a given pixel value of the output image; the largest rectangular block is then taken out of the merged image, and the excess corner information is discarded.
For a microlens of size μ and an image block of R, the interval between the averaged pixel values is (μ-R), and all corresponding pixels at positions f_i(x) are averaged:
Figure PCTCN2017083301-appb-000005
where,
Figure PCTCN2017083301-appb-000006
is the pixel value at f_i(x).
f_i(x) = p_i + q′, where f_i(x) is the position in the original image corresponding to offset number i for a point x in the output image;
Figure PCTCN2017083301-appb-000007
p_i is the microlens position;
Figure PCTCN2017083301-appb-000008
μ=R(a/b)
Here i = 0, ±1, ±2, …, and a, b respectively denote the distances from the microlens plane to the sensor plane and from the microlens plane to the imaging plane of the main lens. Since μ is a constant, the absolute value of i has a definite upper bound for sampling with a given image block size R,
Figure PCTCN2017083301-appb-000009
R may be different for different points, but there is an integration value for each point.

The contributions of different viewing angles can be expressed by weights: each pixel of a microlens can be assigned a weight value, that is, the f_i(x) at different positions are weighted-averaged, finally giving an artifact-free result focused at a fixed depth.
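A minimal 1-D sketch of this weighted average follows, under stated assumptions: uniform default weights ω_i, microlens centers p_i supplied as a dict indexed by the offset number i, and the actual offset q′ and the bound on |i| taken as given (in the patent the bound follows from μ = R(a/b)):

```python
import numpy as np

def refocus_point(raw, centers, q_prime, i_max, weights=None):
    """Weighted average of the samples at f_i = p_i + q' over |i| <= i_max.

    raw      : 1-D array standing in for one raw sensor line
    centers  : dict {offset number i: microlens center p_i}
    q_prime  : actual offset q' of the point inside its microlens image
    i_max    : upper bound on |i| implied by mu = R*(a/b)
    weights  : optional dict of per-view weights omega_i (uniform if None)
    """
    idx = range(-i_max, i_max + 1)
    if weights is None:
        weights = {i: 1.0 for i in idx}
    num = sum(weights[i] * raw[centers[i] + q_prime] for i in idx)
    den = sum(weights[i] for i in idx)
    return num / den
```

With i_max = 0 this degenerates to plain nearest-view rendering; larger i_max blends the contributions of neighboring microlenses that see the same scene point.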

Claims (10)

  1. A rendering method for a focused light field camera, characterized by comprising the following steps:
    S1. input a picture taken by a focused light field camera, and record the position information and center position of each microlens and sub-aperture image;
    S2. calculate the depth of field of the planar image that needs to be refocused, and determine the size of the radius R according to the depth of field;
    S3. at the center position of each sub-aperture image, take a regular hexagonal region block of radius R;
    S4. tile and merge the regular hexagonal region blocks in the order of the sub-aperture images to obtain a merged image;
    S5. process the merged image to obtain the final rendered image.
  2. The method according to claim 1, wherein when a single planar image needs to be refocused in step S2, the same radius R is selected at different depths of field, and the depth of field of the refocused planar image is determined manually.
  3. The method according to claim 1, wherein when the full plane needs to be refocused in step S2, different radii R are selected at different depths of field; the depth of field of each planar image is calculated by a depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.
  4. The method according to claim 1, wherein when a single planar image needs to be refocused in step S2, the same radius R is selected at different depths of field; the depth of the refocused planar image is calculated by a depth-estimation method, and the size of R is then determined by lookup in a table relating depth values to R.
  5. The method according to claim 1, wherein the regular hexagon of step S3 is a longitudinal regular hexagon.
  6. The method according to claim 1, wherein the edge pixels of the regular hexagonal region blocks of step S4 are integerized by rounding.
  7. The method according to claim 2 or 3, wherein the processing of the merged image in step S5 is: taking out the largest rectangular block and discarding the excess corner information.
  8. The method according to claim 4, wherein the processing of the merged image in step S5 is: for each pixel in a hexagonal region block of the merged image, averaging the values at pixels spaced (μ-R) apart in the original image to obtain a given pixel value of the output image, where μ is the size of the microlens; then taking the largest rectangular block out of the merged image and discarding the excess corner information.
  9. The method according to claim 8, wherein the formula in step S5 for averaging the values at pixels spaced (μ-R) apart in the original image is as follows:
    Figure PCTCN2017083301-appb-100001
    where,
    f_i(x) = p_i + q′
    Figure PCTCN2017083301-appb-100002
    Figure PCTCN2017083301-appb-100003
    μ=R(a/b)
    Figure PCTCN2017083301-appb-100004
    i = 0, ±1, ±2, …; the absolute value of i has a definite upper bound;
    a and b are respectively the distances from the microlens plane to the sensor plane and to the imaging plane of the main lens; p_i is the microlens position; f_i(x) is the position in the original image corresponding to offset number i for a point x in the output image,
    Figure PCTCN2017083301-appb-100005
    is the pixel value at f_i(x); ω_i is its weight, related to f_i(x); q is the offset, and q′ is the actual offset.
  10. A rendering system for a focused light field camera, characterized by comprising the following modules: a recording module, a radius module, a blocking module, a merging module, and a processing module; the recording module records the position information and center position of each microlens and sub-aperture image; the radius module calculates the depth of field of the planar image that needs to be refocused and determines the size of the radius R according to the depth of field; the blocking module takes a regular hexagonal region block of radius R at the center position of each sub-aperture image; the merging module tiles and merges the regular hexagonal region blocks in the order of the sub-aperture images to obtain a merged image; the processing module processes the merged image to obtain the final rendered image.
PCT/CN2017/083301 2016-08-04 2017-05-05 Rendering method and system for a focused light field camera WO2018024006A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610632949.2 2016-08-04
CN201610632949.2A CN106303228B (zh) 2016-08-04 2016-08-04 Rendering method and system for a focused light field camera

Publications (1)

Publication Number Publication Date
WO2018024006A1 true WO2018024006A1 (zh) 2018-02-08

Family

ID=57665356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/083301 WO2018024006A1 (zh) 2016-08-04 2017-05-05 Rendering method and system for a focused light field camera

Country Status (2)

Country Link
CN (1) CN106303228B (zh)
WO (1) WO2018024006A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325218A (zh) * 2020-01-21 2020-06-23 西安理工大学 Hog feature detection and matching method based on light field images
CN111369443A (zh) * 2020-03-19 2020-07-03 西安理工大学 Cross-scale zero-shot learning super-resolution method for light fields
CN111679337A (zh) * 2019-10-15 2020-09-18 上海大学 Scattering background suppression method in an underwater active laser scanning imaging system
CN112686829A (zh) * 2021-01-11 2021-04-20 太原科技大学 4D light field all-in-focus image acquisition method based on angular information
CN112816493A (zh) * 2020-05-15 2021-05-18 奕目(上海)科技有限公司 Chip wire-bonding defect detection method and device
US11087498B2 (en) 2017-02-01 2021-08-10 Omron Corporation Image processing system, optical sensor, and learning apparatus with irregular lens array
CN115037880A (zh) * 2022-07-13 2022-09-09 山西工程职业学院 Rapid focusing method for an airborne camera

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN106303228B (zh) * 2016-08-04 2019-09-13 深圳市未来媒体技术研究院 Rendering method and system for a focused light field camera
US10643044B2 (en) * 2016-10-31 2020-05-05 Ncr Corporation Variable depth of field scanning devices and methods
CN107870035B (zh) * 2017-08-18 2019-11-05 黄爱霞 Multifunctional oil-gas vehicle verification platform
CN107527096B (zh) * 2017-08-18 2018-08-28 余佩佩 Method for verifying an oil-gas vehicle
CN107360373B (zh) * 2017-08-24 2018-04-27 浙江镇石物流有限公司 Oil-gas collection platform for oil-loading vehicles
CN107909578A (zh) * 2017-10-30 2018-04-13 上海理工大学 Light field image refocusing method based on a hexagonal stitching algorithm
CN108093237A (zh) * 2017-12-05 2018-05-29 西北工业大学 High-spatial-resolution light field acquisition device and image generation method
CN108337434B (zh) * 2018-03-27 2020-05-22 中国人民解放军国防科技大学 Out-of-focus blur refocusing method for light field array cameras
CN110009693B (zh) * 2019-04-01 2020-12-11 清华大学深圳研究生院 Fast blind calibration method for a light field camera
CN111127379B (zh) * 2019-12-25 2023-04-25 清华大学深圳国际研究生院 Rendering method for light field camera 2.0 and electronic device
CN112464727A (zh) * 2020-11-03 2021-03-09 电子科技大学 Adaptive face recognition method based on a light field camera

Citations (3)

Publication number Priority date Publication date Assignee Title
US8345144B1 (en) * 2009-07-15 2013-01-01 Adobe Systems Incorporated Methods and apparatus for rich image capture with focused plenoptic cameras
CN105704371A (zh) * 2016-01-25 2016-06-22 深圳市未来媒体技术研究院 Light field refocusing method
CN106303228A (zh) * 2016-08-04 2017-01-04 深圳市未来媒体技术研究院 Rendering method and system for a focused light field camera

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JPH0363626A (ja) * 1989-07-31 1991-03-19 Sharp Corp Projection-type color liquid crystal display device
JPH1039107A (ja) * 1996-07-25 1998-02-13 Idec Izumi Corp Lens array and display device
CN103439090B (zh) * 2013-09-01 2015-11-18 中国科学院光电技术研究所 Data sampling path planning method for sub-aperture stitching testing
CN103841327B (zh) * 2014-02-26 2017-04-26 中国科学院自动化研究所 Four-dimensional light field decoding preprocessing method based on raw images
CN104469183B (zh) * 2014-12-02 2015-10-28 东南大学 Light field capture and post-processing method for an X-ray scintillator imaging system

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US8345144B1 (en) * 2009-07-15 2013-01-01 Adobe Systems Incorporated Methods and apparatus for rich image capture with focused plenoptic cameras
CN105704371A (zh) * 2016-01-25 2016-06-22 深圳市未来媒体技术研究院 Light field refocusing method
CN106303228A (zh) * 2016-08-04 2017-01-04 深圳市未来媒体技术研究院 Rendering method and system for a focused light field camera

Non-Patent Citations (2)

Title
YAN, YUJIE: "Research on Digital Light Field Photography Based on Microlens Array", CHINA MASTER'S THESES FULL-TEXT DATABASE, 15 March 2016 (2016-03-15), pages 1 - 54 *
ZHANG, CHI ET AL.: "Light Field Photography and Its Application in Computer Vision", JOURNAL OF IMAGE AND GRAPHICS, vol. 21, no. 3, 31 March 2016 (2016-03-31), pages 263 - 278 *

Cited By (11)

Publication number Priority date Publication date Assignee Title
US11087498B2 (en) 2017-02-01 2021-08-10 Omron Corporation Image processing system, optical sensor, and learning apparatus with irregular lens array
CN111679337A (zh) * 2019-10-15 2020-09-18 上海大学 Scattering background suppression method in an underwater active laser scanning imaging system
CN111679337B (zh) * 2019-10-15 2022-06-10 上海大学 Scattering background suppression method in an underwater active laser scanning imaging system
CN111325218A (zh) * 2020-01-21 2020-06-23 西安理工大学 Hog feature detection and matching method based on light field images
CN111325218B (zh) * 2020-01-21 2023-04-18 西安理工大学 Hog feature detection and matching method based on light field images
CN111369443A (zh) * 2020-03-19 2020-07-03 西安理工大学 Cross-scale zero-shot learning super-resolution method for light fields
CN111369443B (zh) * 2020-03-19 2023-04-28 浙江昕微电子科技有限公司 Cross-scale zero-shot learning super-resolution method for light fields
CN112816493A (zh) * 2020-05-15 2021-05-18 奕目(上海)科技有限公司 Chip wire-bonding defect detection method and device
CN112686829A (zh) * 2021-01-11 2021-04-20 太原科技大学 4D light field all-in-focus image acquisition method based on angular information
CN112686829B (zh) * 2021-01-11 2024-03-26 太原科技大学 4D light field all-in-focus image acquisition method based on angular information
CN115037880A (zh) * 2022-07-13 2022-09-09 山西工程职业学院 Rapid focusing method for an airborne camera

Also Published As

Publication number Publication date
CN106303228A (zh) 2017-01-04
CN106303228B (zh) 2019-09-13

Similar Documents

Publication Publication Date Title
WO2018024006A1 (zh) 一种聚焦型光场相机的渲染方法和系统
EP3516626B1 (en) Device and method for obtaining distance information from views
TWI510086B (zh) 數位重對焦方法
US9063345B2 (en) Super light-field lens with doublet lenslet array element
JP5224124B2 (ja) 撮像装置
US9063323B2 (en) Super light-field lens and image processing methods
CN108337434B (zh) 一种针对光场阵列相机的焦外虚化重聚焦方法
CN110120071B (zh) 一种面向光场图像的深度估计方法
US9818199B2 (en) Method and apparatus for estimating depth of focused plenoptic data
KR20170005009A (ko) 3d 라돈 이미지의 생성 및 사용
WO2021093635A1 (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN107615747A (zh) 图像处理设备、摄像设备、图像处理方法和存储介质
CN110662014B (zh) 一种光场相机四维数据大景深三维显示的方法
US10230911B1 (en) Preview generation for plenoptic imaging systems
CN108805921A (zh) 图像获取系统及方法
JP2016208075A (ja) 画像出力装置およびその制御方法、撮像装置、プログラム
US10110869B2 (en) Real-time color preview generation for plenoptic imaging systems
CN111127379B (zh) 光场相机2.0的渲染方法及电子设备
KR102253320B1 (ko) 집적영상 현미경 시스템에서의 3차원 영상 디스플레이 방법 및 이를 구현하는 집적영상 현미경 시스템
WO2020244273A1 (zh) 双摄像机三维立体成像系统和处理方法
CN110312123B (zh) 利用彩色图像和深度图像的集成成像显示内容生成方法
CN112866554B (zh) 对焦方法和装置、电子设备、计算机可读存储介质
AU2011213803A1 (en) Super light-field lens with focus control and non-spherical lenslet arrays
US9197799B2 (en) Super light field lens with focus control and non spherical lenslet arrays
CN115514877B (zh) 图像处理装置和降低噪声的方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17836190

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17836190

Country of ref document: EP

Kind code of ref document: A1