CN109447930B - Wavelet Domain Light Field All-Focus Image Generation Algorithm - Google Patents


Info

Publication number: CN109447930B (application CN201811259275.1A)
Authority: CN (China)
Legal status: Active
Other versions: CN109447930A; other languages: Chinese (zh)
Inventors: 武迎春, 谢颖贤, 李素月, 赵贤凌, 王安红
Assignee (original and current): Taiyuan University of Science and Technology
Application filed by Taiyuan University of Science and Technology


Classifications

    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/0002 — Inspection of images, e.g. flaw detection
    • G06T2207/10052 — Images from lightfield camera
    • G06T2207/20221 — Image fusion; Image merging
    • G06T2207/30168 — Image quality inspection


Abstract

The wavelet domain light field all-focus image generation algorithm of the present invention belongs to the field of all-focus image fusion. The invention effectively avoids the blocking artifacts of traditional spatial-domain light field image fusion algorithms and obtains a higher-quality light field all-focus image. The 4D light field data captured by a microlens-array light field camera are spatially transformed and projected to obtain the multi-focus images used for all-focus fusion; wavelet decomposition of each multi-focus image frame extracts the high- and low-frequency sub-image sets; and a region-balanced Laplacian operator and a pixel visibility function are proposed to construct the high- and low-frequency wavelet coefficients of the fused image, respectively, realizing the fusion. Their performance exceeds that of traditional regional sharpness evaluation functions. Experiments verify the correctness and effectiveness of the proposed method: fused all-focus images computed from raw Lytro light field camera data show better visual quality than those of traditional image fusion algorithms, and the objective image metrics are also improved.

Description

Wavelet Domain Light Field All-Focus Image Generation Algorithm

Technical Field

The invention belongs to the field of all-focus image fusion, and in particular relates to a wavelet domain light field all-focus image generation algorithm.

Background Art

With the rise of computational photography as a new discipline and the development of light field imaging theory, light field cameras have become a focus of attention in many fields at home and abroad over the past decade. Compared with a traditional camera, a microlens light field camera inserts a microlens array behind the main lens and simultaneously records the position and direction of spatial light rays. Recording multi-dimensional light field information facilitates the later image processing and applications of light field cameras, such as digital refocusing, all-focus image generation, and depth computation. The ability to compute a refocused image at any spatial depth from a single exposure is the most prominent technical highlight of light field cameras and the reason for their widespread attention. On this basis, the acquisition of high-quality light field texture images and the computation of high-precision depth information have been studied in depth. Free from the traditional camera's need for multiple focused exposures to obtain multi-focus images, all-focus image fusion based on light field digital refocusing has become an important branch application of light field cameras; it is also of great significance for the super-resolution reconstruction of texture and depth images and the generation of light field video files.

At present, all-focus fusion of traditional images is mainly divided into spatial-domain and transform-domain methods. Spatial-domain methods evaluate sharpness per pixel or per block and extract the best-quality pixels from the different images to form the all-focus image; they are fast but suffer from blocking artifacts. Transform-domain methods decompose the image into sub-images at different resolution layers or in different frequency bands and build the fused image by evaluating and recombining these layers or sub-images, which effectively avoids blocking artifacts. As a common transform-domain method, the wavelet transform decomposes the images to be fused into a series of frequency channels and uses the resulting pyramid structure to construct high- and low-frequency sub-images; the high- and low-frequency sub-images are fused separately, and the inverse wavelet transform yields the all-focus image. The quality of a wavelet-fused image depends on the fusion rules chosen for the high- and low-frequency sub-images: the low-frequency sub-images are generally fused by averaging, while the high-frequency sub-images are commonly evaluated with the Sobel, Prewitt, or Laplacian operators to establish the fusion rule.

The second derivatives of the traditional Laplacian operator in the x and y directions are very likely to take opposite signs, and existing Laplacian-based algorithms have poor noise resistance; microlens calibration errors introduce local noise into the refocused images. In addition, the existing weighted-average fusion of the low-frequency signals reduces the contrast of the fused image and loses some useful information from the original images.

Summary of the Invention

Aimed at the problems that images captured by existing consumer-grade light field cameras have low contrast, that the multi-focus image sets obtained by digital refocusing have limited resolution, and that local noise is caused by calibration errors, the present invention provides a wavelet domain light field all-focus image generation algorithm. The algorithm generates the light field all-focus image by establishing a region-balanced Laplacian operator as the sharpness evaluation function for the high-frequency coefficients and a pixel visibility function for the low-frequency coefficients; it effectively converts raw light field data into an all-focus image, and the image fusion quality is improved over traditional algorithms.

To solve the above technical problems, the technical solution adopted by the present invention is a wavelet domain light field all-focus image generation algorithm implemented in the following steps:

Step 1): Decode the raw light field image to obtain the 4D light field, select different αn (n = 1, 2, 3, …), and use digital refocusing to obtain refocused images Eαn·F(x′, y′) at different spatial depths.

Step 2) Compute the wavelet high- and low-frequency sub-images of each refocused image frame.

Step 3) For the high- and low-frequency sub-images, respectively adopt the region-balanced Laplacian (BL) operator and the pixel visibility (PV) function as the sharpness evaluation indices for image fusion, realizing the fusion of the high- and low-frequency coefficients;

Step 4) The fused high- and low-frequency coefficients are subjected to the inverse wavelet transform to obtain the fused all-focus image.

Further, the BL operator adopted in step 3) is the region-balanced Laplacian operator, expressed as follows:

(region-balanced Laplacian formula — image not reproduced)

where S×T is the size of the balancing region, with S and T restricted to odd values; s and t are the second-difference step lengths in the horizontal and vertical directions; and the weight factor (image not reproduced) grows as a point approaches the center, so nearer points contribute more to the Laplacian value and farther points contribute less.

Further, the PV function in step 3) is the pixel visibility function, expressed as follows:

(pixel visibility formula — image not reproduced)

where S×T is a rectangular neighborhood centered on the current pixel, with S and T restricted to odd values; s and t are the scanning step lengths in the horizontal and vertical directions within the neighborhood; and the mean term (image not reproduced) is the average gray value of the pixels in the S×T region.

Further, the low-frequency and the high-frequency coefficients share the same fusion rule. Taking the fusion of the high-frequency coefficients as an example, the rule is as follows:

(high-frequency fusion rule — image not reproduced)

where the N high-frequency sub-images (image not reproduced; n = 1, 2, 3, …, N) are those obtained by wavelet decomposition of the refocused images at the different spatial depths, N being the number of refocused frames participating in the all-focus fusion; D(BLαn(i, j)) denotes the difference between the balanced Laplacian values at corresponding points of any two high-frequency sub-images; max[·] and min[·] take the maximum and minimum; and HH is a user-defined threshold (HH is set to 0.1, because a difference below 0.1 means the two region-balanced Laplacian values differ so little that the difference is negligible). When the minimum difference exceeds the threshold, the high-frequency coefficient of the frame with the largest balanced Laplacian energy among the N frames is taken as the fused coefficient; when the difference is below the threshold, the fused coefficient is determined by multiplying the high-frequency coefficients of the multiple frames by a weight factor (image not reproduced).

The present invention uses the wavelet transform to fuse images. First, the 4D light field is decoded and a digital refocusing algorithm produces multi-focus images at different depths; wavelet decomposition and pyramid reconstruction of each multi-focus image set then build the high- and low-frequency sub-image sets; finally, the proposed region-balanced Laplacian operator and pixel visibility function construct the high- and low-frequency wavelet coefficients of the fused image, respectively, to realize the fusion. The algorithm effectively converts raw light field data into an all-focus image, avoids the blocking artifacts of traditional spatial-domain fusion algorithms, and obtains a higher-quality light field all-focus image, with fusion quality improved over traditional algorithms.

Description of the Drawings

The present invention is described in further detail below with reference to the accompanying drawings.

Figure 1 is a flow chart of the algorithm of the present invention.

Figure 2 shows the two-plane parameterization model of the light field.

Figure 3 illustrates the principle of digital refocusing in a light field camera.

Figure 4 is a schematic diagram of the BL operator.

Figure 5 illustrates the fusion process for the Leaves sample image.

Figure 6 compares different fusion algorithms on the Flower sample image.

Figure 7 compares different fusion algorithms on the Forest sample image.

Figure 8 compares different fusion algorithms on the Zither sample image.

Detailed Description of Embodiments

To make the objects, features, and advantages of the present invention more clearly understood, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.

As shown in Figure 1, the specific flow of the algorithm is as follows:

Step 1) The raw light field image is decoded to obtain the 4D light field; different αn (n = 1, 2, 3, …) are selected, and digital refocusing yields refocused images Eαn·F(x′, y′) at different spatial depths.

Step 2) The wavelet high- and low-frequency sub-images of each refocused image frame are computed.

Step 3) The high- and low-frequency sub-images respectively use the BL operator and the PV function as the sharpness evaluation indices for image fusion, realizing the fusion of the high- and low-frequency coefficients;

Step 4) Finally, the fused all-focus image is obtained by the inverse wavelet transform.

The specific process of step 1) is as follows. According to the two-plane parameterization model of the light field shown in Figure 2, any ray in space can be determined by its intersections with the two planes. Let the main lens plane of the light field camera be the (u, v) plane and the sensor plane the (x, y) plane, and let the 4D light field recorded by the camera be LF(x, y, u, v). From the classical radiometry formula, the integral image on the focal plane of the plenoptic camera is:

EF(x, y) = (1/F²)∬ LF(x, y, u, v) du dv (1)

where F is the distance between the main lens plane and the focal plane, and X×Y×U×V denotes the size of the 4D light field matrix LF(x, y, u, v). If the image plane is moved from F to F′, the new 4D light field matrix is denoted LF′(x′, y′, u′, v′), and the refocused image on the camera focal plane is:

EF′(x′, y′) = (1/F′²)∬ LF′(x′, y′, u′, v′) du′ dv′ (2)

Let F′ = αn·F. For convenience of illustration, a slice of the 4D space is taken to obtain the geometric relations between the coordinates, as shown in Figure 3. By the similar-triangle principle, the coordinates of the new and the original light field satisfy:

x′=u+(x-u)·αn=αn·x+(1-αn)·u (3)x'=u+(xu)·α nn ·x+(1-α n )·u (3)

u′ = u (4)

Similarly:

y′=v+(y-v)·αn=αn·y+(1-αn)·v (5)y′=v+(yv)·α nn ·y+(1-α n )·v (5)

v′ = v (6)

Formulas (3)–(6) can be written in matrix form:

[x′, y′, u′, v′]ᵀ = φαn·[x, y, u, v]ᵀ (7)

where [x′, y′, u′, v′]ᵀ denotes the transpose of the row vector [x′, y′, u′, v′], and φαn denotes the coordinate transformation matrix, of the following form:

φαn = [ αn   0    1−αn  0
        0    αn   0     1−αn
        0    0    1     0
        0    0    0     1 ]   (8)

Formula (7) is equivalent to:

[x, y, u, v]ᵀ = φαn⁻¹·[x′, y′, u′, v′]ᵀ, i.e. x = x′/αn + (1 − 1/αn)·u, y = y′/αn + (1 − 1/αn)·v, u = u′, v = v′ (9)

According to formula (9), formula (2) can be rewritten as:

Eαn·F(x′, y′) = (1/(αn²F²))∬ LF(x′/αn + u·(1 − 1/αn), y′/αn + v·(1 − 1/αn), u, v) du dv (10)

By changing the value of αn, the position of the image plane is changed, and refocused images at different spatial depths are obtained.
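For illustration, the shift-and-sum form of formula (10) can be sketched in NumPy. This is a minimal sketch, not the patent's implementation: integer np.roll shifts stand in for the sub-pixel interpolation a real refocusing pipeline would use, and the function and parameter names are assumptions.

```python
import numpy as np

def refocus(L, alpha, u_coords, v_coords):
    """Nearest-pixel shift-and-sum sketch of digital refocusing (formula (10)).

    L        -- 4D light field of shape (U, V, X, Y)
    alpha    -- refocus parameter alpha_n (F' = alpha * F)
    u_coords -- aperture sample positions along u, length U
    v_coords -- aperture sample positions along v, length V
    """
    U, V, X, Y = L.shape
    out = np.zeros((X, Y))
    for i, u in enumerate(u_coords):
        for j, v in enumerate(v_coords):
            # formula (10) samples L_F at x'/alpha + u*(1 - 1/alpha),
            # i.e. each sub-aperture view is shifted by (1 - 1/alpha)*(u, v)
            dx = int(round((1.0 - 1.0 / alpha) * u))
            dy = int(round((1.0 - 1.0 / alpha) * v))
            out += np.roll(L[i, j], shift=(dx, dy), axis=(0, 1))
    return out / (U * V)
```

With alpha = 1 all shifts vanish and the result reduces to the plain average of the sub-aperture views, i.e. the image focused at the original plane F.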

The specific process of step 2) is as follows. According to wavelet-transform image fusion theory, the images to be fused are decomposed into a series of frequency channels by the wavelet transform, and the resulting pyramid structure is used to construct the high- and low-frequency sub-images. The process can be described as:

Hαn(i, j) = WH[Eαn·F(x, y)] (11)

Lαn(i, j) = WL[Eαn·F(x, y)] (12)

where (x, y) denotes the image coordinate system, (i, j) the wavelet-domain coordinate system, W[·] the wavelet pyramid decomposition operator, WH[·] the extraction of the high-frequency coefficients (high-frequency sub-image) after pyramid decomposition, and WL[·] the extraction of the low-frequency coefficients (low-frequency sub-image).
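As an illustration of the decomposition into low- and high-frequency sub-images, a one-level 2-D Haar transform can be sketched as follows. The Haar basis is one simple choice made here for illustration; the patent does not fix the wavelet basis in this passage.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar pyramid decomposition: returns the low-frequency
    sub-image LL (the W_L[.] part) and the three high-frequency sub-images
    (LH, HL, HH) (the W_H[.] part). Image sides must be even."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2   # local averages: low-frequency sub-image
    LH = (a + b - c - d) / 2   # vertical detail
    HL = (a - b + c - d) / 2   # horizontal detail
    HH = (a - b - c + d) / 2   # diagonal detail
    return LL, (LH, HL, HH)

def haar_idwt2(LL, highs):
    """Inverse of haar_dwt2: perfect reconstruction of the image."""
    LH, HL, HH = highs
    a = (LL + LH + HL + HH) / 2
    b = (LL + LH - HL - HH) / 2
    c = (LL - LH + HL - HH) / 2
    d = (LL - LH - HL + HH) / 2
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out
```

The forward/inverse pair is exactly invertible, which is what allows step 4) to recover the fused all-focus image from the fused coefficients.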

The BL operator adopted in step 3) is the region-balanced Laplacian operator, expressed as follows:

(formula (13), the region-balanced Laplacian BL — image not reproduced)

where S×T is the size of the balancing region, with S and T restricted to odd values; s and t are the second-difference step lengths in the horizontal and vertical directions; and the weight factor (image not reproduced) grows as a point approaches the center, so nearer points contribute more to the Laplacian value and farther points contribute less. Figure 4 shows the balanced Laplacian operator for S = 5, T = 5.

The high-frequency sub-images of the wavelet transform reflect abrupt changes in image brightness, i.e., edges. The Laplacian operator can sharpen edges and lines of any orientation while remaining isotropic, and is therefore widely used for sharpness evaluation of high-frequency sub-images. Since the second derivatives of the Laplacian in the x and y directions are very likely to take opposite signs, and to fully account for the influence of surrounding points on the sharpness evaluation at the current position, the present invention proposes the region-balanced Laplacian operator, which achieves energy balancing by increasing the number and the directions of the second differences.
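Since the exact expression of formula (13) is not reproduced above, the sketch below only illustrates the described idea: absolute second differences with varying step lengths, accumulated over an S×T balancing region with weights that decay with distance from the center. The specific weight 1/(s + t) is an assumption, not the patent's weight factor.

```python
import numpy as np

def balanced_laplacian(img, S=5, T=5):
    """Sketch of a region-balanced Laplacian focus measure.

    Accumulates horizontal and vertical absolute second differences with
    step lengths s, t over an S x T region (S, T odd), weighting nearer
    points more heavily. The weight 1/(s + t) is an assumed stand-in for
    the patent's (unreproduced) weight factor."""
    img = np.asarray(img, dtype=float)
    bl = np.zeros_like(img)
    for s in range(1, S // 2 + 1):
        for t in range(1, T // 2 + 1):
            w = 1.0 / (s + t)  # assumed distance-decaying weight factor
            d2x = np.abs(2 * img - np.roll(img, s, axis=0) - np.roll(img, -s, axis=0))
            d2y = np.abs(2 * img - np.roll(img, t, axis=1) - np.roll(img, -t, axis=1))
            bl += w * (d2x + d2y)
    return bl
```

Taking absolute values before summing avoids the sign-cancellation problem of the plain Laplacian noted above, since opposite-signed second derivatives can no longer cancel.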

Considering that microlens calibration errors introduce local noise into the refocused images and that the Laplacian operator is sensitive to noise, bilateral filtering is applied as preprocessing before fusing the high-frequency sub-images. The high-frequency coefficient fusion rule of the present invention, based on the region-balanced Laplacian operator, is as follows:

(formula (14), the high-frequency fusion rule — image not reproduced)

where the N high-frequency sub-images (image not reproduced; n = 1, 2, 3, …, N) are those obtained by wavelet decomposition of the refocused images at the different spatial depths, N being the number of refocused frames participating in the all-focus fusion; D(BLαn(i, j)) denotes the difference between the balanced Laplacian values at corresponding points of any two high-frequency sub-images; max[·] and min[·] take the maximum and minimum; and HH is a user-defined threshold (HH is set to 0.1, because a difference below 0.1 means the two region-balanced Laplacian values differ so little that the difference is negligible). When the minimum difference exceeds the threshold, the high-frequency coefficient of the frame with the largest balanced Laplacian energy among the N frames is taken as the fused coefficient; when the difference is below the threshold, the fused coefficient is determined by multiplying the high-frequency coefficients of the multiple frames by a weight factor (image not reproduced).

The low-frequency coefficients obtained by the wavelet pyramid decomposition in step 2) mainly reflect the average gray-level features of the original image. The simplest way to compute the low-frequency fusion coefficients is the weighted average, but this reduces the contrast of the fused image and loses some useful information from the original images. Gradient-based methods such as the spatial frequency method and the point-sharpness operator have also been applied to the computation of the low-frequency fusion coefficients. For the fusion of the light field image low-frequency coefficients, the present invention draws on the concept of image visibility (VI), based on human visual characteristics, defined as follows:

(formula (15), image visibility VI — image not reproduced)

where P×Q is the size of the image I(i, j); the mean term (image not reproduced) is the average value of I(i, j); and γ is a visual constant with values in the range 0.6–0.7. The larger the VI value, the higher the visibility of the image.

In the fusion of the low-frequency sub-images, directly applying formula (15) yields only the VI value of the whole image, which cannot be used for region-level or pixel-level fusion of multiple images. To establish an effective evaluation index for the low-frequency coefficients, formula (15) is improved into a pixel-based image visibility function (PV), expressed as follows:

(formula (16), pixel visibility PV — image not reproduced)

where S×T is a rectangular neighborhood centered on the current pixel, with S and T restricted to odd values; s and t are the scanning step lengths in the horizontal and vertical directions within the neighborhood; and the mean term (image not reproduced) is the average gray value of the pixels in the S×T region. In the fusion of the low-frequency coefficients, the same fusion rule as for the high-frequency coefficients is used.
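The described pixel-level visibility can be sketched as follows. The exact normalization of formula (16) is not reproduced above, so the form used here — the deviation |I − mean| scaled by mean^γ and averaged over the S×T neighborhood — is an assumption based on the description.

```python
import numpy as np

def pixel_visibility(img, S=5, T=5, gamma=0.65):
    """Sketch of a pixel-level visibility (PV) measure.

    For each pixel, the mean gray value of its S x T neighborhood is taken
    and the normalized deviation |I - mean| / mean**gamma is averaged over
    that neighborhood. gamma is the visual constant (0.6-0.7); the precise
    normalization is an assumed form."""
    img = np.asarray(img, dtype=float)
    hs, ht = S // 2, T // 2
    # stack every (s, t)-shifted copy of the image covering the S x T window
    stack = np.stack([np.roll(img, (s, t), axis=(0, 1))
                      for s in range(-hs, hs + 1)
                      for t in range(-ht, ht + 1)])
    mean = stack.mean(axis=0)                 # local average gray value
    denom = np.maximum(mean, 1e-12) ** gamma  # guard against division by zero
    return (np.abs(stack - mean) / denom).mean(axis=0)
```

A flat region yields zero visibility while textured regions score higher, which is what makes the measure usable as a pixel-level sharpness index for the low-frequency coefficients.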

Finally, the fused high- and low-frequency coefficients are subjected to the inverse wavelet transform to obtain the fused all-focus image.

The wavelet domain light field all-focus image generation algorithm of the present invention has been described in detail above; the effectiveness of the algorithm is verified below with concrete examples.

Experiments were carried out on raw images captured with a Lytro light field camera. Figure 5(a) is the raw light field image; Figures 5(b), (c), and (d) are three multi-focus images at different spatial depths computed with formula (10) for α = 0.52, α = 0.78, and α = 0.98, with the focus depth moving gradually from foreground to background. Figure 5(e) is the all-focus image computed by the method of the present invention: the region framed by the red dashed line is clearly sharper than the corresponding region of (b), the region framed by the yellow dashed line is clearly sharper than the corresponding region of (c), and the region framed by the white dashed line is clearly sharper than the corresponding region of (d). The proposed algorithm can thus effectively produce an all-focus image from the raw light field image.

To evaluate the advantages of the proposed algorithm visually, three classical wavelet-based image fusion methods (Sobel, Prewitt, and Laplacian) were selected for comparison with the proposed algorithm, using three raw light field images (Flower, Forest, Zither) as experimental data.

Figures 6, 7, and 8 show the corresponding experimental results. In each figure, (a) and (b) are the refocused images of the raw light field obtained for α = 1 and α = 2, with the focus depth moving from foreground to background; (c), (d), (e), and (f) are the all-focus images obtained by the Sobel algorithm, the Prewitt algorithm, the traditional Laplacian algorithm, and the proposed algorithm. Visually, in Figure 6 the fused images of the Sobel and Prewitt algorithms are clearly less sharp than the proposed algorithm in the rectangular regions framed by dashed lines; in Figure 7 the Sobel and Prewitt results are likewise clearly less sharp in the framed regions; and in Figure 8 the plant leaves in the region framed by the dashed line are clearly less sharp in the Prewitt fusion than in the proposed algorithm. The proposed light field all-focus image fusion method therefore has a clear advantage in visual quality.

In addition, considering the limitations of human visual perception, the present invention further selects several objective evaluation indexes to assess image quality and verify the superiority of the proposed algorithm. Information entropy (E), average gradient (AG), image sharpness (FD), and edge intensity (EI) are chosen as evaluation indexes to assess the quality of the all-in-focus images obtained by the various methods in Figures 6, 7, and 8.

Here, E is a physical quantity measuring the amount of information: the larger its value, the more information the image contains. AG sensitively reflects the image's ability to render contrast in fine details; the higher its value, the stronger this ability. FD represents image sharpness; the higher its value, the sharper the image. EI reflects the edge strength of the image; the higher its value, the clearer the image edges. The corresponding evaluation results are shown in Tables 1, 2, and 3.
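For reference, the four indexes can be computed along the following lines. The patent does not reproduce its exact formulas in this text, so the definitions below (in particular the RMS-gradient form of FD and the Sobel-based EI) are common textbook variants rather than the authors' exact implementation:

```python
import numpy as np

def entropy(img):
    """Information entropy E (bits) of the 8-bit gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Average gradient AG: mean magnitude of forward differences."""
    f = img.astype(np.float64)
    gx = f[:-1, 1:] - f[:-1, :-1]
    gy = f[1:, :-1] - f[:-1, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def sharpness_fd(img):
    """Image sharpness FD: RMS gradient magnitude (assumed form)."""
    f = img.astype(np.float64)
    gx = f[:-1, 1:] - f[:-1, :-1]
    gy = f[1:, :-1] - f[:-1, :-1]
    return float(np.sqrt(np.mean(gx ** 2 + gy ** 2)))

def edge_intensity(img):
    """Edge intensity EI: mean Sobel gradient magnitude over interior pixels."""
    f = img.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = f.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = f[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```

All four indexes are "larger is better", which is why the tables below compare the fusion algorithms by raw index value.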

A comparison of the data in the tables shows that the proposed algorithm outperforms the other three traditional wavelet-transform methods on all four objective evaluation indexes, demonstrating its feasibility and effectiveness.

Table 1 Comparison of performance indexes of different fusion algorithms for Flower sample images


Algorithm             E        FD       AG       EI
Sobel algorithm       6.8676   6.8991   6.2470   66.5340
Prewitt algorithm     6.8634   6.3270   5.8326   62.6420
Laplace algorithm     6.8830   7.6837   6.8668   72.3073
Proposed algorithm    6.8896   7.8498   7.0055   73.7203

Table 2 Comparison of performance indexes of different fusion algorithms for Cucurbit sample images


Algorithm             E        FD       AG       EI
Sobel algorithm       5.7544   2.9136   2.5328   26.5157
Prewitt algorithm     5.7492   2.5766   2.2905   24.2735
Laplace algorithm     5.8011   3.5235   3.0018   31.0134
Proposed algorithm    5.8099   3.6305   3.0875   31.9033

Table 3 Comparison of performance indexes of different fusion algorithms for Zither sample images


Algorithm             E        FD       AG       EI
Sobel algorithm       6.2935   5.1865   4.4675   48.3854
Prewitt algorithm     6.2695   4.5182   4.0184   43.7566
Laplace algorithm     6.2716   5.6773   4.8649   52.4474
Proposed algorithm    6.2987   6.2987   4.9425   53.1501

The present invention completes the computation from the original light field image to the all-in-focus image, realizes all-in-focus image fusion with a wavelet-domain sharpness evaluation method, and avoids the blocking artifacts caused by traditional spatial-domain image fusion algorithms. First, the decoded 4D light field data are spatially transformed and projected to obtain the multi-focus images used for all-in-focus fusion; then each multi-focus image set undergoes wavelet decomposition and pyramid reconstruction to build high- and low-frequency sub-image sets for fusion. For fusing the wavelet high-frequency sub-images, the invention proposes a sharpness evaluation function based on a region-balanced Laplacian operator; for fusing the wavelet low-frequency sub-images, it proposes a sharpness evaluation function based on pixel visibility, thereby improving the fusion quality of the all-in-focus image. The experiments above show that, compared with traditional wavelet-transform-based algorithms, the proposed method improves the final fused image in both subjective visual quality and objective indexes.
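The pipeline summarized above (wavelet decomposition of each refocused image, per-band fusion, inverse transform) can be sketched as follows. This is a minimal illustration using PyWavelets: the low-frequency bands are simply averaged and the high-frequency bands fused by maximum absolute coefficient, standing in for the patent's PV- and BL-based selection rules:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_all_in_focus(refocused, wavelet="db2", level=2):
    """Fuse a stack of refocused images into one all-in-focus image.

    Simplified sketch of the wavelet-domain pipeline: the band
    selection rules here are generic stand-ins, not the patent's
    BL/PV sharpness measures.
    """
    decomps = [pywt.wavedec2(np.asarray(img, dtype=np.float64), wavelet, level=level)
               for img in refocused]
    # approximation (low-frequency) band: plain average as a stand-in for PV
    fused = [np.mean([d[0] for d in decomps], axis=0)]
    # detail (high-frequency) bands: keep the coefficient with max magnitude
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):  # horizontal, vertical, diagonal details
            stack = np.stack([d[lvl][b] for d in decomps])
            idx = np.abs(stack).argmax(axis=0)
            bands.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(bands))
    # inverse wavelet transform yields the fused all-in-focus image
    return pywt.waverec2(fused, wavelet)
```

With identical inputs the pipeline is a perfect round trip, which is a quick sanity check that the coefficient bookkeeping is correct.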

The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes may be made within the scope of knowledge possessed by those of ordinary skill in the art without departing from the spirit of the present invention.

Claims (2)

1. A wavelet-domain light field all-in-focus image generation method, characterized in that it is implemented according to the following steps: Step 1): decode the original light field image to obtain the 4D light field; select different αn, n=1,2,3···N, and use digital refocusing to obtain refocused images at different spatial depths
Figure FDA0003128743490000011
Step 2): compute the wavelet high- and low-frequency sub-images of each refocused image frame
Figure FDA0003128743490000012
Step 3): for the high- and low-frequency sub-images, adopt the region-balanced Laplacian (BL) operator and the pixel visibility (PV) function, respectively, as the image fusion sharpness evaluation indexes to fuse the high- and low-frequency coefficients; the BL operator is a region-balanced Laplacian operator, expressed as:
Figure FDA0003128743490000013
where S×T denotes the size of the balancing region, and S and T can only take odd values; s and t denote the second-order derivative step sizes in the horizontal and vertical directions;
Figure FDA0003128743490000014
denotes the weight factor: the closer a point lies to the center, the larger its weight factor and the greater its contribution to the Laplacian value; conversely, the farther a point lies from the center, the smaller its contribution;
The PV function is the pixel visibility function, expressed as:
Figure FDA0003128743490000015
where S×T denotes the rectangular neighborhood centered on the current pixel, and S and T can only take odd values; s and t denote the scanning step sizes in the horizontal and vertical directions within the rectangular neighborhood;
Figure FDA0003128743490000016
denotes the average gray value of the pixels in the S×T region;
Step 4): apply the inverse wavelet transform to the fused high- and low-frequency coefficients to obtain the fused all-in-focus image.
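A rough numerical sketch of the two sharpness measures in claim 1. The exact weight factor and normalization are defined in figure placeholders not reproduced in this text, so the distance-decaying weight 1/(1+d) in `balanced_laplacian` and the mean-normalized deviation in `pixel_visibility` are assumptions:

```python
import numpy as np

def modified_laplacian(img, s=1, t=1):
    """|2f - f(x-s) - f(x+s)| + |2f - f(y-t) - f(y+t)| per pixel."""
    f = img.astype(np.float64)
    h, w = f.shape
    p = np.pad(f, ((s, s), (t, t)), mode="edge")
    lx = np.abs(2 * f - p[0:h, t:t + w] - p[2 * s:2 * s + h, t:t + w])
    ly = np.abs(2 * f - p[s:s + h, 0:w] - p[s:s + h, 2 * t:2 * t + w])
    return lx + ly

def balanced_laplacian(ml, S=3, T=3):
    """BL measure: weighted sum of the Laplacian over an S×T region.

    Weight 1/(1+d) with Manhattan distance d is an assumed form of the
    patent's center-favoring weight factor.
    """
    h, w = ml.shape
    p = np.pad(ml, ((S // 2, S // 2), (T // 2, T // 2)), mode="edge")
    out = np.zeros_like(ml, dtype=np.float64)
    for i in range(S):
        for j in range(T):
            d = abs(i - S // 2) + abs(j - T // 2)
            out += p[i:i + h, j:j + w] / (1.0 + d)
    return out

def pixel_visibility(img, S=3, T=3, eps=1e-6):
    """PV measure: mean-normalized absolute deviation over an S×T region."""
    f = img.astype(np.float64)
    h, w = f.shape
    p = np.pad(f, ((S // 2, S // 2), (T // 2, T // 2)), mode="edge")
    mean = np.zeros_like(f)
    for i in range(S):
        for j in range(T):
            mean += p[i:i + h, j:j + w]
    mean /= S * T
    pv = np.zeros_like(f)
    for i in range(S):
        for j in range(T):
            pv += np.abs(p[i:i + h, j:j + w] - mean) / (mean + eps)
    return pv
```

Both maps are zero on flat regions and respond strongly near edges, which is exactly the property the fusion rule exploits when choosing between candidate coefficients.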
2. The wavelet-domain light field all-in-focus image generation method according to claim 1, characterized in that the fusion rules for the low-frequency and high-frequency coefficients are the same; taking high-frequency coefficient fusion as an example, the rule is:
Figure FDA0003128743490000017
where
Figure FDA0003128743490000018
denotes the high-frequency sub-image obtained by wavelet decomposition of each refocused image at a different spatial depth, n=1,2,3···N, where N is the number of refocused image frames participating in all-in-focus image fusion;
Figure FDA0003128743490000019
denotes the difference between the balanced Laplacian values at corresponding points of any two high-frequency sub-images; max[·] and min[·] are the maximum and minimum operations; HH is a user-defined threshold: when the minimum of the differences exceeds the threshold, the high-frequency coefficient of the frame with the largest balanced Laplacian energy among the N frames is taken as the fusion coefficient; when the difference is below the threshold, the final fusion coefficient is determined by multiplying the high-frequency coefficients of the multiple frames by a weight factor, where the weight factor is
Figure FDA0003128743490000021
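The selection rule of claim 2 can be sketched as follows. The per-pixel sharpness maps (e.g. BL values for a high-frequency band) are assumed to be precomputed, and the BL-proportional blending weight is an assumption, since the patent defines the weight factor only in a figure placeholder not reproduced here:

```python
import numpy as np

def fuse_high_freq(sub_images, bl_maps, H=0.05):
    """Fuse N high-frequency sub-images per the threshold rule (sketch).

    Where the sharpness measures of the candidate frames are well
    separated (min pairwise gap > H), take the coefficient of the frame
    with the largest measure; otherwise blend the coefficients with
    measure-proportional weights (assumed form of the weight factor).
    """
    subs = np.stack([np.asarray(s, dtype=np.float64) for s in sub_images])  # (N,h,w)
    bls = np.stack([np.asarray(b, dtype=np.float64) for b in bl_maps])      # (N,h,w)
    # winner-take-all branch: coefficient with max sharpness measure
    best = bls.argmax(axis=0)
    winner = np.take_along_axis(subs, best[None], axis=0)[0]
    # weighted-blend branch: measure-proportional weights
    w = bls / (bls.sum(axis=0, keepdims=True) + 1e-12)
    blend = (w * subs).sum(axis=0)
    # per-pixel minimum gap between sorted sharpness measures
    srt = np.sort(bls, axis=0)
    min_gap = np.diff(srt, axis=0).min(axis=0)
    return np.where(min_gap > H, winner, blend)
```

The same rule is applied to the low-frequency band with the PV measure in place of BL, which is why the claim states that the two fusion rules are identical in form.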
CN201811259275.1A 2018-10-26 2018-10-26 Wavelet Domain Light Field All-Focus Image Generation Algorithm Active CN109447930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811259275.1A CN109447930B (en) 2018-10-26 2018-10-26 Wavelet Domain Light Field All-Focus Image Generation Algorithm


Publications (2)

Publication Number Publication Date
CN109447930A CN109447930A (en) 2019-03-08
CN109447930B true CN109447930B (en) 2021-08-20

Family

ID=65547793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811259275.1A Active CN109447930B (en) 2018-10-26 2018-10-26 Wavelet Domain Light Field All-Focus Image Generation Algorithm

Country Status (1)

Country Link
CN (1) CN109447930B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330757B (en) * 2019-08-05 2022-11-29 复旦大学 Complementary color wavelet measurement for evaluating color image automatic focusing definition
CN110662014B (en) * 2019-09-25 2020-10-09 江南大学 A method for 3D display of 4D data of light field camera with large depth of field
CN111145134B (en) * 2019-12-24 2022-04-19 太原科技大学 Block effect-based microlens light field camera full-focus image generation algorithm
CN112132771B (en) * 2020-11-02 2022-05-27 西北工业大学 Multi-focus image fusion method based on light field imaging
CN112801913A (en) * 2021-02-07 2021-05-14 佛山中纺联检验技术服务有限公司 Method for solving field depth limitation of microscope
CN113487526B (en) * 2021-06-04 2023-08-25 湖北工业大学 Multi-focus image fusion method for improving focus definition measurement by combining high-low frequency coefficients
CN116847209B (en) * 2023-08-29 2023-11-03 中国测绘科学研究院 Log-Gabor and wavelet-based light field full-focusing image generation method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5952957A (en) * 1998-05-01 1999-09-14 The United States Of America As Represented By The Secretary Of The Navy Wavelet transform of super-resolutions based on radar and infrared sensor fusion
CN101877125A (en) * 2009-12-25 2010-11-03 北京航空航天大学 An Image Fusion Processing Method Based on Statistical Signals in Wavelet Domain
CN108537756A (en) * 2018-04-12 2018-09-14 大连理工大学 Single image to the fog method based on image co-registration
CN108581869A (en) * 2018-03-16 2018-09-28 深圳市策维科技有限公司 A kind of camera module alignment methods


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multifocus image fusion scheme using focused region detection; Y. Chai; Optics Communications; 2011-09-01; full text *
Research on wavelet transform image fusion algorithm based on regional sharpness; Ye Ming; Journal of Electronic Measurement and Instrumentation; 2015-09-30; full text *

Also Published As

Publication number Publication date
CN109447930A (en) 2019-03-08

Similar Documents

Publication Publication Date Title
CN109447930B (en) Wavelet Domain Light Field All-Focus Image Generation Algorithm
Yoon et al. Light-field image super-resolution using convolutional neural network
CN109754377B (en) A Multi-Exposure Image Fusion Method
CN104217404B (en) Haze sky video image clearness processing method and its device
Rajkumar et al. A comparative analysis on image quality assessment for real time satellite images
CN107203985B (en) An End-to-End Deep Learning Framework for Multi-exposure Image Fusion
CN113379661B (en) Dual-branch convolutional neural network device for infrared and visible light image fusion
CN111369466B (en) Image distortion correction enhancement method of convolutional neural network based on deformable convolution
CN105959684A (en) Stereo image quality evaluation method based on binocular fusion
Kodama et al. Efficient reconstruction of all-in-focus images through shifted pinholes from multi-focus images for dense light field synthesis and rendering
CN109978774B (en) Denoising fusion method and device for multi-frame continuous equal exposure images
CN107993208A (en) 2018-05-04 A non-local total variation image restoration method based on sparse overlapping group prior constraints
CN104363369A (en) Image restoration method and device for optical field camera
Wang et al. A graph-based joint bilateral approach for depth enhancement
CN109544487A (en) 2019-03-29 An infrared image enhancement method based on convolutional neural networks
CN111145134A (en) Algorithm for all-focus image generation of microlens light field camera based on block effect
CN108447028A (en) Underwater image quality improving method based on multi-scale fusion
CN116847209B (en) Log-Gabor and wavelet-based light field full-focusing image generation method and system
CN109949256B (en) Astronomical image fusion method based on Fourier transform
Singh et al. Weighted least squares based detail enhanced exposure fusion
CN112686829B (en) 4D light field full focusing image acquisition method based on angle information
Liu et al. Eaf-wgan: Enhanced alignment fusion-wasserstein generative adversarial network for turbulent image restoration
CN111429368B (en) A Multi-exposure Image Fusion Method with Adaptive Detail Enhancement and Ghost Elimination
CN112651911A (en) High dynamic range imaging generation method based on polarization image
Ye et al. Lfienet: Light field image enhancement network by fusing exposures of lf-dslr image pairs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant