WO2018171008A1 - A highlight region repair method based on light field images - Google Patents


Info

Publication number
WO2018171008A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
highlight
point
light field
unsaturated
Prior art date
Application number
PCT/CN2017/083307
Other languages
English (en)
French (fr)
Inventor
王好谦
许晨雪
王兴政
方璐
张永兵
戴琼海
Original Assignee
深圳市未来媒体技术研究院
清华大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市未来媒体技术研究院, 清华大学深圳研究生院
Publication of WO2018171008A1 publication Critical patent/WO2018171008A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal

Definitions

  • The invention relates to the field of computer vision and digital image processing, and in particular to a method for repairing highlight regions based on light field images.
  • Highlight, also known as specular reflection.
  • In computer vision and pattern recognition, image highlights pose difficulties and challenges for many applications.
  • Highlights are in fact a very common phenomenon in real scenes: they are changes in the color and brightness of an object's surface caused by illumination varying with viewing angle, reflecting the optical reflection characteristics of the surface.
  • In digital images, highlight pixels tend to have high brightness and thus mask the color, contours, and texture of the object's surface.
  • Saturated highlights directly cause the loss of local image information, so highlights are usually regarded as image defects.
  • Many algorithms in computer vision, computer graphics, and pattern recognition assume that object surfaces contain only diffuse reflection, ignoring highlights or treating them as noise or outliers.
  • Image segmentation algorithms, for example, usually assume that surface brightness varies uniformly or smoothly, while stereo matching, object recognition, and tracking algorithms attempt to match pixels across images of the same or similar scenes captured under different conditions, and therefore require the color and brightness of object surfaces to remain as consistent as possible across shooting conditions. Using these algorithms on images containing specular reflections can therefore lead to significant errors.
  • Most objects in the real world exhibit both diffuse and specular reflection on their surfaces.
  • To ensure that images can be used by traditional computer vision, pattern recognition, and other algorithms, it is essential to accurately detect highlights and recover the original image information they mask.
  • The light field camera incorporates a microlens array in front of the sensor, recording the angle and position of every ray reaching the imaging plane in a single exposure and completely characterizing the four-dimensional light field.
  • Because a light field image carries four-dimensional light field information (two spatial and two angular dimensions), viewpoints can be changed and images digitally refocused in post-processing; and the defining characteristic of specular reflection is precisely that illumination causes the color and brightness of an object's surface to vary with viewing angle. The rich light information recorded by light field imaging therefore provides effective help for restoring highlight regions.
  • The main object of the present invention is to address the deficiencies of the prior art by providing a method for repairing highlight regions based on light field images.
  • To this end, the present invention adopts the following technical solution:
  • A multi-viewpoint-based highlight image repair method, characterized in that the method comprises:
  • A1: acquiring a four-dimensional light field image and a corresponding depth image;
  • A2: extracting the central-viewpoint image from the four-dimensional light field image, initially determining the spatial-domain coordinates of the highlight target points, refocusing the four-dimensional light field image according to the input depth image, obtaining the angular-domain characteristics of the highlight target points, and dividing them into saturated highlight points and unsaturated highlight points;
  • A3: performing intrinsic image decomposition on the image of one or more viewpoints, obtaining the intrinsic reflectance of the image, and finding the intrinsic reflectance information corresponding to each highlight target point;
  • A4: for unsaturated highlight points, separating the diffuse component using local-region characteristics across viewpoints and repairing the unsaturated highlight points in combination with the intrinsic reflectance information determined in step A3;
  • A5: for saturated highlight points, propagating the diffuse components of neighboring pixels and repairing the saturated highlight points in combination with the intrinsic reflectance information determined in step A3.
  • In step A2, highlight points are detected and classified.
  • A brightness threshold is used to find the spatial-domain coordinates of the highlight target points under the central viewpoint.
  • The depth image is used to refocus the light field image, finding the corresponding pixel of each highlight target point under every viewpoint as the pixel set of that point; the variance of the RGB values of the pixels in the set is computed, and if the variance is below a set threshold the point is classified as a saturated highlight point; if the variance exceeds the threshold, the point is classified as an unsaturated highlight point.
  • In step A3, intrinsic image decomposition is used to separate the influence of illumination from one or more viewpoint images, obtaining relatively stable intrinsic reflectance.
  • In step A3, a global texture constraint is added to the intrinsic image decomposition algorithm, and the intrinsic reflectance information of the highlight region is recovered from pixels that are not adjacent but share the same texture characteristics.
  • In step A4, the original four-dimensional light field image I is used to initialize the light field image I_d(x, y, u, v) after unsaturated highlight repair; the diffuse component D_m separated from local-region characteristics across viewpoints and the intrinsic reflectance D_i of the corresponding unsaturated highlight point are then combined according to set weights to repair the diffuse information of the unsaturated highlight points, as follows:
  • I_d(x, y, u, v) = w_m D_m(x, y, u, v) + w_i D_i(x, y, u, v)
  • In step A4, the diffuse component is separated from local-region characteristics across viewpoints.
  • For each unsaturated highlight point, the pixel set of that point under the different viewpoints is divided by a clustering algorithm into two classes, a combined diffuse-plus-specular class and a diffuse-only class; the two class centers M_1 and M_2 and a confidence value are computed, and with confidence and neighborhood-window processing the specular component is subtracted from the light field image to obtain the diffuse component D_m.
  • In step A5, the light field image I_d after unsaturated highlight repair is used to initialize the light field image I_r(x, y, u, v) after saturated highlight repair; for each saturated highlight point, the weighted sum D_n of the neighborhood diffuse components and the intrinsic reflectance D_i of the corresponding saturated highlight point are combined according to set weights to repair the color information of the saturated highlight point, as follows:
  • I_r(x, y, u, v) = w_n D_n(x, y, u, v) + w_i D_i(x, y, u, v)
  • where m is the rank, from 1 to k, of the pixels in Φ ordered by increasing distance to the saturated pixel p, (x_m, y_m) are the spatial-domain coordinates of the m-th closest pixel to p in Φ, and weight_m is the weight of the m-th closest pixel to p in Φ.
  • the four-dimensional light field image may be acquired by using a multi-viewpoint imaging device, and the multi-viewpoint imaging device includes a camera array or a light field camera.
  • The invention divides highlight scene points into saturated and unsaturated highlight points and, for each type, combines their color, intensity, and intrinsic reflectance under different viewpoints, applying a corresponding repair method.
  • The invention can improve the quality of images of glossy surfaces captured by light field cameras such as the Lytro or by camera arrays with small baselines, restoring their original texture and color features; applied to 3D reconstruction, image segmentation, and related fields, it can effectively improve the quality of scene 3D reconstruction and the accuracy of image segmentation.
  • FIG. 1 is a flow chart of an embodiment of a method for repairing a highlight region based on a light field image according to the present invention.
  • In one embodiment, a method for repairing highlight regions based on light field images proceeds as follows:
  • Step A1: input a four-dimensional light field image (which may be acquired with a multi-viewpoint imaging device such as a camera array or a light field camera) and a corresponding depth image.
  • Step A2: extract the central-viewpoint image from the four-dimensional light field image, initially determine the spatial-domain coordinates of the highlight target points, refocus the light field image according to the known depth image, obtain the angular-domain characteristics of the highlight target points, and divide them into saturated and unsaturated highlight points.
  • Step A3: perform intrinsic image decomposition on the image of one or more viewpoints, obtain the intrinsic reflectance of the image, and find the intrinsic reflectance information corresponding to each highlight target point.
  • Step A4: for unsaturated highlight points, separate the diffuse component using local-region characteristics across viewpoints and repair them with the intrinsic reflectance determined in A3.
  • Step A5: for saturated highlight points, propagate the diffuse components of neighboring pixels and repair them with the intrinsic reflectance determined in A3.
  • In a specific implementation, the operation may proceed as follows. Note that the specific methods mentioned below (the intrinsic image decomposition algorithm, depth estimation jointly using defocus and stereo viewpoint matching, the weighted KNN algorithm, and so on) are merely illustrative examples; the scope of the invention is not limited to these methods.
  • Step A1: input a four-dimensional light field image (which may be acquired with a multi-viewpoint imaging device such as a camera array or a light field camera).
  • The corresponding depth image may be obtained by passive depth estimation (stereo viewpoint matching, defocus methods, etc.) or active depth measurement (Kinect, etc.).
  • Step A2: for the detection and classification of highlight points, a brightness threshold may be applied to the central-viewpoint image to find the spatial-domain coordinates of the highlight target points under the central viewpoint. Under the central viewpoint, if the brightness of a pixel exceeds h_thres, it is marked as a highlight target point.
  • The depth value of the highlight target point is taken from the depth map obtained in the previous step; the light field image is refocused to find the corresponding pixel of the highlight target point under every viewpoint, forming the pixel set of that point, and the variance of the RGB values of the pixels in the set is computed.
  • If the variance is below a set threshold var_thres, the point is classified as a saturated highlight point; if the variance exceeds the threshold, the point is classified as an unsaturated highlight point.
  • Saturated highlight points are strong highlights under all viewpoints, and their diffuse information is almost completely lost.
  • The intensity of unsaturated highlight points varies greatly across viewpoints, with different combinations of diffuse and specular components.
  • In an ordinary image, highlight pixels are a small fraction of the total; processing the whole image would incur very high computation and time complexity. The purpose of this step is to reduce the computation of the highlight repair steps by processing only the detected highlight pixels, which also keeps the rest of the image stable.
  • Step A3: perform intrinsic image decomposition on the image of one or more viewpoints, separating the influence of illumination to obtain relatively stable intrinsic reflectance and finding the intrinsic reflectance information corresponding to each highlight target point, which serves as one of the cues for recovering highlight region information.
  • Taking intrinsic image decomposition of the central viewpoint as an example: for the central-viewpoint image I_c, the intensity I_c(p) of pixel p can be expressed as the product of its intrinsic reflectance R_c(p) and intrinsic shading S_c(p), i.e. I_c(p) = R_c(p) × S_c(p); taking logarithms on both sides gives i_c(p) = r_c(p) + s_c(p).
  • i_c(p), r_c(p), and s_c(p) respectively denote the logarithms of I_c(p), R_c(p), and S_c(p).
  • The present invention adds a Retinex local constraint, a global texture constraint, and an absolute scale constraint to the intrinsic image decomposition, turning the problem into the minimization of an objective function.
  • Thanks to the global texture constraint, pixels that are not adjacent, or even far apart, but share the same texture characteristics are assumed to share the same reflectance, and their diffuse information is recovered as one of the cues for the subsequent steps.
  • Step A4: first, the original four-dimensional light field image I is used to initialize the light field image I_d(x, y, u, v) after unsaturated highlight repair, and the unsaturated highlight points are then given a diffuse repair.
  • The recovery of unsaturated highlight information combines two cues: the diffuse component D_m separated from local-region characteristics across viewpoints, and the intrinsic reflectance D_i of the corresponding unsaturated point obtained in A3, combined with set weights:
  • I_d(x, y, u, v) = w_m D_m(x, y, u, v) + w_i D_i(x, y, u, v)
  • where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the unsaturated highlight point, and w_m and w_i are set weights.
  • To separate the diffuse component from local-region characteristics across viewpoints, for each unsaturated highlight point the pixel set of that point under the different viewpoints is divided by a clustering algorithm into two classes, a combined diffuse-plus-specular class and a diffuse-only class, and the two class centers M_1 and M_2 and a confidence value are computed. With confidence and neighborhood-window processing, the specular component at the corresponding position is subtracted from the light field image to obtain the diffuse component D_m(x, y, u, v).
  • where R is the average intra-class distance, β_0 is a parameter controlling the brightness factor, β_1 a parameter controlling the distance between the two class centers, β_2 a parameter controlling classification accuracy, and |·| denotes the modulus operation.
  • As for the intrinsic reflectance D_i of the unsaturated region: since in this example intrinsic image decomposition is performed only on the central-viewpoint image, D_i may be obtained by copying the intrinsic reflectance of the central-viewpoint pixels to the corresponding pixels under each viewpoint, giving D_i(x, y, u, v).
  • Step A5: first, the light field image I_d after unsaturated highlight repair is used to initialize the light field image I_r(x, y, u, v) after saturated highlight repair.
  • Two cues are combined according to set weights: the weighted sum of the neighborhood diffuse components, and the intrinsic reflectance obtained in step A3.
  • The weighted sum of the diffuse components of neighboring pixels may be implemented with a weighted KNN algorithm.
  • For a saturated pixel p, find the k pixels nearest to its spatial-domain coordinates (x, y) that are not saturated highlights (i.e. non-highlight pixels or unsaturated highlight pixels recovered in the previous step) to form a pixel set Φ, and repair the saturated highlight pixel according to the following formula, where (x, y) are the spatial-domain coordinates of the saturated highlight point:
  • I_r(x, y, u, v) = w_n D_n(x, y, u, v) + w_i D_i(x, y, u, v)
  • where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the saturated highlight point, w_n and w_i are set weights, D_n is the weighted sum of the neighborhood diffuse components, and D_i is the intrinsic reflectance of the corresponding saturated region.
  • m is the rank, from 1 to k, of the pixels in Φ ordered by increasing distance to p; (x_m, y_m) are the spatial-domain coordinates of the m-th closest pixel to p in Φ; weight_m is the weight of the m-th closest pixel to p in Φ. The closer the pixel, the larger the weight, so the diffuse information of pixels closer to p has a greater influence on that of p.
  • The computation of weight_m is not limited to the above embodiment.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A multi-viewpoint-based highlight image repair method, comprising: step A1: acquiring a four-dimensional light field image and a corresponding depth image; step A2: extracting the central-viewpoint image from the four-dimensional light field image, initially determining the spatial-domain coordinates of the highlight target points, refocusing the four-dimensional light field image according to the input depth image, obtaining the angular-domain characteristics of the highlight target points, and dividing them into saturated highlight points and unsaturated highlight points; step A3: performing intrinsic image decomposition on the image of one or more viewpoints, obtaining the intrinsic reflectance of the image, and finding the intrinsic reflectance information corresponding to each highlight target point; step A4: for unsaturated highlight points, separating the diffuse component using local-region characteristics across viewpoints and repairing the unsaturated highlight points in combination with the intrinsic reflectance information determined in step A3; step A5: for saturated highlight points, propagating the diffuse components of neighboring pixels and repairing the saturated highlight points in combination with the intrinsic reflectance information determined in step A3. The method can improve the quality of images containing glossy surfaces.

Description

A Highlight Region Repair Method Based on Light Field Images
TECHNICAL FIELD
The present invention relates to the fields of computer vision and digital image processing, and in particular to a method for repairing highlight regions based on light field images.
BACKGROUND
Highlight, also known as specular reflection. In the fields of computer vision and pattern recognition, image highlights pose difficulties and challenges for many applications. Highlights are in fact a very common phenomenon in real scenes: they are changes in the color and brightness of an object's surface caused by illumination varying with viewing angle, and they reflect the optical reflection characteristics of the surface. In digital images, highlight pixels tend to have high brightness and therefore mask the color, contours, and texture of the object's surface; saturated highlights directly cause the loss of local image information, so highlights are usually regarded as image defects. At present, many algorithms in computer vision, computer graphics, and pattern recognition assume that object surfaces contain only diffuse reflection, ignoring highlights or treating them as noise or outliers. Image segmentation algorithms, for example, usually assume that surface brightness varies uniformly or smoothly, while stereo viewpoint matching, object recognition, and tracking algorithms attempt to match pixels across images of the same or similar scenes captured under different conditions, and therefore require the color and brightness of object surfaces to remain as consistent as possible across shooting conditions. Using these algorithms on images containing specular reflections can thus lead to significant errors. However, the surfaces of the vast majority of real-world objects exhibit both diffuse and specular reflection. To accurately extract the color, contour, and texture information of objects from digital images and ensure that the images can be used by traditional computer vision and pattern recognition algorithms, it is essential to accurately detect highlights and recover the original image information they mask.
In recent years, with the development of computational photography and light field imaging, a series of light field acquisition systems (camera arrays, moving cameras, light field cameras) have emerged, providing new solutions for many applications in computer vision and image processing. A conventional camera records only one viewpoint and can focus at only one depth, so most of the scene's light information is lost. A light field camera adds a microlens array in front of the sensor and can record, in a single exposure, both the angle and the position of every ray reaching the imaging plane, fully characterizing the four-dimensional light field. Because a light field image carries four-dimensional light field information (two spatial and two angular dimensions), viewpoints can be changed and images digitally refocused in post-processing; and the defining characteristic of specular reflection is precisely that illumination causes the color and brightness of an object's surface to vary with viewing angle. Exploiting the rich light information recorded by light field imaging therefore provides effective help for restoring highlight regions.
SUMMARY OF THE INVENTION
The main object of the present invention is to address the deficiencies of the prior art by providing a method for repairing highlight regions based on light field images.
To achieve the above object, the present invention adopts the following technical solution:
A multi-viewpoint-based highlight image repair method, characterized in that the method comprises:
A1: acquiring a four-dimensional light field image and a corresponding depth image;
A2: extracting the central-viewpoint image from the four-dimensional light field image, initially determining the spatial-domain coordinates of the highlight target points, refocusing the four-dimensional light field image according to the input depth image, obtaining the angular-domain characteristics of the highlight target points, and dividing them into saturated highlight points and unsaturated highlight points;
A3: performing intrinsic image decomposition on the image of one or more viewpoints, obtaining the intrinsic reflectance of the image, and finding the intrinsic reflectance information corresponding to each highlight target point;
A4: for unsaturated highlight points, separating the diffuse component using local-region characteristics across viewpoints and repairing the unsaturated highlight points in combination with the intrinsic reflectance information determined in step A3;
A5: for saturated highlight points, propagating the diffuse components of neighboring pixels and repairing the saturated highlight points in combination with the intrinsic reflectance information determined in step A3.
Further:
In step A1, an image depth estimation method or an active depth measurement method is used to extract the depth image of the scene, and the two-plane model of the four-dimensional light field is used to represent the light field: the light field image is I = I(x, y, u, v), where (u, v) are the angular-domain coordinates of a ray and (x, y) are its spatial-domain coordinates.
In step A2, highlight points are detected and classified: for the central-viewpoint image, a brightness threshold is used to find the spatial-domain coordinates of the highlight target points under the central viewpoint; the light field image is refocused using the depth image to find the corresponding pixel of each highlight target point under every viewpoint, forming the pixel set of that point; the variance of the RGB values of the pixels in the set is computed, and if the variance is below a set threshold the point is classified as a saturated highlight point; if the variance exceeds the threshold, the point is classified as an unsaturated highlight point.
In step A3, intrinsic image decomposition is used to separate the influence of illumination from one or more viewpoint images, obtaining relatively stable intrinsic reflectance.
In step A3, a global texture constraint is added to the intrinsic image decomposition algorithm, and the intrinsic reflectance information of the highlight region is recovered from pixels that are not adjacent but share the same texture characteristics.
In step A4, the original four-dimensional light field image I is used to initialize the light field image I_d(x, y, u, v) after unsaturated highlight repair; the diffuse component D_m separated from local-region characteristics across viewpoints and the intrinsic reflectance D_i of the corresponding unsaturated highlight point are then combined according to set weights to repair the diffuse information of the unsaturated highlight points, as follows:
I_d(x, y, u, v) = w_m D_m(x, y, u, v) + w_i D_i(x, y, u, v)
where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the unsaturated highlight point, and w_m and w_i are set weights.
In step A4, the diffuse component is separated from local-region characteristics across viewpoints: for each unsaturated highlight point, the pixel set of that point under the different viewpoints is divided by a clustering algorithm into two classes, a combined diffuse-plus-specular class and a diffuse-only class; the two class centers M_1 and M_2 and a confidence value are computed, and with confidence and neighborhood-window processing the specular component is subtracted from the light field image to obtain the diffuse component D_m.
In step A5, the light field image I_d after unsaturated highlight repair is used to initialize the light field image I_r(x, y, u, v) after saturated highlight repair; for each saturated highlight point, the weighted sum D_n of the neighborhood diffuse components and the intrinsic reflectance D_i of the corresponding saturated highlight point are combined according to set weights to repair the color information of the saturated highlight point, as follows:
I_r(x, y, u, v) = w_n D_n(x, y, u, v) + w_i D_i(x, y, u, v)
where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the saturated highlight point, and w_n and w_i are set weights. At this point the diffuse information of both unsaturated and saturated highlight points has been repaired, and the highlight-repaired light field image is output.
D_n is computed by the following formulas:
D_n(x, y, u, v) = Σ_{m=1,...,k} weight_m · I_d(x_m, y_m, u, v)
weight_m = 1/2^m
where m is the rank, from 1 to k, of the pixels in Φ ordered by increasing distance to the saturated pixel p, (x_m, y_m) are the spatial-domain coordinates of the m-th closest pixel to p in Φ, and weight_m is the weight of the m-th closest pixel to p in Φ.
In step A1, the four-dimensional light field image may be acquired with a multi-viewpoint imaging device, and the multi-viewpoint imaging device comprises a camera array or a light field camera.
Beneficial effects of the invention:
The present invention divides highlight scene points into saturated and unsaturated highlight points and, for each type, combines their color, intensity, and intrinsic reflectance under different viewpoints, applying a corresponding repair method. The invention can improve the quality of images of glossy surfaces captured by light field cameras such as the Lytro or by camera arrays with small baselines, restoring their original texture and color features; applied to 3D reconstruction, image segmentation, and related fields, it can effectively improve the quality of scene 3D reconstruction and the accuracy of image segmentation.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart of an embodiment of the highlight region repair method based on light field images according to the present invention.
DETAILED DESCRIPTION
Embodiments of the present invention are described in detail below. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its application.
Referring to FIG. 1, in one embodiment, the highlight region repair method based on light field images proposed by the present invention proceeds as follows:
Step A1: input a four-dimensional light field image (which may be acquired with a multi-viewpoint imaging device such as a camera array or a light field camera) and the corresponding depth image.
Step A2: extract the central-viewpoint image from the four-dimensional light field image, initially determine the spatial-domain coordinates of the highlight target points, refocus the light field image according to the known depth image, obtain the angular-domain characteristics of the highlight target points, and divide them into saturated and unsaturated highlight points.
Step A3: perform intrinsic image decomposition on the image of one or more viewpoints, obtain the intrinsic reflectance of the image, and find the intrinsic reflectance information corresponding to each highlight target point.
Step A4: for unsaturated highlight points, separate the diffuse component using local-region characteristics across viewpoints and repair them in combination with the intrinsic reflectance determined in A3.
Step A5: for saturated highlight points, propagate the diffuse components of neighboring pixels and repair them in combination with the intrinsic reflectance determined in A3.
In a specific implementation, the operation may proceed as follows. Note that the specific methods mentioned below (the intrinsic image decomposition algorithm, depth estimation jointly using the defocus method and stereo viewpoint matching, the weighted KNN algorithm, and so on) are merely illustrative examples; the scope of the invention is not limited to these methods.
Step A1: input a four-dimensional light field image (which may be acquired with a multi-viewpoint imaging device such as a camera array or a light field camera). The corresponding depth image may be obtained by passive depth estimation (stereo viewpoint matching, defocus methods, etc.) or active depth measurement (Kinect, etc.). Taking the Lytro light field camera as an example, after preprocessing a single exposure with microlens center calibration, demosaicing, and denoising, the light field image I = I(x, y, u, v) is obtained, where (u, v) are the angular-domain coordinates of a ray and (x, y) are its spatial-domain coordinates; depth may be obtained by jointly using the defocus method and stereo viewpoint matching with MRF optimization.
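Under the two-plane model I = I(x, y, u, v), the decoded light field can be held as a 4-D (plus color) array, and the central-viewpoint image used in the next step is simply the middle sub-aperture view. A minimal sketch follows; the array layout and sizes are illustrative assumptions, not prescribed by the patent:

```python
import numpy as np

# Toy 4D light field: U x V angular samples, each an X x Y x 3 RGB view.
# Axis order (u, v, x, y, channel) is an illustrative choice.
U, V, X, Y = 5, 5, 64, 64
rng = np.random.default_rng(0)
I = rng.random((U, V, X, Y, 3))   # I(x, y, u, v) in the two-plane model

# The central viewpoint is the sub-aperture image at the middle (u, v).
u_c, v_c = U // 2, V // 2
central_view = I[u_c, v_c]        # shape (X, Y, 3)
```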
Step A2: to detect and classify highlight points, a brightness threshold may be applied to the central-viewpoint image to find the spatial-domain coordinates of the highlight target points under the central viewpoint. Under the central viewpoint, if the brightness of a pixel exceeds h_thres, it is marked as a highlight target point. The depth value of the highlight target point is taken from the depth map obtained in the previous step, the light field image is refocused, and the corresponding pixel of the highlight target point under every viewpoint is found, forming the pixel set of that point. The variance of the RGB values of the pixels in the set is computed: if the variance is below a set threshold var_thres, the point is classified as a saturated highlight point; if the variance exceeds the threshold, the point is classified as an unsaturated highlight point. Saturated highlight points are strong highlights under all viewpoints, and their diffuse information is almost completely lost; the color intensity of unsaturated highlight points varies greatly across viewpoints, with different combinations of diffuse and specular components. In an ordinary image, highlight pixels are a small fraction of the total; processing the whole image would incur very high computation and time complexity. The purpose of this step is to reduce the computation of the highlight repair steps by processing only the detected highlight pixels, which also keeps the rest of the image stable.
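The detection-and-classification rule of step A2 (mark a central-view pixel brighter than h_thres as a highlight target, then threshold the RGB variance of its per-viewpoint pixel set at var_thres) can be sketched as below. The threshold values and the way the pixel set is passed in are placeholder assumptions; in the method itself the set is gathered by refocusing with the depth map:

```python
import numpy as np

def classify_highlight(pixel_set_rgb, brightness, h_thres=0.9, var_thres=0.01):
    """pixel_set_rgb: (n_views, 3) RGB values of one scene point across viewpoints;
    brightness: the point's brightness in the central view.
    Returns 'none', 'saturated', or 'unsaturated' per the rule in step A2."""
    if brightness <= h_thres:      # not a highlight target point
        return "none"
    var = pixel_set_rgb.var()      # variance of RGB values over the set
    # Low variance: strong highlight at every viewpoint -> saturated highlight.
    return "saturated" if var < var_thres else "unsaturated"

# A point equally bright under all views is saturated (variance 0):
sat = classify_highlight(np.full((9, 3), 0.98), brightness=0.98)
# A point whose intensity swings widely across views is unsaturated:
unsat = classify_highlight(np.linspace(0.2, 1.0, 27).reshape(9, 3),
                           brightness=0.95)
```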
Step A3: perform intrinsic image decomposition on the image of one or more viewpoints, separate the influence of illumination, obtain relatively stable intrinsic reflectance, and find the intrinsic reflectance information corresponding to each highlight target point, which serves as one of the cues for recovering highlight region information. Taking intrinsic image decomposition of the central viewpoint as an example:
For the central-viewpoint image I_c, the intensity I_c(p) of pixel p can be expressed as the product of its intrinsic reflectance R_c(p) and intrinsic shading S_c(p), i.e. I_c(p) = R_c(p) × S_c(p). Taking logarithms on both sides gives:
i_c(p) = r_c(p) + s_c(p)
where i_c(p), r_c(p), and s_c(p) denote the logarithms of I_c(p), R_c(p), and S_c(p), respectively.
Because the intrinsic reflectance and shading of an image are subject to multiple constraints, the present invention adds a Retinex local constraint, a global texture constraint, and an absolute scale constraint to the intrinsic image decomposition, turning the problem into the minimization of an objective function. Thanks to the global texture constraint, pixels that are not adjacent, or even far apart, but share the same texture characteristics can be assumed to share the same reflectance, allowing their diffuse information to be recovered as one of the cues for the subsequent steps.
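The image-formation model of step A3, I_c(p) = R_c(p) × S_c(p), and its log-domain form i_c(p) = r_c(p) + s_c(p) can be checked numerically; the pixel values below are arbitrary:

```python
import math

# I_c(p) = R_c(p) * S_c(p): intensity = intrinsic reflectance * intrinsic shading.
R_c, S_c = 0.6, 0.3
I_c = R_c * S_c

# Taking logarithms turns the product into a sum: i_c(p) = r_c(p) + s_c(p).
i_c, r_c, s_c = math.log(I_c), math.log(R_c), math.log(S_c)
assert abs(i_c - (r_c + s_c)) < 1e-12
```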
Step A4: first, the original four-dimensional light field image I is used to initialize the light field image I_d(x, y, u, v) after unsaturated highlight repair; the unsaturated highlight points are then given a diffuse repair. The recovery of unsaturated highlight information combines two cues: the diffuse component D_m separated from local-region characteristics across viewpoints, and the intrinsic reflectance D_i of the corresponding unsaturated point obtained in A3, combined with set weights as follows:
I_d(x, y, u, v) = w_m D_m(x, y, u, v) + w_i D_i(x, y, u, v)
where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the unsaturated highlight point, and w_m and w_i are set weights.
To separate the diffuse component from local-region characteristics across viewpoints, for each unsaturated highlight point the pixel set of that point under the different viewpoints is divided by a clustering algorithm into two classes, a combined diffuse-plus-specular class and a diffuse-only class, and the two class centers M_1 and M_2 and a confidence value are computed. With confidence and neighborhood-window processing, the specular component at the corresponding position is subtracted from the light field image to obtain the diffuse component D_m(x, y, u, v).
Here M_1(x, y) is the color value of the center of the combined diffuse-plus-specular class and M_2(x, y) is the color value of the center of the diffuse-only class. The confidence Conf(x, y) is computed as follows:
Figure PCTCN2017083307-appb-000001
where R is the average intra-class distance, β_0 is a parameter controlling the brightness factor, β_1 a parameter controlling the distance between the two class centers, β_2 a parameter controlling classification accuracy, and |·| denotes the modulus operation.
To improve the robustness of the algorithm, within an m×m search window in spatial-domain coordinates centered on the current unsaturated highlight point (x, y), a weight w is introduced for |M_1(x′, y′) - M_2(x′, y′)| (which may be regarded as the specular component) of each pixel, and the diffuse component D_m is obtained as follows:
D_m(x, y, u, v) = I(x, y, u, v) - <w × |M_1(x′, y′) - M_2(x′, y′)|>
w = e^(-γ / (Conf(x′, y′) × |I(x, y, u, v) - M_1(x′, y′)|))
where (x′, y′) ranges over the pixels in the search window of pixel (x, y), <·> denotes expectation, Conf(x′, y′) is the confidence of that pixel, and the parameter γ may be set to the constant 1.
As for the intrinsic reflectance D_i of the unsaturated region: since in this example intrinsic image decomposition is performed only on the central-viewpoint image, D_i may be obtained by copying the intrinsic reflectance of the central-viewpoint pixels to the corresponding pixels under each viewpoint, giving D_i(x, y, u, v).
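The recombination that closes step A4, I_d = w_m·D_m + w_i·D_i at unsaturated highlight rays, is a per-pixel weighted blend. A sketch using a boolean mask follows; the weight values w_m = 0.7, w_i = 0.3 are illustrative assumptions, as the patent only requires set weights:

```python
import numpy as np

def repair_unsaturated(I, mask, D_m, D_i, w_m=0.7, w_i=0.3):
    """I: light field array; mask: boolean array marking unsaturated highlight rays;
    D_m: multi-view diffuse estimate; D_i: intrinsic-reflectance cue (aligned with I).
    Returns I_d initialized from I, with I_d = w_m*D_m + w_i*D_i at masked rays."""
    I_d = I.copy()                      # initialization from the original light field
    I_d[mask] = w_m * D_m[mask] + w_i * D_i[mask]
    return I_d

I = np.full((4, 4), 1.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = True                       # one unsaturated highlight ray
D_m = np.full((4, 4), 0.4)
D_i = np.full((4, 4), 0.2)
I_d = repair_unsaturated(I, mask, D_m, D_i)
# Only the masked ray changes: 0.7*0.4 + 0.3*0.2 = 0.34; the rest stays 1.0.
```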
Step A5: first, the light field image I_d after unsaturated highlight repair is used to initialize the light field image I_r(x, y, u, v) after saturated highlight repair. The repair of saturated highlight points combines two cues according to set weights: the weighted sum of the neighborhood diffuse components, and the intrinsic reflectance obtained in step A3. The weighted sum of the diffuse components of neighboring pixels may be implemented with a weighted KNN algorithm. For example, for a saturated pixel p, find the k pixels nearest to its spatial-domain coordinates (x, y) that are not saturated highlights (i.e. non-highlight pixels or unsaturated highlight pixels recovered in the previous step) to form a pixel set Φ, and repair the saturated highlight pixel according to the following formula, where (x, y) are the spatial-domain coordinates of the saturated highlight point:
I_r(x, y, u, v) = w_n D_n(x, y, u, v) + w_i D_i(x, y, u, v)
where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the saturated highlight point, w_n and w_i are set weights, D_n is the weighted sum of the neighborhood diffuse components, and D_i is the intrinsic reflectance of the corresponding saturated region:
D_n(x, y, u, v) = Σ_{m=1,...,k} weight_m · I_d(x_m, y_m, u, v)
weight_m = 1/2^m
where m is the rank, from 1 to k, of the pixels in Φ ordered by increasing distance to p, (x_m, y_m) are the spatial-domain coordinates of the m-th closest pixel to p in Φ, and weight_m is the weight of the m-th closest pixel to p in Φ; the closer the pixel, the larger the weight, so the diffuse information of pixels closer to p has a greater influence on the diffuse information of p. The computation of weight_m is not limited to the above embodiment.
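The neighborhood term D_n of step A5 is a rank-weighted KNN sum with weight_m = 1/2^m over the k nearest non-saturated pixels Φ. A sketch for one angular coordinate follows; the Euclidean distance metric and the sample values are illustrative assumptions:

```python
import numpy as np

def neighborhood_diffuse(p, candidates, I_d_values, k=3):
    """p: (x, y) of the saturated pixel; candidates: (n, 2) spatial coordinates of
    non-saturated pixels; I_d_values: their repaired diffuse values at one (u, v).
    D_n = sum over m=1..k of (1/2**m) * I_d(x_m, y_m), ranked by distance to p."""
    dists = np.linalg.norm(candidates - np.asarray(p, float), axis=1)
    order = np.argsort(dists)[:k]                 # the k closest pixels form Phi
    weights = 1.0 / 2.0 ** np.arange(1, k + 1)    # weight_m = 1/2^m
    return float(np.sum(weights * I_d_values[order]))

cands = np.array([[0, 1], [0, 2], [5, 5], [0, 3]])
vals = np.array([0.8, 0.6, 0.1, 0.4])
D_n = neighborhood_diffuse((0, 0), cands, vals, k=3)
# Distances 1, 2, ~7.07, 3 -> Phi holds the pixels valued 0.8, 0.6, 0.4:
# D_n = 0.8/2 + 0.6/4 + 0.4/8 = 0.6
```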
The foregoing is a further detailed description of the present invention in connection with specific/preferred embodiments, and the specific implementation of the invention shall not be considered limited to this description. For those of ordinary skill in the art to which the invention belongs, several substitutions or variations of the described embodiments may be made without departing from the concept of the invention, and all such substitutions or variations shall be regarded as falling within the protection scope of the invention.

Claims (10)

  1. A multi-viewpoint-based highlight image repair method, characterized in that the method comprises:
    A1: acquiring a four-dimensional light field image and a corresponding depth image;
    A2: extracting the central-viewpoint image from the four-dimensional light field image, initially determining the spatial-domain coordinates of the highlight target points, refocusing the four-dimensional light field image according to the input depth image, obtaining the angular-domain characteristics of the highlight target points, and dividing them into saturated highlight points and unsaturated highlight points;
    A3: performing intrinsic image decomposition on the image of one or more viewpoints, obtaining the intrinsic reflectance of the image, and finding the intrinsic reflectance information corresponding to each highlight target point;
    A4: for unsaturated highlight points, separating the diffuse component using local-region characteristics across viewpoints and repairing the unsaturated highlight points in combination with the intrinsic reflectance information determined in step A3;
    A5: for saturated highlight points, propagating the diffuse components of neighboring pixels and repairing the saturated highlight points in combination with the intrinsic reflectance information determined in step A3.
  2. The method of claim 1, characterized in that in step A1 an image depth estimation method or an active depth measurement method is used to extract the depth image of the scene, and the two-plane model of the four-dimensional light field is used to represent the light field: the light field image is I = I(x, y, u, v), where (u, v) are the angular-domain coordinates of a ray and (x, y) are its spatial-domain coordinates.
  3. The method of claim 1, characterized in that in step A2 highlight points are detected and classified: for the central-viewpoint image, a brightness threshold is used to find the spatial-domain coordinates of the highlight target points under the central viewpoint; the light field image is refocused using the depth image to find the corresponding pixel of each highlight target point under every viewpoint, forming the pixel set of that point; the variance of the RGB values of the pixels in the set is computed, and if the variance is below a set threshold the point is classified as a saturated highlight point; if the variance exceeds the threshold, the point is classified as an unsaturated highlight point.
  4. The method of claim 1, characterized in that in step A3 intrinsic image decomposition is used to separate the influence of illumination from one or more viewpoint images, obtaining relatively stable intrinsic reflectance.
  5. The method of claim 4, characterized in that in step A3 a global texture constraint is added to the intrinsic image decomposition algorithm, and the intrinsic reflectance information of the highlight region is recovered from pixels that are not adjacent but share the same texture characteristics.
  6. The method of claim 1, characterized in that in step A4 the original four-dimensional light field image I is used to initialize the light field image I_d(x, y, u, v) after unsaturated highlight repair, and the diffuse component D_m separated from local-region characteristics across viewpoints and the intrinsic reflectance D_i of the corresponding unsaturated highlight point are combined according to set weights to repair the diffuse information of the unsaturated highlight points, as follows:
    I_d(x, y, u, v) = w_m D_m(x, y, u, v) + w_i D_i(x, y, u, v)
    where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the unsaturated highlight point, and w_m and w_i are set weights.
  7. The method of claim 6, characterized in that in step A4 the diffuse component is separated from local-region characteristics across viewpoints: for each unsaturated highlight point, the pixel set of that point under the different viewpoints is divided by a clustering algorithm into two classes, a combined diffuse-plus-specular class and a diffuse-only class; the two class centers M_1 and M_2 and a confidence value are computed, and with confidence and neighborhood-window processing the specular component is subtracted from the light field image to obtain the diffuse component D_m.
  8. The method of claim 1, characterized in that in step A5 the light field image I_d after unsaturated highlight repair is used to initialize the light field image I_r(x, y, u, v) after saturated highlight repair; for each saturated highlight point, the weighted sum D_n of the neighborhood diffuse components and the intrinsic reflectance D_i of the corresponding saturated highlight point are combined according to set weights to repair the color information of the saturated highlight point, as follows:
    I_r(x, y, u, v) = w_n D_n(x, y, u, v) + w_i D_i(x, y, u, v)
    where (u, v) are the angular-domain coordinates of a ray, (x, y) are the spatial-domain coordinates of the saturated highlight point, and w_n and w_i are set weights. At this point the diffuse information of both unsaturated and saturated highlight points has been repaired, and the highlight-repaired light field image is output.
  9. The method of claim 8, characterized in that D_n is computed by the following formulas:
    D_n(x, y, u, v) = Σ_{m=1,...,k} weight_m · I_d(x_m, y_m, u, v)
    weight_m = 1/2^m
    where m is the rank, from 1 to k, of the pixels in Φ ordered by increasing distance to the saturated pixel p, (x_m, y_m) are the spatial-domain coordinates of the m-th closest pixel to p in Φ, and weight_m is the weight of the m-th closest pixel to p in Φ.
  10. The method of claim 1, characterized in that in step A1 the four-dimensional light field image may be acquired with a multi-viewpoint imaging device, and the multi-viewpoint imaging device comprises a camera array or a light field camera.
PCT/CN2017/083307 2017-03-21 2017-05-05 一种基于光场图像的高光区域修复方法 WO2018171008A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710170590.6 2017-03-21
CN201710170590.6A CN107103589B (zh) 2017-03-21 2017-03-21 一种基于光场图像的高光区域修复方法

Publications (1)

Publication Number Publication Date
WO2018171008A1 true WO2018171008A1 (zh) 2018-09-27

Family

ID=59675905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/083307 WO2018171008A1 (zh) 2017-03-21 2017-05-05 一种基于光场图像的高光区域修复方法

Country Status (2)

Country Link
CN (1) CN107103589B (zh)
WO (1) WO2018171008A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117474921A (zh) * 2023-12-27 2024-01-30 中国科学院长春光学精密机械与物理研究所 基于镜面高光去除的抗噪光场深度测量方法、系统及介质

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109377524B (zh) * 2018-10-29 2021-02-23 山东师范大学 一种单幅图像深度恢复方法和系统
CN109859125B (zh) * 2019-01-14 2022-10-21 广东工业大学 基于形态学检测与小波变换的图像高光修复方法
CN109961488A (zh) * 2019-03-25 2019-07-02 中国银联股份有限公司 一种实物图像生成方法及装置
CN109974625B (zh) * 2019-04-08 2021-02-09 四川大学 一种基于色相优化灰度的彩色物体结构光三维测量方法
CN113472997B (zh) * 2020-03-31 2022-11-04 北京小米移动软件有限公司 图像处理方法及装置、移动终端及存储介质
CN112419185B (zh) * 2020-11-20 2021-07-06 湖北工业大学 基于光场迭代的精确高反光去除方法
CN112465940B (zh) * 2020-11-25 2021-10-15 北京字跳网络技术有限公司 图像渲染方法、装置、电子设备及存储介质
CN112837243B (zh) * 2021-03-05 2023-05-30 华侨大学 联合整体与局部信息的阴道镜图像高光消除的方法及装置
CN114066777B (zh) * 2021-11-30 2022-07-15 安庆师范大学 一种光场图像角度重建方法

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120057040A1 (en) * 2010-05-11 2012-03-08 Byung Kwan Park Apparatus and method for processing light field data using a mask with an attenuation pattern
CN105023249A (zh) * 2015-06-26 2015-11-04 清华大学深圳研究生院 基于光场的高光图像修复方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8842912B2 (en) * 2011-05-19 2014-09-23 Foveon, Inc. Method for processing highlights and saturated regions in a digital image
CN106127818B (zh) * 2016-06-30 2019-10-11 珠海金山网络游戏科技有限公司 一种基于单幅图像的材质外观获取系统及方法

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120057040A1 (en) * 2010-05-11 2012-03-08 Byung Kwan Park Apparatus and method for processing light field data using a mask with an attenuation pattern
CN105023249A (zh) * 2015-06-26 2015-11-04 清华大学深圳研究生院 基于光场的高光图像修复方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, HAOQIAN ET AL.: "Light Field Imaging Based Accurate Image Specular Highlight Remova", PLOS, vol. 11, no. 6, 2 June 2016 (2016-06-02), pages e0156173, XP055539952, Retrieved from the Internet <URL:https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0156173&type=printable> *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117474921A (zh) * 2023-12-27 2024-01-30 中国科学院长春光学精密机械与物理研究所 基于镜面高光去除的抗噪光场深度测量方法、系统及介质
CN117474921B (zh) * 2023-12-27 2024-05-07 中国科学院长春光学精密机械与物理研究所 基于镜面高光去除的抗噪光场深度测量方法、系统及介质

Also Published As

Publication number Publication date
CN107103589B (zh) 2019-09-06
CN107103589A (zh) 2017-08-29

Similar Documents

Publication Publication Date Title
WO2018171008A1 (zh) 一种基于光场图像的高光区域修复方法
Feng et al. Local background enclosure for RGB-D salient object detection
JP6438403B2 (ja) 結合された深度キューに基づく平面視画像からの深度マップの生成
Yang et al. Polarimetric dense monocular slam
JP5197279B2 (ja) コンピュータによって実施されるシーン内を移動している物体の3d位置を追跡する方法
WO2017076106A1 (zh) 图像的拼接方法和装置
US9338437B2 (en) Apparatus and method for reconstructing high density three-dimensional image
US20200320727A1 (en) Method and apparatus for generating a three-dimensional model
JP6515039B2 (ja) 連続的な撮影画像に映り込む平面物体の法線ベクトルを算出するプログラム、装置及び方法
CN108154491B (zh) 一种图像反光消除方法
WO2018133119A1 (zh) 基于深度相机进行室内完整场景三维重建的方法及系统
Cherian et al. Accurate 3D ground plane estimation from a single image
Zhao et al. Learning perspective undistortion of portraits
Wang et al. Robust color correction in stereo vision
JP6272071B2 (ja) 画像処理装置、画像処理方法及びプログラム
KR101921608B1 (ko) 깊이 정보 생성 장치 및 방법
Shen et al. Depth map enhancement method based on joint bilateral filter
KR101825218B1 (ko) 깊이 정보 생성 장치 및 방법
JP7312026B2 (ja) 画像処理装置、画像処理方法およびプログラム
Hu et al. Color image guided locality regularized representation for Kinect depth holes filling
CN113723432B (zh) 一种基于深度学习的智能识别、定位追踪的方法及系统
JP2023065296A (ja) 平面検出装置及び方法
CN113225484B (zh) 快速获取屏蔽非目标前景的高清图片的方法及装置
Tomioka et al. Depth map estimation using census transform for light field cameras
Im et al. Robust depth estimation using auto-exposure bracketing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17902170

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17902170

Country of ref document: EP

Kind code of ref document: A1